venue | paper_content | prompt | format | review
---|---|---|---|---
ICLR | Title
Efficient, probabilistic analysis of combinatorial neural codes
Abstract
Artificial and biological neural networks (ANNs and BNNs) can encode inputs in the form of combinations of individual neurons’ activities. These combinatorial neural codes present a computational challenge for direct and efficient analysis due to their high dimensionality and often large volumes of data. Here we improve the computational complexity – from factorial to quadratic time – of direct algebraic methods previously applied to small examples and apply them to large neural codes generated by experiments. These methods provide a novel and efficient way of probing algebraic, geometric, and topological characteristics of combinatorial neural codes and provide insights into how such characteristics are related to learning and experience in neural networks. We introduce a procedure to perform hypothesis testing on the intrinsic features of neural codes using information geometry. We then apply these methods to neural activities from an ANN for image classification and a BNN for 2D navigation to, without observing any inputs or outputs, estimate the structure and dimensionality of the stimulus or task space. Additionally, we demonstrate how an ANN varies its internal representations across network depth and during learning.
1 INTRODUCTION
To understand the world around them, organisms’ biological neural networks (BNNs) encode information about their environment in the dynamics of spikes varying over time and space. Artificial neural networks (ANNs) use similar principles, except that instead of transmitting spikes they usually transmit a real-valued number in the range [0, 1], and their dynamics typically advance in a step-wise, discrete manner. Both BNNs and ANNs adjust their internal structures, e.g., connection strengths between neurons, to improve their performance in learned tasks. This leads to encoding input data into internal representations, which they then transform into task-relevant outputs, e.g., motor commands. Combinatorial neural coding schemes, i.e., encoding information in the collective activity of neurons (also called ‘population coding’), are widespread in BNNs (Averbeck et al., 2006; Osborne et al., 2008; Schneidman et al., 2011; Froudarakis et al., 2014; Bush et al., 2015; Stevens, 2018; Beyeler et al., 2019; Villafranca-Faus et al., 2021; Burns et al., 2022; Hannagan et al., 2021) and have long been utilized in ANNs, e.g., in associative memory networks (Little, 1974; Hopfield, 1982; Tsodyks & Feigel'man, 1988; Adachi & Aihara, 1997; Krotov & Hopfield, 2016).
Advances in mathematical neuroscience (Curto & Itskov, 2008; Curto et al., 2019) have led to the development of analyses designed to understand the combinatorial properties of neural codes and their mapping to the stimulus space. Such analyses were initially inspired by the combinatorial coding seen in place cells (Moser et al., 2008), where neurons represent physical space in the form of ensemble and individual activity (Brown & Alex, 2006; Fenton et al., 2008). Place fields, the physical spatial areas encoded by place cells, can be arranged such that they span multiple spatial dimensions, e.g., 3D navigation space in bats (Yartsev & Ulanovsky, 2013). They can also encode ‘social place’ (Omer et al., 2018), the location of conspecifics. Just as these spatial and social dimensions of place (external stimuli) may be represented by combinatorial coding, so too may other dimensions of external stimuli, such as in vision (Fujii & Ito, 1996; Panzeri & Schultz, 2001; Averbeck et al., 2006; Froudarakis et al., 2014; Fetz, 1997).
In place cells, the term receptive field (RF) or place field may intuitively be thought of as a physical place. In the context of vision, for example, we may think of RFs less spatially and more abstractly as
representing stimuli features or dimensions along which neurons may respond more or less strongly, e.g., features such as orientation, spatial frequency, or motion (Niell & Stryker, 2008; Juavinett & Callaway, 2015). Two neurons which become activated simultaneously upon visual stimuli moving to the right of the visual field may be said to share the RF of general rightward motion, for example. We may also think of RFs even more abstractly as dimensions in general conceptual spaces, such as the reward–action space of a task (Constantinescu et al., 2016), visual attributes of characters or icons (Aronov et al., 2017), olfactory space (Bao et al., 2019), the relative positions people occupy in a social hierarchy (Park et al., 2021), and even cognition and behaviour more generally (Bellmund et al., 2018).
In the method described in Curto et al. (2019), tools from algebra are used to extract the combinatorial structure of neural codes. The types of neural codes under study are sets of binary vectors C ⊂ F_2^n, where there are n neurons in states 0 (off) and 1 (on). The central object of this method is the canonical form of a neural code, CF(C). The canonical form may be analysed topologically, geometrically, and algebraically to infer features such as the potential convexity of the receptive fields (RFs) which gave rise to the code, or the minimum number of dimensions those RFs must span in real space. Such analyses are possible because CF(C) captures the minimal essential set of combinatorial relations describing all RF relationships implied by C. RF relationships (whether and how RFs intersect or are contained in one another in stimulus space) are considered to be implied by C by assuming that if two neurons become activated or spike simultaneously, they likely receive common external input in the form of common stimulus features or common RFs. Given sufficient exploration of the stimulus space, it is possible to infer topological features of the global stimulus space by only observing C (Curto & Itskov, 2008; Mulas & Tran, 2020). To the best of our knowledge, these methods have only been developed and used for small examples of BNNs. Here we apply them to larger BNNs and to ANNs (by considering the co-activation of neurons during single stimulus trials).
Despite the power and broad applicability of these methods (Curto & Itskov, 2008; Curto et al., 2019; Mulas & Tran, 2020), two major problems impede their usefulness: (1) the computational time complexity of the algorithms to generate CF(C) is factorial in the number of codewords, O(n·m!)¹, limiting their use in large, real-world datasets; and (2) there is no tolerance for noise in C, nor consideration given towards the stochastic or probabilistic nature of neural firing. We address these problems by: (1) introducing a novel method for improving the time complexity to quadratic in the number of neurons, O(n²), by computing the generators of CF(C) and using these to answer the same questions; and (2) using information geometry (Nakahara & Amari, 2002; Amari, 2016) to perform hypothesis testing on the presence/absence of inferred geometric or topological properties of the stimulus or task space. As a proof of concept, we apply these new methods to data from a simulated BNN for spatial navigation and a simple ANN for visual classification, both of which may contain thousands of codewords.
2 PRELIMINARIES
Before describing our own technical developments and improvements, we first outline some of the key mathematical concepts and objects which we use and expand upon in later sections. For more detailed information, we recommend referring to Curto & Itskov (2008); Curto et al. (2019).
2.1 COMBINATORIAL NEURAL CODES
Let F_2 = {0, 1}, [n] = {1, 2, …, n}, and F_2^n = {a_1 a_2 ⋯ a_n | a_i ∈ F_2 for all i}. A codeword is an element of F_2^n. For a given codeword c = c_1 c_2 ⋯ c_n, we define its support as supp(c) = {i ∈ [n] | c_i ≠ 0}, which can be interpreted as the unique set of active neurons in a discrete time bin corresponding to that codeword. A combinatorial neural code, or a code, is a subset of F_2^n. The support of a code C is defined as supp(C) = {S ⊆ [n] | S = supp(c) for some c ∈ C}, which can be interpreted as all sets of active neurons represented by the corresponding codewords in C.
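To make these objects concrete, the following minimal Python sketch (ours, not the authors' code) builds supports and the support of a code from binary codewords; the toy code is illustrative.

def support(codeword):
    """Return supp(c): the indices of active neurons (value 1) in a codeword."""
    return frozenset(i for i, bit in enumerate(codeword) if bit == 1)

def code_support(code):
    """Return supp(C): the supports of all codewords in the code C."""
    return {support(c) for c in code}

# Example: a code on n = 3 neurons.
C = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)]
print(code_support(C))    # the supports {}, {0}, {0, 1} and {1, 2}, printed as frozensets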
Let ∆ be a subset of 2^[n]. The subset ∆ is an abstract simplicial complex if for any S ∈ ∆ and any S′ ⊆ S, we have S′ ∈ ∆. In other words, ∆ ⊆ 2^[n] is an abstract simplicial
¹ Here n is the number of neurons and m is the number of codewords. In most datasets of interest n ≪ m.
complex if it is closed under inclusion. So, the simplicial complex for a code C can be defined as
∆(C) = {S ⊆ [n] | S ⊆ supp(c) for some c ∈ C}.
A set S in a simplicial complex ∆ is referred to as an (|S| − 1)-simplex. For instance, a set with cardinality 1 is called a 0-simplex (geometrically, a point), a set with cardinality 2 is called a 1-simplex (geometrically, an edge), and so on. Let S be an m-simplex in ∆. Any S′ ⊆ S is called a face of S.
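As an illustration, a short sketch (ours, not from the paper) builds ∆(C) by closing codeword supports under inclusion, optionally truncated at a maximum dimension as is done later in Section 4.1.

from itertools import combinations

def simplicial_complex(code, max_dim=None):
    """Return Delta(C) as a set of frozensets (faces), truncated at max_dim if given."""
    faces = set()
    for c in code:
        supp = [i for i, bit in enumerate(c) if bit == 1]
        top = len(supp) if max_dim is None else min(len(supp), max_dim + 1)
        for k in range(top + 1):                     # faces of cardinality 0, ..., top
            for face in combinations(supp, k):
                faces.add(frozenset(face))
    return faces

C = [(1, 1, 0, 0), (0, 1, 1, 1)]
delta = simplicial_complex(C, max_dim=2)
edges = [f for f in delta if len(f) == 2]            # the 1-simplices of Delta(C)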
2.2 SIMPLICIAL COMPLEXES AND TOPOLOGY
Let C ⊆ F_2^n be a code and ∆(C) be the corresponding simplicial complex of C. From now on, we will use ∆ to denote the corresponding simplicial complex of a code C. Define ∆_m as the set of m-simplices in ∆. Define

C_m = { ∑_{S∈∆_m} α_S S | α_S ∈ F_2 for all S ∈ ∆_m }.
The set C_m forms a vector space over F_2 whose basis elements are all the m-simplices in ∆_m. Now, define the chain complex C_*(∆, F_2) to be the sequence {C_m}_{m≥0}. For any m ≥ 1, define a linear transformation ∂_m : C_m → C_{m−1}, where for any σ ∈ ∆_m, ∂_m(σ) = ∑_{i=0}^{m} σ^i, with each σ^i ∈ ∆_{m−1} a face of σ, for i = 0, …, m. Moreover, the map ∂_m extends linearly to all elements of C_m:

∂_m( ∑_{S∈∆_m} α_S S ) = ∑_{S∈∆_m} α_S ∂_m(S).
Define the m-th mod-2 homology group of ∆ as

H_m(∆, F_2) = Ker(∂_m) / Im(∂_{m+1})

for all m ≥ 1, and H_0(∆, F_2) = C_0 / Im(∂_1).

Note that H_m(∆, F_2) is also a vector space over F_2 for all m ≥ 0. The m-th mod-2 Betti number β_m(∆) of a simplicial complex ∆ is the dimension of H_m(∆, F_2); it gives the number of m-dimensional holes in the geometric realisation of ∆.
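For completeness, the following self-contained sketch (our own illustration, not the authors' implementation) computes mod-2 Betti numbers of a small complex, represented as a collection of frozensets as above, by Gaussian elimination over F_2; for large complexes a dedicated topological data analysis library would be preferable.

def gf2_rank(rows):
    """Rank over F_2; each row is an int whose bits are the entries of one matrix row."""
    basis = {}                              # leading-bit position -> reduced row
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead in basis:
                cur ^= basis[lead]          # eliminate the leading bit
            else:
                basis[lead] = cur           # new pivot
                break
    return len(basis)

def boundary_rank(simplices, faces):
    """Rank over F_2 of the boundary map sending each simplex to the sum of its faces."""
    index = {f: i for i, f in enumerate(faces)}
    rows = []
    for s in simplices:
        mask = 0
        for v in s:
            mask |= 1 << index[s - {v}]     # drop one vertex to get a face
        rows.append(mask)
    return gf2_rank(rows)

def betti(delta, m):
    """m-th mod-2 Betti number of a complex given as a collection of frozensets."""
    def k_faces(k):
        return [f for f in delta if len(f) == k + 1]
    rank_dm = boundary_rank(k_faces(m), k_faces(m - 1)) if m > 0 else 0
    rank_dm_plus_1 = boundary_rank(k_faces(m + 1), k_faces(m))
    return len(k_faces(m)) - rank_dm - rank_dm_plus_1

# Hollow triangle: one connected component and one 1-dimensional hole.
hollow = {frozenset(s) for s in ({0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2})}
print(betti(hollow, 0), betti(hollow, 1))   # 1 1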
2.3 CANONICAL FORM
Let σ and τ be subsets of [n], where σ ∩ τ = ∅. A polynomial of the form ∏_{i∈σ} x_i ∏_{j∈τ} (1 − x_j) ∈ F_2[x_1, …, x_n] is called a pseudo-monomial. In a given ideal J ⊆ F_2[x_1, …, x_n], a pseudo-monomial f in J is said to be minimal if there is no pseudo-monomial g in J with deg(g) < deg(f) such that f = gh for some h ∈ F_2[x_1, …, x_n]. For a given code C ⊆ F_2^n, we can define a neural ideal related to C as J_C = ⟨ρ_{c′} | c′ ∈ F_2^n − C⟩, where ρ_{c′} is the pseudo-monomial ∏_{i∈supp(c′)} x_i ∏_{j∉supp(c′)} (1 − x_j). The set of all minimal pseudo-monomials in J_C, denoted by CF(J_C) or simply CF(C), is called the canonical form of J_C. Moreover, it can be shown that J_C = ⟨CF(C)⟩. Therefore, the canonical form CF(C) gives a simple way to infer the RF relationships implied by all codewords in C. One way to calculate CF(C) is by using a recursive algorithm described in Curto et al. (2019). For a code C = {c_1, …, c_|C|}, this algorithm works by constructing the canonical forms CF(∅), CF({c_1}), CF({c_1, c_2}), …, CF(C) in turn. At each stage, the algorithm evaluates polynomials, checks divisibility conditions, and adds or removes polynomials from the related canonical form.
3 METHODS
Our main methodological contributions are: (1) improving the computational complexity of the analyses relying on computing CF(C) (see Algorithm 1); and (2) using information geometry to determine whether identified algebraic or topological features are statistically significant.
3.1 COMPUTING AND ANALYSING THE CANONICAL FORM’S GENERATORS
We may perform the same analyses as in Curto et al. (2019) in quadratic time by using Algorithm 1 to construct the generators of CF (C) rather than constructing CF (C) itself (as in Algorithm 2 of Curto et al. (2019)). Illustrative of this efficiency, representative experimental data with 25 neurons and 46 codewords took < 1 second to analyse on a high-end desktop PC (Intel i7 CPU and 64GB of memory), compared to 2 minutes 57 seconds using Algorithm 2 from Curto et al. (2019).
Algorithm 1 Algorithm for computing generators of CF(C)
Input: M = C ⊂ F_2^n as a patterns × neurons matrix
Initialize:
  D ← empty list ▷ Stores the monomials.
  P ← empty list ▷ Stores the mixed monomial constructor tuples (σ, τ).
  B ← empty list ▷ Stores the mixed monomial constructor tuples (τ, σ).
for each column i of M do
  for each column j of M do
    s ← ∑_k (i · j)_k
    if s < 1 then ▷ The pair i, j have disjoint receptive fields.
      append {i, j} to D
    else
      j′ ← j − 1
      b ← ∑_k (i · j′)_k
      if b = 0 then ▷ The receptive field of j is a subset of the receptive field of i.
        append (i, j) to P
        append (j, i) to B
      end if
    end if
  end for
end for
Generating desired elements of J_C is then straightforward: monomials are supersets of disjoint pairs (from D) where each pair set has one element shared with at least one other disjoint pair set in the superset; mixed monomials are all possible combinations of first (σ set) and second (τ set) elements in the tuples of P (or vice-versa for B) – we do not allocate all of these elements but instead store the set constructors; and the negative monomial appears if and only if the all-ones codeword exists (which involves a simple summing check on the columns of M).
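A self-contained numpy sketch of the pairwise scan in Algorithm 1 is given below; variable names follow the text, the toy matrix M is illustrative, and the containment check mirrors the b = 0 test above.

import numpy as np

def cf_generators(M):
    """O(n^2) pairwise scan over neuron columns of the binary patterns-by-neurons matrix M."""
    n = M.shape[1]
    D, P, B = [], [], []
    co_active = M.T @ M                  # co_active[i, j] = # patterns in which i and j both fire
    fires = M.sum(axis=0)                # fires[i] = # patterns in which neuron i fires
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if co_active[i, j] < 1:      # never co-active: disjoint receptive fields
                if i < j:
                    D.append({i, j})
            elif co_active[i, j] == fires[i]:
                P.append((i, j))         # i never fires without j: containment tuple, as in the text
                B.append((j, i))
    all_ones = bool((M.sum(axis=1) == n).any())   # negative monomial iff the all-ones codeword occurs
    return D, P, B, all_ones

M = np.array([[1, 1, 0, 0],              # three patterns over four neurons (toy data)
              [0, 1, 1, 0],
              [0, 0, 0, 1]])
print(cf_generators(M))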
3.2 INFORMATION GEOMETRY FOR COMBINATORIAL NEURAL CODES
Let N be the finite number of time bins for data of the neural activity patterns on n neurons. For any S ⊆ [n], let v(S) ∈ F_2^n be the pattern with supp(v(S)) = S, and define

P_{v(S)} = #{v(S)} / N,

i.e., the fraction of the N time bins in which the pattern v(S) occurs.
We would like to find the parameters θ = (θ_{S_1}, θ_{S_2}, …, θ_{S_{2^n−1}}), where S_i ⊆ [n], S_i ≠ ∅, and S_{2^n−1} = [n], such that the exponential family

P(x, θ) = exp( ∑_{S⊆[n], S≠∅} θ_S x_S − ψ ),

where x_S = ∏_{i∈S} x_i and ψ = −log(P_{v(∅)}), describes a neural activity pattern from the given neural activity data. We can calculate θ_S for any S ⊆ [n] with S ≠ ∅ using the formula

θ_S = log( P_{v(S)} / ( P_{v(∅)} ∏_{S′⊊S, S′≠∅} exp(θ_{S′}) ) ),

and the η-coordinates as

η_W = ∑_{S⊆[n], S⊇W} P_{v(S)}.
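The following hedged sketch (ours, not the paper's code) computes the empirical pattern probabilities P_{v(S)}, the θ-coordinates via the recursion above, and the η-coordinates for a small set of neurons; it assumes every pattern has nonzero empirical probability (or has been smoothed), and is only practical for small n, as in the |M| = 10 neuron subsets used later.

from itertools import chain, combinations
from collections import Counter
import math

def subsets(universe):
    """All subsets of an iterable, as tuples."""
    items = list(universe)
    return chain.from_iterable(combinations(items, k) for k in range(len(items) + 1))

def log_linear_coordinates(patterns, n):
    """patterns: list of binary tuples of length n, one per time bin."""
    N = len(patterns)
    counts = Counter(frozenset(i for i in range(n) if p[i]) for p in patterns)
    P = {frozenset(S): counts[frozenset(S)] / N for S in subsets(range(n))}
    theta = {}
    for S in sorted((frozenset(S) for S in subsets(range(n)) if S), key=len):
        prior = sum(theta[Sp] for Sp in map(frozenset, subsets(S)) if Sp and Sp != S)
        theta[S] = math.log(P[S]) - math.log(P[frozenset()]) - prior
    eta = {frozenset(W): sum(P[frozenset(S)] for S in subsets(range(n)) if set(W) <= set(S))
           for W in subsets(range(n))}
    return P, theta, eta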
Given a θ-coordinate, we can calculate the associated matrix G(θ) = (g^θ_{A,B})_{A,B⊆[n]} using the formula

g^θ_{A,B} = E_θ(X_A X_B) − η_A η_B
          = ∑_{W⊇A∪B} e^{−ψ} ∏_{W′⊆W, W′≠∅} e^{θ_{W′}} − η_A η_B
          = ∑_{W⊇A∪B} e^{−ψ} e^{∑_{W′⊆W, W′≠∅} θ_{W′}} − η_A η_B,
where ψ = −log(P_{v(∅)}).

Example 3.1. Let n = 4, A = {1, 2}, and B = {2, 4}. Then

g^θ_{A,B} = ∑_{W⊇{1,2,4}} e^{−ψ} ∏_{W′⊆W, W′≠∅} e^{θ_{W′}} − η_{{1,2}} η_{{2,4}}
          = e^{−ψ} ∏_{W′⊆{1,2,4}, W′≠∅} e^{θ_{W′}} + e^{−ψ} ∏_{W′⊆{1,2,3,4}, W′≠∅} e^{θ_{W′}} − η_{{1,2}} η_{{2,4}}
          = ( e^{θ_{{1}}+θ_{{2}}+θ_{{4}}+θ_{{1,2}}+θ_{{1,4}}+θ_{{2,4}}+θ_{{1,2,4}}−ψ}
            + e^{θ_{{1}}+θ_{{2}}+θ_{{3}}+θ_{{4}}+θ_{{1,2}}+θ_{{1,3}}+θ_{{1,4}}+θ_{{2,3}}+θ_{{2,4}}+θ_{{3,4}}+θ_{{1,2,3}}+θ_{{1,2,4}}+θ_{{1,3,4}}+θ_{{2,3,4}}+θ_{{1,2,3,4}}−ψ} )
            − η_{{1,2}} η_{{2,4}}.
3.3 HYPOTHESIS TESTING FOR ALGEBRAIC AND TOPOLOGICAL FEATURES
Using the previous sections, we can now perform hypothesis testing on specific RF relationships or topological features such as holes.
Given P_{v(S)} for all S ⊆ [n] as in the previous subsection, we can calculate η_W for all W ⊆ [n], where η_W = E(∏_{i∈W} x_i) = Prob{x_i = 1 for all i ∈ W}, using the following formula:

η_W = ∑_{S⊆[n], S⊇W} P_{v(S)}.
Given a set of neurons A ⊆ [n], where |A| = k, we want to test whether there is a k-th order interaction between neurons in A or not. We can do this by hypothesis testing as follows.
1. Calculate θ_S and η_W for all S, W ⊆ [n].
2. Specify a mixed coordinate for P(x; η, θ) based on A as
   ζ^A_k = (η^A_{k−}; θ^A_k),
   where η^A_{k−} = (η_H)_{H⊆[n], |H|≤k} and θ^A_k = (θ_H)_{H⊆[n], |H|>k}.
3. Set the corresponding null-hypothesis coordinate as
   ζ^0_k = (η^0_{k−}; θ^0_k),
   where η^0_A = 0, η^0_H equals η_H from the previous step for all H ≠ A, and θ^0_k equals θ^A_k from the previous step.
4. Determine the matrix G(θ) = (g^θ_{A,B})_{A,B⊆[n]} related to the θ-coordinate using the formula in Section 3.2. Arrange the rows and columns of G(θ) in the block form
   G(θ) = [ A_θ  B_θ ; B_θ^T  D_θ ],
   where A_θ is the submatrix of G(θ) with row and column indices from all H ⊆ [n] with |H| ≤ k and D_θ is the submatrix of G(θ) with row and column indices from all H ⊆ [n] with |H| > k.
5. Determine the matrix G(η) = (g^η_{A,B})_{A,B⊆[n]} related to the η-coordinate using G(η) = G(θ)^{−1}. We can write G(η) in the block form
   G(η) = [ A_η  B_η ; B_η^T  D_η ],
   where A_η and D_η are the submatrices of G(η) with row and column indices |H| ≤ k and |H| > k, respectively.
6. Determine the matrix G(ζ^A_k) related to the mixed coordinate ζ^A_k as
   G(ζ^A_k) = [ A_{ζ^A_k}  O ; O  D_{ζ^A_k} ],
   where A_{ζ^A_k} = A_θ^{−1} and D_{ζ^A_k} = D_η^{−1}.
7. Calculate the test statistic as follows:

   λ = 2 ∑_{i=1}^{N} log( P(x_i; η^A_{k−}, θ^0_k) / P(x_i; η^A_{k−}, θ^A_k) )
     ≈ 2N Ẽ[ log( P(x; η^A_{k−}, θ^0_k) / P(x; η^A_{k−}, θ^A_k) ) ]
     ≈ 2N D[ P(x; η^A_{k−}, θ^0_k); P(x; η^A_{k−}, θ^A_k) ]
     ≈ N g^{ζ^A_k}_{AA} (η^0_A − η_A)²,

   where g^{ζ^A_k}_{AA} is the corresponding entry of the G(ζ^A_k) matrix.
8. Fix a level of significance α and find the value χ²_α(1) (the chi-square value with significance level α and one degree of freedom) from the χ² look-up table.
9. Compare λ with χ² = max{χ²_α(1), 1 − χ²_α(1)}:
   • If λ ≥ χ², there is a significant interaction between the neurons in A (reject the null hypothesis).
   • Otherwise, there is no significant interaction between the neurons in A (accept the null hypothesis).
Since the size of G scales as 2^n, we use a subset M of all neurons, where A ⊂ M and |M| = 10. We pick a set A relevant to the feature we want to test the significance of, and choose random neurons (without replacement) not already in A for the remaining elements of M, repeating the test until we exhaust all neurons. We then correct for multiple comparisons and use α = 0.05 to detect whether there is a significant interaction in A.
The choice of A depends on which feature we wish to analyse. When analysing whether two neurons' RFs are disjoint (a monomial relationship in CF(C)), we set A as those two neurons. When analysing whether the RF of i is contained within the RF of j (a mixed monomial relationship in CF(C)), we first set A as those two neurons, and then set A as i with a random set of neurons (repeating this at least 5 times, with different random sets, and correcting for multiple comparisons). When analysing whether a hole is significant, for every dimension m where β_m(∆, F_2) > 0 we test all possible sets A ⊂ M ⊂ [n] which close the hole. If any test closes the hole, there is no hole, whereas if no test closes the hole, there is a hole.
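A sketch of the testing loop just described is given below; it is our own illustration: `interaction_statistic` is a hypothetical routine standing in for steps 1–7 on a neuron subset M, the Bonferroni correction and the requirement that every subset exceed the threshold are our assumptions, and we use the standard χ² critical value rather than the max expression of step 9.

import random
from scipy.stats import chi2

def test_interaction(A, n_neurons, interaction_statistic, alpha=0.05, subset_size=10):
    """Embed A in random subsets M of size subset_size until all neurons are used."""
    remaining = [i for i in range(n_neurons) if i not in A]
    random.shuffle(remaining)
    lambdas = []
    fill_size = subset_size - len(A)
    while remaining:
        fill, remaining = remaining[:fill_size], remaining[fill_size:]
        M = sorted(set(A) | set(fill))
        lambdas.append(interaction_statistic(A, M))      # lambda from steps 1-7 on subset M
    corrected_alpha = alpha / max(len(lambdas), 1)        # Bonferroni correction (assumed)
    threshold = chi2.ppf(1.0 - corrected_alpha, df=1)     # chi-square critical value, 1 d.o.f.
    return all(lam >= threshold for lam in lambdas)       # significant interaction in A?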
4 APPLICATIONS
4.1 SPATIAL NAVIGATION IN BNNS
Using the RatInABox simulation package (George et al., 2022), we created simple 2D navigation environments with 0, 1, 2, or 3 holes in the first dimension. We used a random cover of 40 place cells modelled using Gaussians for the probability of firing and geodesic receptive field geometries. Starting at a random position, we then simulated random walks governed by Ornstein-Uhlenbeck processes for 30 minutes, with parameters based on rat locomotion data in Sargolini et al. (2006). We constructed a combinatorial neural code C using a window size of 10 ms, allowing for up to 3,000 unique codewords. We constructed ∆(C) up to dimension 2 and calculated β_1(∆, F_2), with the hypothesis that β_1 would be equal to the respective number of holes in the environment. Figure 1 shows an example of a single place cell and part of a simulated trajectory for an environment with β_1 = 1, and a geometric realisation of ∆(C) constructed after a 30-minute random walk. Table 1 shows the number of statistically significant holes found after different durations of the trajectories for environments with different topologies. Although after 10 minutes of a random walk some holes were occasionally detected, in all cases after 20 minutes all holes in the environment were detected consistently. There were a large number of monomials across all conditions (all simulations had > 1000) due to the covering nature of the RF arrangements. There were also a small number (all simulations had < 5) of mixed monomials (RFs found to be subsets, significantly so, of other RFs).
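A minimal sketch of the codeword construction used here (not the simulation pipeline itself): binned spike times are converted into binary codewords with a 10 ms window, a neuron being 'on' in a window if it spiked at least once.

import numpy as np

def spikes_to_code(spike_times, duration, window=0.010):
    """spike_times: list of 1D arrays of spike times (in seconds), one per neuron."""
    edges = np.arange(0.0, duration + window, window)
    M = np.stack([np.histogram(st, bins=edges)[0] > 0 for st in spike_times], axis=1)
    codewords = {tuple(row) for row in M.astype(int)}     # the unique codewords in C
    return M.astype(int), codewords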
4.2 VISUAL CLASSIFICATION IN ANNS
We trained a multi-layer perceptron (MLP) to classify handwritten digits from the MNIST dataset (LeCun et al., 2010) (see Figure 2, top, for examples). The model consisted of an input layer with 784 neurons (the digit pixel values), followed by two hidden layers, each with 50 neurons using the rectified linear unit activation function and 20% dropout. The final output layer consisted of 10
neurons (corresponding to the 10 digit class labels) and used a softmax activation function. The data was split into 50,000 digits for training, 10,000 for validation, and 10,000 for testing, allowing for up to 10,000 unique codewords in our analysis. The network was trained over 10 epochs with a batch size of 32 samples. The optimiser was stochastic gradient descent (with learning rate 0.01 and momentum 0.5) and the criterion was the cross-entropy loss between the one-hot vector of the true class labels and the output layer’s activation for each sample. The MLP achieved > 96% accuracy after 10 epochs (Figure 2, middle).
After each epoch, test samples which the network did not see during training were fed through the network and the activity of all neurons in both hidden layers was recorded. The recorded activities for each hidden layer corresponding to each sample were then binarized about their means (calculated over all samples) to create a code C of size 10,000 × 50 for each layer, which we denote C1 for layer one and C2 for layer two.
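The binarization step can be sketched as follows (our illustration; `recorded_layer1` and `recorded_layer2` stand for the hidden-layer activations collected on the test samples):

import numpy as np

def binarize_activations(acts):
    """acts: (num_samples, num_neurons) array of recorded hidden-layer activations."""
    means = acts.mean(axis=0, keepdims=True)
    return (acts > means).astype(int)          # e.g. a 10,000 x 50 binary code per layer

# C1 = binarize_activations(recorded_layer1)   # activations of hidden layer one
# C2 = binarize_activations(recorded_layer2)   # activations of hidden layer two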
The codes C1 and C2 showed differences in their algebraic and geometric structures across training epochs, and also differed between themselves (Table 2). In general, C1 had more overlapping RFs
and spanned a larger number of real dimensions (assuming convexity) than C2. However, during training, we find both codes lower their dimensionality and gradually spread out their RFs to cover more of the space. This is also shown by the leftward shift between epoch 1 and 10 in the histograms of the number of co-active neurons in C2 (Figure 2, bottom).
5 DISCUSSION
We have shown it is possible to analyse the intrinsic geometry and topology of combinatorial neural codes from biological and artificial networks in an efficient and probabilistic manner. With these improved methods, we can now comfortably study codes with tens and even hundreds of thousands of codewords. We have shown how these methods can be used to better understand (with some statistical certainty) how the internal representations of external inputs within these networks can change through learning, experience, and network depth.
Neuroscientists have shown that combinatorial neural codes can occupy low-dimensional subspaces, called neural manifolds, in the covariance structure of their neural activities (Gallego et al., 2017; Feulner & Clopath, 2021). Trajectories and regions in these subspaces can correspond to task cognition, perceptual classification, and movement (Cohen et al., 2020; Chung & Abbott, 2021). For example, Gardner et al. (2022) show that the activity of populations of hundreds of grid cells within single modules of medial entorhinal cortex (a brain area partly responsible for navigation) occupies positions on a toroidal manifold. Positions on this manifold correspond to positions in the 2D space in which the animal is navigating.
These findings might lead us to believe combinatorial neural codes are intrinsically low-dimensional despite being embedded in the high-dimensional combinatorial space of neural activity. However, theoretical (Bartolo et al., 2020) and experimental (Rigotti et al., 2013) studies have shown that the dimensionality of these neural manifolds is influenced by, and often directly corresponds to, the dimensionality of the task or learning under study. Indeed, the low-dimensional embeddings found by Gardner et al. (2022) are predicted by the two-dimensionality of the navigation (the underlying cause of the neural activity). Mathematically-optimal combinatorial neural codes and their RFs are also related to the dimensionality of the inputs those codes are attempting to represent (Wang et al., 2013). In more naturalistic and complex tasks, maintaining high-dimensional representations in the neural code may allow for increased expressibility but lower generalisability, whereas reducing to low-dimensional representations may allow for less expressibility but higher generalisability (Fusi et al., 2016; Badre et al., 2021). High-dimensional codes are often found in recordings from BNNs when individual neurons encode for multiple input features, allowing linear read-out of a large number of complex or simple features (Fusi et al., 2016). Such neurons, for example in macaque inferotemporal cortex (Higgins et al., 2021), can also encode for very specific and independent higher-dimensional features.
This implies combinatorial neural codes can include mixtures of coding strategies which are simultaneously low- and high-dimensional. One of the key advantages of the techniques developed and applied in this study is that we can consider these different dimensionalities of coding at the same time. We don’t reduce the embedding dimensionality to perform our analysis (which would be equivalent to assuming a low-dimensional code). We also don’t try to map individual neuron responses to experimenter-known but network-unknown external, high-dimensional variables (which would be equivalent to assuming a high-dimensional code). Instead, we keep the full, original dimensionality of the data and can identify low- or high-dimensional features and response relationships at local and global levels simultaneously, all without reference to information external to the neural network. We also provide a method for testing the statistical significance of these features and relationships, again while maintaining the original high embedding dimension of the data. This allows us to avoid making any strong assumptions about dimensionality of the task, stimuli, or the corresponding neural code – instead, we let the data speak for themselves.
We do carry over some limitations from prior work, most prominently: (a) we assume joint activity of neurons corresponds to common inputs or selectivity thereof; and (b) we binarize neural signals into ‘on’ and ‘off’ states. We suggest future work now focus on mitigating these limitations by: (a) performing causal inference tests on neural co-activations; and (b) considering polynomials over larger finite fields, e.g., F_4, or extending these methods to more ‘continuous’ structures, e.g., manifolds. | 1. What is the main contribution of the paper in the field of combinatorial neural codes?
2. What are the weaknesses of the paper regarding its clarity, motivation, and problem formulation?
3. How does the reviewer assess the strengths and weaknesses of the proposed greedy algorithm?
4. What is the novelty of the paper's content compared to prior works in boolean circuit complexity?
5. How reproducible are the results of the paper's simple parametrized exponential model? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This article (at least, in the opinion of this reviewer) is entirely undecipherable. The authors begin by stating that they wish to analyze combinatorial neural codes. They then define a code to be a set of binary zero-one vectors of some length n, that is, a subset of all possible such vectors (which is obviously of size 2^n). This formulation puts the problem entirely in the domain of boolean circuit complexity, which everyone knows is a very rich and deep field, where most problems have turned out to be too hard to address with tools that humanity has at its disposal. From here on, the authors roll out a sequence of definitions (most of which are standard, such as simplicial complex etc.) that are entirely unmotivated. There is no mention of what the problem is that they wish to tackle. If it is the canonical form, what is the intuition behind it, and why would knowing the canonical form help? Since the problem formulation falls in the lap of boolean circuit complexity, one can ask: what is the smallest boolean circuit that can generate just this subset of sequences from all possible 2^m inputs where m < n? We know this is a very difficult problem. The authors then go on to describe a greedy algorithm that they claim improves the previous algorithm from exponential to quadratic time. This would be a big deal if this reviewer could first understand what the problem formulation was. Finally, they put forward a very simple parametrized exponential model that can approximate the code.
Strengths And Weaknesses
Strength: This reviewer found the paper almost unreadable, and could not identify any strengths.
Weakness: The paper does not explain what problem they wish to solve. The sequence of unmotivated definitions is mostly standard and the authors do not connect them to their problem. Finally, the exponential model is way too simple to model a subset of boolean vectors.
Clarity, Quality, Novelty And Reproducibility
The exposition is very very unclear. There are no language issues but the authors need to completely overhaul the paper if they hope to communicate their ideas to the field. |
ICLR | Title
Federated Learning with Partial Model Personalization
Abstract
We propose and analyze a general framework of federated learning with partial model personalization. Compared with full model personalization, partial model personalization relies on domain knowledge to select a small portion of the model to personalize, thus imposing a much smaller on-device memory footprint. We propose two federated optimization algorithms for training partially personalized models, where the shared and personal parameters are updated either simultaneously or alternately on each device, but only the shared parameters are communicated and aggregated at the server. We give convergence analyses of both algorithms for minimizing smooth nonconvex functions, providing theoretical support of them for training deep learning models. Our experiments on real-world image and text datasets demonstrate that (a) partial model personalization can obtain most of the benefit of full model personalization with a small fraction of personalized parameters, and, (b) the alternating update algorithm often outperforms the simultaneous update algorithm.
1 INTRODUCTION
Federated Learning (McMahan et al., 2017) has emerged as a powerful paradigm for distributed and privacy-preserving machine learning over a large number of edge devices (see Kairouz et al., 2021, and references therein). We consider a typical setting of Federated Learning (FL) with n devices (also called clients), where each device i has a training dataset of N_i samples z_{i,1}, …, z_{i,N_i}. Let w ∈ R^d represent the parameters of a (supervised) learning model and f_i(w, z_{i,j}) be the loss of the model on the training example z_{i,j}. Then the loss function associated with device i is F_i(w) = (1/N_i) ∑_{j=1}^{N_i} f_i(w, z_{i,j}). A common objective of FL is to find model parameters that minimize the weighted average loss across all devices (without transferring the datasets):
minimize_w  ∑_{i=1}^{n} α_i F_i(w),    (1)

where the weights α_i are nonnegative and satisfy ∑_{i=1}^{n} α_i = 1. A common practice is to choose the weights as α_i = N_i/N where N = ∑_{k=1}^{n} N_k, which corresponds to minimizing the unweighted average loss across all samples from the n devices: (1/N) ∑_{i=1}^{n} ∑_{j=1}^{N_i} f_i(w, z_{i,j}).
The main motivation for minimizing the average loss over all devices is to leverage their collective statistical power for better generalization, because the amount of data on each device can be very limited. This is especially important for training modern deep learning models with large number of parameters. However, this argument assumes that the datasets from different devices are sampled from the same, or at least very similar, distributions. Given the diverse characteristics of the users and increasing trend of personalized on-device services, such an i.i.d. assumption may not hold in practice. Thus, the one-model-fits-all formulation in (1) can be less effective and even undesirable.
Several approaches have been proposed for personalized FL, including ones based on multi-task learning (Smith et al., 2017), meta learning (Fallah et al., 2020), and proximal methods (Dinh et al., 2020; Li et al., 2021). A simple formulation that captures their main idea is
minimize_{w_0, {w_i}_{i=1}^n}  ∑_{i=1}^{n} α_i ( F_i(w_i) + (λ_i/2) ‖w_i − w_0‖² ),    (2)
where wi for i = 1, . . . , n are personalized model parameters at the devices, w0 is a reference model maintained by the server, and the λi’s are regularization weights that control the extent of personalization. A major disadvantage of the formulation (2), which we call full model personalization, is that it requires twice the memory footprint of the model, wi and w0 at each device, which severely limits the size of trainable models. On the other hand, the flexibility of full model personalization can be unnecessary. Modern deep learning models are composed of many simple functional units and are typically organized into layers or a more general interconnected architecture. Personalizing the “right” components, selected with domain knowledge, may result in a substantial benefit with only a small increase in memory footprint. In addition, partial model personalization can be less susceptible to “catastrophic forgetting” (McCloskey & Cohen, 1989), where a large model finetuned on a small local dataset forgets the original (non-personalized) task, leading to a degradation of test performance.
We propose a framework for FL with partial model personalization. Specifically, we partition the model parameters into two groups: the shared parameters u ∈ R^{d_0} and the personal parameters v_i ∈ R^{d_i} for i = 1, …, n. The full model on device i is denoted as w_i = (u, v_i), and the local loss function is F_i(u, v_i) = (1/N_i) ∑_{j=1}^{N_i} f_i((u, v_i), z_{i,j}). Our goal is to solve the optimization problem

minimize_{u, {v_i}_{i=1}^n}  ∑_{i=1}^{n} α_i F_i(u, v_i).    (3)
Notice that the dimensions of vi can be different across the devices, allowing the personal components of the model to have different number of parameters or even different architecture.
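In a deep learning framework, this partition amounts to splitting a model's parameters into two lists. A hedged PyTorch sketch is given below; the substring "adapter" used to mark personal parameters is a hypothetical naming convention, not part of the paper.

import torch.nn as nn

def partition_parameters(model: nn.Module, personal_keyword: str = "adapter"):
    """Split a model's parameters into shared and personal lists by name."""
    shared, personal = [], []
    for name, param in model.named_parameters():
        (personal if personal_keyword in name else shared).append(param)
    return shared, personal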
We investigate two FL algorithms for solving problem (3): FedSim, a simultaneous update algorithm and FedAlt, an alternating update algorithm. Both algorithms follow the standard FL protocol. During each round, the server randomly selects a subset of the devices for update and broadcasts the current global version of the shared parameters to devices in the subset. Each selected device then performs one or more steps of (stochastic) gradient descent to update both the shared parameters and the personal parameters, and sends the updated shared parameters to the server for aggregation. The updated personal parameters are kept local at the device to serve as the initial states when the device is selected for another update. In FedSim, the shared and personal parameters are updated simultaneously during each local iteration. In FedAlt, the devices first update the personal parameters with the received shared parameters fixed and then update the shared parameters with the new personal parameters fixed. We provide convergence analysis and empirical evaluation of both methods.
The main contributions of this paper are summarized as follows:
• We propose a general framework of FL with partial model personalization, which relies on domain knowledge to select a small portion of the model to personalize, thus imposing a much smaller memory footprint on the devices than full model personalization. This framework unifies existing work on personalized FL and allows arbitrary partitioning of deep learning models.
• We provide convergence guarantees for the FedSim and FedAlt methods in the general (smooth) nonconvex setting. While both methods have appeared in the literature previously, they are either used without convergence analysis or with results in limited settings (assuming convexity or full participation). Our analysis provides theoretical support for the general nonconvex setting with partial participation. The analysis of FedAlt with partial participation is especially challenging, and we develop a novel technique of virtual full participation to overcome the difficulties.
• We conduct extensive experiments on image classification and text prediction tasks, exploring different model personalization strategies for each task, and comparing with several strong baselines. Our results demonstrate that partial model personalization can obtain most of the benefit of full model personalization with a small fraction of personalized parameters, and FedAlt often outperforms FedSim.
• Our experiments also reveal that personalization (full or partial) may lead to worse performance for some devices, despite improving the average. Typical forms of regularization such as weight decay and dropout do not mitigate this issue. This phenomenon has been overlooked in previous work and calls for future research to improve both performance and fairness.
Related work. Specific forms of partial model personalization have been considered in previous works. Liang et al. (2019) propose to personalize the input layers to learn a personalized representation per-device (Figure 1a), while Arivazhagan et al. (2019) and Collins et al. (2021) propose to personalize the output layer while learning a shared representation with the input layers (Figure 1b). Both FedSim and FedAlt have appeared in the literature before, but the scope of their convergence analysis is limited. Specifically, Liang et al. (2019), Arivazhagan et al. (2019) and Hanzely et al. (2021) use FedSim, while Collins et al. (2021) and Singhal et al. (2021) proposed variants of FedAlt. Notably, Hanzely et al. (2021) establish convergence of FedSim with full device participation in the convex and non-convex cases, while Collins et al. (2021) prove the linear convergence of FedAlt for a two-layer linear network where Fi(·, vi) and Fi(u, ·) are both convex for fixed vi and u respectively. We analyze both FedSim and FedAlt in the general nonconvex case with partial device participation, hence addressing a more general and practical setting.
While we primarily consider the problem (3) in the context of partial model personalization, it can serve as a general formulation that covers many other problems. Hanzely et al. (2021) demonstrate that various full model personalization formulations based on regularization (Dinh et al., 2020; Li et al., 2021), including (2), as well as interpolation (Deng et al., 2020; Mansour et al., 2020) are special cases of this problem. The rates of convergence we prove in §3 are competitive with or better than those in previous works for full model personalization methods in the non-convex case.
2 PARTIALLY PERSONALIZED MODELS
Modern deep learning models all have a multi-layer architecture. While a complete understanding of why they work so well is still out of reach, a general insight is that the lower layers (close to the input) are mostly responsible for feature extraction and the upper layers (close to the output) focus on complex pattern recognition. Depending on the application scenarios and domain knowledge, we may personalize either the input layer(s) or the output layer(s) of the model; see Figure 1.
In Figure 1c, the input layers are split horizontally into two parts, one shared and the other personal. They process different chunks of the input vector and their outputs are concatenated before feeding into the upper layers of the model.
Algorithm 1 Federated Learning with Partial Model Personalization (FedSim / FedAlt)
Input: initial states u^(0), {v_i^(0)}_{i=1}^n, number of rounds T, number of devices per round m
1: for t = 0, 1, …, T − 1 do
2:   server randomly samples m devices as S^(t) ⊂ {1, …, n}
3:   server broadcasts u^(t) to each device in S^(t)
4:   for each device i ∈ S^(t) in parallel, do
5:     (u_i^(t+1), v_i^(t+1)) = LocalSim / LocalAlt(u^(t), v_i^(t))    ▷ v_i^(t+1) = v_i^(t) if i ∉ S^(t)
6:     send u_i^(t+1) back to the server
7: server updates u^(t+1) = (1/m) ∑_{i∈S^(t)} u_i^(t+1)
As demonstrated in Bui et al. (2019), this partitioning can help protect user-specific private features (input 2 in Figure 1c), as the corresponding feature embeddings (through v_i) are personalized and kept local at the device. Similar architectures have also been proposed for context-dependent language models (e.g., Mikolov & Zweig, 2012).
A more structured partitioning is illustrated in Figure 2a, where a typical transformer layer (Vaswani et al., 2017) is augmented with two adapters. This architecture is proposed by Houlsby et al. (2019) for finetuning large language models. Similar residual adapter modules are proposed by Rebuffi et al. (2017) for image classification models in the context of multi-task learning. In the context of FL, we treat the adapter parameters as personal and the rest of the model parameters as shared.
Figure 2b shows a generalized additive model, where the outputs of two separate models, one shared and the other personalized, are fused to generate a prediction. Suppose the shared model is h(u, ·) and the personal model is h_i(v_i, ·). For regression tasks with samples z_{i,j} = (x_{i,j}, y_{i,j}), where x_{i,j} is the input and y_{i,j} ∈ R^p is the output, we let F_i(u, v_i) = (1/N_i) ∑_{j=1}^{N_i} f_i((u, v_i), z_{i,j}) with

f_i((u, v_i), z_{i,j}) = ‖y_{i,j} − h(u, x_{i,j}) − h_i(v_i, x_{i,j})‖².
In this special case, the personal model fits the residual of the shared model and vice-versa (Agarwal et al., 2020). For classification tasks, h(u, ·) and hi(vi, ·) produce probability distributions over multiple classes. We can use the cross-entropy loss between yi,j and a convex combination of the two model outputs: θh(u, xi,j) + (1− θ)hi(vi, xi,j), where θ ∈ (0, 1) is a learnable parameter. Finally, we can cast the formulation (2) of full model personalization as a special case of (3) by letting
u ← w_0,  v_i ← w_i,  F_i(u, v_i) ← F_i(v_i) + (λ_i/2)‖v_i − u‖².

Many other formulations of full personalization can be reduced to (3); see Hanzely et al. (2021).
3 ALGORITHMS AND CONVERGENCE ANALYSIS
In this section, we present and analyze two FL algorithms for solving problem (3). To simplify the presentation, we denote V = (v_1, …, v_n) ∈ R^{d_1+⋯+d_n} and focus on the case of α_i = 1/n, i.e.,

minimize_{u, V}  F(u, V) := (1/n) ∑_{i=1}^{n} F_i(u, v_i).    (4)

This is equivalent to (3) if we scale F_i by nα_i, thus does not lose generality. Moreover, we consider the more general local functions F_i(u, v_i) = E_{z∼D_i}[f_i((u, v_i), z)], where D_i is the local data distribution. The FedSim and FedAlt algorithms share a common outer-loop description given in Algorithm 1. They differ only in the local update procedures LocalSim and LocalAlt, which are given in Algorithm 2 and Algorithm 3 respectively. In the two local update procedures, ∇̃_u and ∇̃_v represent stochastic gradients with respect to u and v_i respectively. In LocalSim (Algorithm 2), the personal variables v_i and the local version of the shared parameters u_i are updated simultaneously, with their (stochastic) partial gradients evaluated at the same point. In LocalAlt (Algorithm 3), the personal parameters are updated first with the received shared parameters fixed, then the shared parameters are updated with the new personal parameters fixed. They are analogous to the classical Jacobi update and Gauss-Seidel update in numerical linear algebra (e.g., Demmel, 1997, §6.5).
In order to analyze the convergence of the two algorithms, we make the following assumptions.
Algorithm 2 LocalSim(u, v_i)
Input: number of steps τ, step sizes γ_v and γ_u
1: initialize v_{i,0} = v_i
2: initialize u_{i,0} = u
3: for k = 0, 1, …, τ − 1 do
4:   v_{i,k+1} = v_{i,k} − γ_v ∇̃_v F_i(u_{i,k}, v_{i,k})
5:   u_{i,k+1} = u_{i,k} − γ_u ∇̃_u F_i(u_{i,k}, v_{i,k})
6: update v_i^+ = v_{i,τ}
7: update u_i^+ = u_{i,τ}
8: return (u_i^+, v_i^+)

Algorithm 3 LocalAlt(u, v_i)
Input: number of steps τ_v, τ_u, step sizes γ_v, γ_u
1: initialize v_{i,0} = v_i
2: for k = 0, 1, …, τ_v − 1 do
3:   v_{i,k+1} = v_{i,k} − γ_v ∇̃_v F_i(u, v_{i,k})
4: update v_i^+ = v_{i,τ_v} and initialize u_{i,0} = u
5: for k = 0, 1, …, τ_u − 1 do
6:   u_{i,k+1} = u_{i,k} − γ_u ∇̃_u F_i(u_{i,k}, v_i^+)
7: update u_i^+ = u_{i,τ_u}
8: return (u_i^+, v_i^+)
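The two local procedures can be contrasted with a compact PyTorch-style sketch (ours, not the released code), assuming for simplicity that u and v are single tensors with requires_grad=True and that loss_fn(u, v) evaluates a stochastic estimate of F_i on a fresh minibatch; real models would loop over parameter lists.

import torch

def local_sim(u, v, loss_fn, steps, lr_u, lr_v):
    for _ in range(steps):                                   # simultaneous (Jacobi-style) updates
        gu, gv = torch.autograd.grad(loss_fn(u, v), [u, v])
        with torch.no_grad():
            u -= lr_u * gu
            v -= lr_v * gv
    return u, v

def local_alt(u, v, loss_fn, steps_v, steps_u, lr_u, lr_v):
    for _ in range(steps_v):                                 # v-steps with u frozen
        gv = torch.autograd.grad(loss_fn(u, v), v)[0]
        with torch.no_grad():
            v -= lr_v * gv
    for _ in range(steps_u):                                 # u-steps with the new v frozen
        gu = torch.autograd.grad(loss_fn(u, v), u)[0]
        with torch.no_grad():
            u -= lr_u * gu
    return u, v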
Assumption 1 (Smoothness). For each i = 1, …, n, the function F_i is continuously differentiable, and there exist constants L_u, L_v, L_{uv} and L_{vu} such that:
• ∇_u F_i(u, v_i) is L_u-Lipschitz with respect to u and L_{uv}-Lipschitz with respect to v_i;
• ∇_v F_i(u, v_i) is L_v-Lipschitz with respect to v_i and L_{vu}-Lipschitz with respect to u.
Due to the definition of F(u, V) in (4), it is easy to verify that ∇_u F(u, V) has Lipschitz constant L_u with respect to u, L_{uv}/√n with respect to V, and L_{uv}/n with respect to any v_i. We also define

χ := max{L_{uv}, L_{vu}} / √(L_u L_v),    (5)

which measures the relative cross-sensitivity of ∇_u F_i with respect to v_i and ∇_v F_i with respect to u.

Assumption 2 (Bounded Variance). The stochastic gradients in Algorithm 2 and Algorithm 3 are unbiased and have bounded variance. That is, for all u and v_i,

E[∇̃_u F_i(u, v_i)] = ∇_u F_i(u, v_i),   E[∇̃_v F_i(u, v_i)] = ∇_v F_i(u, v_i).

Furthermore, there exist constants σ_u and σ_v such that

E[‖∇̃_u F_i(u, v_i) − ∇_u F_i(u, v_i)‖²] ≤ σ_u²,   E[‖∇̃_v F_i(u, v_i) − ∇_v F_i(u, v_i)‖²] ≤ σ_v².
We can view ∇_u F_i(u, v_i), when i is randomly sampled from {1, …, n}, as a stochastic partial gradient of F(u, V) with respect to u. The following assumption imposes a variance bound.

Assumption 3 (Partial Gradient Diversity). There exist δ ≥ 0 and ρ ≥ 0 such that for all u and V,

(1/n) ∑_{i=1}^{n} ‖∇_u F_i(u, v_i) − ∇_u F(u, V)‖² ≤ δ² + ρ² ‖∇_u F(u, V)‖².

With ρ = 0, this assumption is similar to a constant variance bound on the stochastic gradient ∇_u F_i(u, v_i); with ρ > 0, it allows the variance to grow with the norm of the full gradient.
Throughout this paper, we assume F is bounded below by F* and denote ∆F_0 = F(u^(0), V^(0)) − F*. Further, we use the shorthand V^(t) = (v_1^(t), …, v_n^(t)) and

∆_u^(t) = ‖∇_u F(u^(t), V^(t))‖²,   ∆_v^(t) = (1/n) ∑_{i=1}^{n} ‖∇_v F_i(u^(t), v_i^(t))‖².

For smooth and nonconvex loss functions F_i, we obtain convergence in expectation to a stationary point of F if the expected values of these two sequences converge to zero.
We first present our main result for FedSim (Algorithm 1 with LocalSim), proved in Appendix A.2.

Theorem 1 (Convergence of FedSim). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedSim are chosen as γ_u = η/(L_u τ) and γ_v = η/(L_v τ) with

η ≤ min{ 1/(12(1+χ²)(1+ρ²)),  √(m/n)/(196(1−τ^{−1})(1+χ²)(1+ρ²)) }.

Then, ignoring absolute constants, we have

(1/T) ∑_{t=0}^{T−1} ( (1/L_u) E[∆_u^(t)] + (m/(n L_v)) E[∆_v^(t)] )
  ≤ ∆F_0/(ηT) + η(1+χ²)( (σ_u² + δ²(1−m/n))/(m L_u) + m σ_v²/(n L_v) )
    + η²(1−τ^{−1})(1+χ²)( (σ_u² + δ²)/L_u + σ_v²/L_v ).    (6)
The left-hand side of (6) is the average over time of a weighted sum of E[∆_u^(t)] and E[∆_v^(t)]. The right-hand side contains three terms of order O(1/(ηT)), O(η) and O(η²) respectively. We can minimize the right-hand side by optimizing over η. By considering special cases such as σ_u² = σ_v² = 0 and m = n, some terms on the right-hand side disappear and we can obtain improved rates. Table 1 shows the results in several different regimes along with the optimal choices of η.
Challenge in Analyzing FedAlt. We now turn to FedAlt. Note that the personal parameters are updated only for the m selected devices in S^(t) in each round t. Specifically,

v_i^(t+1) = v_i^(t) − γ_v ∑_{k=0}^{τ_v−1} ∇̃_v F_i(u^(t), v_{i,k}^(t))   if i ∈ S^(t),   and   v_i^(t+1) = v_i^(t)   if i ∉ S^(t).

Consequently, the vector V^(t+1) of personal parameters depends on the random variable S^(t). This makes it challenging to analyze the u-update steps of FedAlt because they are performed after V^(t+1) is generated (as opposed to simultaneously in FedSim). When we take expectations with respect to the sampling of S^(t) in analyzing the u-updates, V^(t+1) becomes a dependent random variable, which prevents standard proof techniques from going through (see details in Appendix A.3).
We develop a novel technique called virtual full participation to overcome this challenge. Specifically, we define a virtual vector Ṽ^(t+1), which is the result if every device were to perform local v-updates. It is independent of the sampling of S^(t), and we can derive a convergence rate for related quantities. We carefully translate this rate from the virtual Ṽ^(t+1) to the actual V^(t) to get the following result.

Theorem 2 (Convergence of FedAlt). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedAlt are chosen as γ_u = η/(L_u τ_u) and γ_v = η/(L_v τ_v), with

η ≤ min{ 1/(24(1+ρ²)),  m/(128χ²(n−m)),  √m/(χ²n) }.

Then, ignoring absolute constants, we have

(1/T) ∑_{t=0}^{T−1} ( (1/L_u) E[∆_u^(t)] + (m/(n L_v)) E[∆_v^(t)] )
  ≤ ∆F_0/(ηT) + η( (σ_u² + δ²(1−m/n))/(m L_u) + (σ_v²/L_v)·(m + χ²(n−m))/n )
    + η²( ((σ_u² + δ²)/L_u)(1−τ_u^{−1}) + (σ_v² m/(L_v n))(1−τ_v^{−1}) + χ²σ_v²/L_v ).
The proof of Theorem 2 is given in Appendix A.3. Similar to the results for FedSim, we can choose η to minimize the above upper bound to obtain the best convergence rate, as summarized in Table 1.
Comparing FedSim and FedAlt. Table 1 shows that both FedSim and FedAlt exhibit the standard O(1/√T) rate in the general case. Comparing the constants in their rates, we identify two regimes in terms of problem parameters. The regime where FedAlt dominates FedSim is characterized by (σ_v²/L_v)(1 − 2m/n) < (σ_u² + δ²(1 − m/n))/(m L_u). A practically relevant scenario where this is true is σ_v² ≈ 0 and σ_u² ≈ 0, from using large or full batches on a small number of samples per device. Here, the rate of FedAlt is better than that of FedSim by a factor of (1 + χ²), indicating that the rate of FedAlt is less affected by the coupling between the personal and shared parameters. Our experiments in §4 corroborate the practical relevance of this regime.
The rates from Table 1 also apply for full personalization schemes without convergence guarantees in the nonconvex case (Agarwal et al., 2020; Mansour et al., 2020; Li et al., 2021). Our rates are better than those of (Dinh et al., 2020) for their pFedMe objective.
4 EXPERIMENTS
In this section, we experimentally compare different model personalization schemes using FedAlt and FedSim as well as no model personalization. Details about the experiments, hyperparameters and additional results are provided in the appendices. The code to reproduce the experimental results will be publicly released.
Datasets, Tasks and Models. We consider three learning tasks; they are summarized in Table 2.
(a) Next-Word Prediction: We use the StackOverflow dataset, where each device corresponds to the questions and answers of one user on stackoverflow.com. This task is representative of mobile keyboard predictions. We use a 4-layer transformer model (Vaswani et al., 2017).
(b) Visual Landmark Recognition: We use the GLDv2 dataset (Weyand et al., 2020; Hsu et al., 2020), a large-scale dataset with real images of global landmarks. Each device corresponds to a Wikipedia contributor who uploaded images. This task resembles a scenario where smartphone users capture images of landmarks while traveling. We use a ResNet-18 (He et al., 2016) model with group norm instead of batch norm (Hsieh et al., 2020) and images are reshaped to 224× 224.
(c) Character Recognition: We use the EMNIST dataset (Cohen et al., 2017), where the input is a 28 × 28 grayscale image of a handwritten character and the output is its label (0-9, a-z, A-Z). Each device corresponds to a writer of the character. We use a ResNet-18 model, with input and output layers modified to accommodate the smaller image size and number of classes.
All models are trained with the cross entropy loss and evaluated with top-1 accuracy of classification.
Model Partitioning for Partial Personalization. We consider three partitioning schemes.
(a) Input layer personalization: This architecture learns a personalized representation per-device by personalizing the input layer, while the rest of the model is shared (Figure 1a). For the transformer, we use the first transformer layer in place of the embedding layer.
(b) Output layer personalization: This architecture learns a shared representation but personalizes the prediction layer (Figure 1b). For the transformer model, we use the last transformer layer instead of the output layer.
(c) Adapter personalization: In this architecture, each device adds lightweight personalized adapter modules between specific layers of a shared model (Figure 2a). We use the transformer adapters of Houlsby et al. (2019) and for ResNet-18, the residual adapters of Rebuffi et al. (2017).
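As a concrete illustration of the adapter partitioning, a minimal bottleneck adapter in the spirit of Houlsby et al. (2019) can be written as a small residual module; the dimensions below are illustrative, and only these parameters would be treated as personal.

import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck residual adapter; only these parameters are personalized."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))   # residual connection around the bottleneck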
Algorithms and Experimental Pipeline. For full model personalization, we consider three baselines: (i) Finetune, where each device finetunes (using SGD locally) its personal full model starting from a learned common model, (ii) Ditto (Li et al., 2021), which is finetuning with ℓ2 regularization, and (iii) pFedMe (Dinh et al., 2020), which minimizes the objective (2). All methods, including FedSim, FedAlt and the baselines, are initialized with a global model trained with FedAvg.
4.1 EXPERIMENTAL RESULTS
Partial personalization nearly matches full personalization and can sometimes outperform it. Table 3 shows the average test accuracy across all devices of different FL algorithms. We see that on the StackOverflow dataset, output layer personalization (25.05%) makes up nearly 90% of the gap between the non-personalized baseline (23.82%) and full personalization (25.21%). On EMNIST, adapter personalization exactly matches full personalization. Most surprisingly, on GLDv2, adapter personalization outperforms full personalization by 3.5pp (percentage points).
This success of adapter personalization can be explained partly by the nature of GLDv2. On average, the training data on each device contains 25 classes out of a possible 2028 while the testing data contains 10 classes not seen in its own training data. These unseen classes account for nearly 23% of all testing data. Personalizing the full model is susceptible to “forgetting” the original task (Kirkpatrick et al., 2017), making it harder to get these unseen classes right. Such catastrophic forgetting is worse when finetuning on a very small local dataset, as we often have in FL. On the other hand, personalizing the adapters does not suffer as much from this issue (Rebuffi et al., 2017).
Partial personalization only requires a fraction of the parameters to be personalized. Figure 3 shows that the number of personalized parameters required to compete with full model personalization is rather small. On StackOverflow, personalizing 1.2% of the parameters with adapters captures 72% of the accuracy boost from personalizing all 5.7M parameters; this can be improved to nearly 90% by personalizing 14% of the parameters (output layer). Likewise, we match full personalization on EMNIST and exceed it on GLDv2 with adapters, personalizing 11.5-12.5% of parameters.
The best personalized architecture is model and task dependent. Table 3 shows that personalizing the final transformer layer (denoted as “Output Layer”) achieves the best performance for StackOverflow, while the residual adapter achieves the best performance for GLDv2 and EMNIST. This shows that the approach of personalizing a fixed model part, as in several past works, is suboptimal. Our framework allows for the use of domain knowledge to determine customized personalization.
Finetuning is competitive with other full personalization methods. Full finetuning matches the performance of pFedMe and Ditto on StackOverflow and EMNIST. On GLDv2, however, pFedMe outperforms finetuning by 0.07pp, but is still 3.5pp worse than adapter personalization.
FedAlt outperforms FedSim for partial personalization. If the optimization problem (3) were convex, we would expect similar performance from FedAlt and FedSim. However, with nonconvex optimization problems such as the ones considered here, the choice of the optimization algorithm often affects the quality of the solution found. We see from Table 4 that FedAlt is almost always better than FedSim by a small margin, e.g., 0.08pp for StackOverflow/Adapter and 0.3pp for GLDv2/Input Layer. FedSim in turn yields a higher accuracy than simply finetuning the personalized part of the model, by a large margin, e.g., 0.12pp for StackOverflow/Output Layer and 2.55pp for GLDv2/Adapter.
4.2 EFFECTS OF PERSONALIZATION ON PER-DEVICE GENERALIZATION
Personalization hurts the test accuracy on some devices. Figure 4 shows the change in training and test accuracy of each device, compared with a non-personalized model trained by FedAvg. We see that personalization leads to an improvement in training accuracy across all devices, but a reduction in test accuracy on some of the devices over the non-personalized baseline. In particular, devices whose testing performance is hurt by personalization are mostly on the left side of the plot, meaning that they have relatively small number of training samples. On the other hand, many devices with the most improved test accuracy also appear on the left side, signaling the benefit of personalization. Therefore, there is a large variation of results for devices with few samples.
Additional experiments (see Appendix C) show that using ℓ2 regularization, as in (2), or weight decay does not mitigate this issue. In particular, increasing regularization strength (less personalization) can reduce the spread of per-device accuracy, but only leads to a worse average accuracy that is close to using a common model. Other simple strategies such as dropout also do not fix this issue.
An ideal personalized method would boost performance on most of the devices without causing a reduction in (test) accuracy on any device. Realizing this goal calls for a sound statistical analysis for personalized FL and may require sophisticated methods for local performance diagnosis and more structured regularization. These are very promising directions for future research.
5 DISCUSSION
In addition to a much smaller memory footprint than full model personalization and being less susceptible to catastrophic forgetting, partial model personalization has other advantages. For example, it reduces the amount of communication between the server and the devices because only the shared parameters are transmitted. While the communication saving may not be significant (especially when the personal parameters are only a small fraction of the full model), communicating only the shared parameters may have significant implications for privacy. Intuitively, it can be harder to infer private information from partial model information. This is especially the case if the more sensitive features of the data are processed through personal components of the model that are kept local at the devices. For example, we speculate that less noise needs to be added to the communicated parameters in order to satisfy differential privacy requirements (Abadi et al., 2016). This is a very promising direction for future research.
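As a concrete illustration of this split (our own sketch, not part of the paper's implementation), the snippet below partitions a PyTorch model's parameters into shared and personal groups by name; the `is_personal` predicate and the example model are hypothetical stand-ins for whatever partitioning a practitioner chooses.

```python
import torch
import torch.nn as nn

def split_parameters(model: nn.Module, is_personal):
    """Partition named parameters into shared and personal groups.

    `is_personal` is any predicate on parameter names, e.g. marking
    adapter or output-layer weights as personal.
    """
    shared, personal = {}, {}
    for name, param in model.named_parameters():
        (personal if is_personal(name) else shared)[name] = param
    return shared, personal

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
# Example convention: treat the output layer (parameters named "2.*") as personal.
shared, personal = split_parameters(model, lambda n: n.startswith("2."))

# Only the shared parameters would ever be serialized and sent to the server.
payload = {name: p.detach().clone() for name, p in shared.items()}
print(len(shared), "shared tensors,", len(personal), "personal tensors")
```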
REPRODUCIBILITY STATEMENT
For theoretical results, we state and discuss the assumptions in Appendix A. The full proofs of all theoretical statements are also given there.
For our numerical results, we take multiple steps for reproducibility. First, we run each numerical experiment for five random seeds, and report both the mean and standard deviation over these runs. Second, we only use publicly available datasets and report the preprocessing at length in Appendix B. Third, we give the full list of hyperparameters used in our experiments in Table 8 in Appendix B. Finally, we will publicly release the code to reproduce our experimental results.
ETHICS STATEMENT
The proposed framework for partial model personalization is immediately applicable to a range of practical federated learning applications on edge devices such as text prediction and speech recognition. One of the key considerations of federated learning is privacy. Partial model personalization maintains all the privacy benefits of current non-personalized federated learning systems. Indeed, our approach is compatible with techniques to enhance privacy such as differential privacy and secure aggregation. We also speculate that partial personalization has the potential for further reducing the privacy footprint — an investigation of this subject is beyond the scope of this work and is an interesting direction for future work.
On the flip side, we also observed in experiments that personalization (both full or partial) leads to a reduction in test performance on some of the devices. This has important implications for fairness, and calls for further research into the statistical aspects of personalization, performance diagnostics as well as more nuanced definitions of fairness in federated learning.
Appendix
Table of Contents
A Convergence Analysis: Full Proofs
   A.1 Review of Setup and Assumptions
   A.2 Convergence Analysis of FedSim
   A.3 Convergence Analysis of FedAlt
   A.4 Technical Lemmas

B Experiments: Detailed Setup and Hyperparameters
   B.1 Datasets, Tasks and Models
   B.2 Experimental Pipeline and Baselines
   B.3 Hyperparameters and Evaluation Details

C Experiments: Additional Results
   C.1 Ablation: Final Finetuning for FedAlt and FedSim
   C.2 Effect of Personalization on Per-Device Generalization
   C.3 Partial Personalization for Stateless Devices
A CONVERGENCE ANALYSIS: FULL PROOFS
We give the full convergence proofs here. The outline of this section is:
• §A.1: Review of setup and assumptions;
• §A.2: Convergence analysis of FedSim and the full proof of Theorem 1;
• §A.3: Convergence analysis of FedAlt and the full proof of Theorem 2;
• §A.4: Technical lemmas used in the analysis.
A.1 REVIEW OF SETUP AND ASSUMPTIONS
We consider a federated learning system with n devices. Let the loss function on device i be F_i(u, v_i), where u ∈ R^{d_0} denotes the shared parameters across all devices and v_i ∈ R^{d_i} denotes the personal parameters at device i. We aim to minimize the function
\[
F(u, V) := \frac{1}{n} \sum_{i=1}^{n} F_i(u, v_i), \tag{7}
\]
where V = (v_1, · · · , v_n) is a concatenation of all the personalized parameters. This is a special case of (3) with equal per-device weights, i.e., α_i = 1/n. Recall that we assume that F is bounded from below by F*.
For convenience, we reiterate Assumptions 1, 2 and 3 from the main paper as Assumptions 1′, 2′ and 3′ below respectively, with some additional comments and discussion.

Assumption 1′ (Smoothness). For each device i = 1, . . . , n, the objective F_i is smooth, i.e., it is continuously differentiable and,
(a) u ↦ ∇_u F_i(u, v_i) is L_u-Lipschitz for all v_i,
(b) v_i ↦ ∇_v F_i(u, v_i) is L_v-Lipschitz for all u,
(c) v_i ↦ ∇_u F_i(u, v_i) is L_{uv}-Lipschitz for all u, and,
(d) u ↦ ∇_v F_i(u, v_i) is L_{vu}-Lipschitz for all v_i.
Further, we assume for some χ > 0 that
\[
\max\{L_{uv}, L_{vu}\} \le \chi \sqrt{L_u L_v} .
\]
The smoothness assumption is a standard one. We can assume without loss of generality that the cross-Lipschitz coefficients L_{uv}, L_{vu} are equal. Indeed, if F_i is twice continuously differentiable, we can show that L_{uv}, L_{vu} are both equal to the operator norm ‖∇²_{uv} F_i(u, v_i)‖_op of the mixed second derivative matrix. Further, χ denotes the extent to which u impacts the gradient of v_i and vice-versa.
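To make these constants concrete, the following toy computation (our illustration only, not from the paper) evaluates L_u, L_v, L_uv and χ for a quadratic loss F(u, v) = ½uᵀAu + ½vᵀBv + uᵀCv, whose mixed second derivative is exactly the matrix C.

```python
import numpy as np

rng = np.random.default_rng(0)
d_u, d_v = 5, 3

# Toy quadratic loss F(u, v) = 0.5*u'Au + 0.5*v'Bv + u'Cv.
A = rng.standard_normal((d_u, d_u)); A = A @ A.T          # Hessian block in u
B = rng.standard_normal((d_v, d_v)); B = B @ B.T          # Hessian block in v
C = 0.3 * rng.standard_normal((d_u, d_v))                 # mixed Hessian block

L_u = np.linalg.norm(A, 2)       # Lipschitz constant of grad_u w.r.t. u (largest singular value)
L_v = np.linalg.norm(B, 2)       # Lipschitz constant of grad_v w.r.t. v
L_uv = np.linalg.norm(C, 2)      # operator norm of the mixed second derivative

chi = L_uv / np.sqrt(L_u * L_v)  # relative cross-sensitivity
print(f"L_u={L_u:.2f}, L_v={L_v:.2f}, L_uv={L_uv:.2f}, chi={chi:.3f}")
```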
Our next assumption is about the variance of the stochastic gradients, and is standard in the literature. Compared to the main paper, we adopt a more precise notation for stochastic gradients.

Assumption 2′ (Bounded Variance). Let D_i denote a probability distribution over the data space Z on device i. There exist functions G_{i,u} and G_{i,v} which are unbiased estimates of ∇_u F_i and ∇_v F_i respectively. That is, for all u, v_i:
\[
\mathbb{E}_{z \sim D_i}\big[G_{i,u}(u, v_i, z)\big] = \nabla_u F_i(u, v_i), \quad\text{and}\quad \mathbb{E}_{z \sim D_i}\big[G_{i,v}(u, v_i, z)\big] = \nabla_v F_i(u, v_i) .
\]
Furthermore, the variance of these estimators is at most σ_u^2 and σ_v^2 respectively. That is,
\[
\mathbb{E}_{z \sim D_i}\big\|G_{i,u}(u, v_i, z) - \nabla_u F_i(u, v_i)\big\|^2 \le \sigma_u^2 , \qquad
\mathbb{E}_{z \sim D_i}\big\|G_{i,v}(u, v_i, z) - \nabla_v F_i(u, v_i)\big\|^2 \le \sigma_v^2 .
\]
In practice, one usually has G_{i,u}(u, v_i, z) = ∇_u f_i((u, v_i), z), which is the gradient of the loss on datapoint z ∼ D_i under the model (u, v_i), and similarly for G_{i,v}. Finally, we make a gradient diversity assumption.

Assumption 3′ (Partial Gradient Diversity). There exist δ ≥ 0 and ρ ≥ 0 such that for all u and V,
\[
\frac{1}{n} \sum_{i=1}^{n} \big\|\nabla_u F_i(u, v_i) - \nabla_u F(u, V)\big\|^2 \le \delta^2 + \rho^2 \big\|\nabla_u F(u, V)\big\|^2 . \tag{8}
\]
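The constants δ and ρ are not directly observable, but the left-hand side of (8) can be probed numerically at a given point (u, V). The sketch below (our illustration; the least-squares devices are made up) compares per-device gradients of the shared parameters with their average, which is exactly the quantity Assumption 3′ controls.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 8                       # devices, shared-parameter dimension

# Hypothetical per-device least-squares losses F_i(u) = 0.5*||X_i u - y_i||^2 / N_i.
devices = [(rng.standard_normal((30, d)), rng.standard_normal(30)) for _ in range(n)]

def grad_u(u, X, y):
    return X.T @ (X @ u - y) / len(y)

u = rng.standard_normal(d)
grads = np.stack([grad_u(u, X, y) for X, y in devices])
mean_grad = grads.mean(axis=0)

diversity = np.mean(np.sum((grads - mean_grad) ** 2, axis=1))   # LHS of (8) at this point
full_norm_sq = np.sum(mean_grad ** 2)
print(f"diversity={diversity:.3f}, ||grad F||^2={full_norm_sq:.3f}")
# Any (delta^2, rho^2) with diversity <= delta^2 + rho^2 * full_norm_sq is consistent here.
```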
Algorithm 4 FedSim: Simultaneous update of shared and personal parameters

Input: initial iterates u^(0), V^(0); number of communication rounds T; number of devices per round m; number of local updates τ; local step sizes γ_u, γ_v.
 1: for t = 0, 1, · · · , T − 1 do
 2:   Sample m devices from [n] without replacement into S^(t)
 3:   for each selected device i ∈ S^(t) in parallel do
 4:     Initialize v^(t)_{i,0} = v^(t)_i and u^(t)_{i,0} = u^(t)
 5:     for k = 0, · · · , τ − 1 do                             ▷ Update all parameters jointly
 6:       Sample data z^(t)_{i,k} ∼ D_i
 7:       v^(t)_{i,k+1} = v^(t)_{i,k} − γ_v G_{i,v}(u^(t)_{i,k}, v^(t)_{i,k}, z^(t)_{i,k})
 8:       u^(t)_{i,k+1} = u^(t)_{i,k} − γ_u G_{i,u}(u^(t)_{i,k}, v^(t)_{i,k}, z^(t)_{i,k})
 9:     Update v^(t+1)_i = v^(t)_{i,τ} and u^(t+1)_i = u^(t)_{i,τ}
10:   Update u^(t+1) = (∑_{i∈S^(t)} α_i u^(t+1)_i) / (∑_{i∈S^(t)} α_i) at the server with secure aggregation
11: return u^(T), v^(T)_1, · · · , v^(T)_n
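For readers who prefer executable code to pseudocode, here is a minimal NumPy sketch of FedSim under a made-up quadratic per-device loss; the data, gradient oracle and step sizes are our illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, tau, d_u, d_v = 10, 4, 5, 6, 2
gamma_u, gamma_v = 0.05, 0.05

# Hypothetical device data: F_i(u, v_i) = 0.5 * E_z ||A_z u + B_z v_i - y_z||^2.
data = [[(rng.standard_normal((3, d_u)), rng.standard_normal((3, d_v)),
          rng.standard_normal(3)) for _ in range(8)] for _ in range(n)]

def stoch_grads(u, v, batch):
    A, B, y = batch
    r = A @ u + B @ v - y                 # residual on the sampled minibatch
    return A.T @ r, B.T @ r               # (G_{i,u}, G_{i,v})

u = np.zeros(d_u)
V = [np.zeros(d_v) for _ in range(n)]

for t in range(20):                       # communication rounds
    S = rng.choice(n, size=m, replace=False)
    new_us = []
    for i in S:
        ui, vi = u.copy(), V[i].copy()
        for k in range(tau):              # simultaneous local updates of u and v_i
            gu, gv = stoch_grads(ui, vi, data[i][rng.integers(8)])
            ui, vi = ui - gamma_u * gu, vi - gamma_v * gv
        V[i] = vi                         # personal parameters stay on the device
        new_us.append(ui)
    u = np.mean(new_us, axis=0)           # server aggregation (alpha_i = 1/n case)
print("final ||u|| =", np.linalg.norm(u))
```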
This assumption is analogous to the bounded variance assumption (Assumption 2′), but with the stochasticity coming from the sampling of devices. It characterizes how much local steps on one device help or hurt convergence globally. Similar gradient diversity assumptions are often used for analyzing non-personalized federated learning (Koloskova et al., 2020; Karimireddy et al., 2020). Finally, it suffices for the partial gradient diversity assumption to only hold at the iterates (u(t), V (t)) generated by either FedSim or FedAlt.
A.2 CONVERGENCE ANALYSIS OF FEDSIM
We give the full form of FedSim in Algorithm 4 for the general case of unequal α_i's but focus on α_i = 1/n for the analysis. In order to simplify presentation, we denote V^(t) = (v^(t)_1, . . . , v^(t)_n) and define the following shorthand for gradient terms
\[
\Delta_u^{(t)} = \big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 , \quad\text{and}\quad \Delta_v^{(t)} = \frac{1}{n}\sum_{i=1}^{n} \big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2 .
\]
For convenience, we restate Theorem 1 from the main paper.

Theorem 1 (Convergence of FedSim). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedSim are chosen as γ_u = η/(L_u τ) and γ_v = η/(L_v τ) with
\[
\eta \le \min\left\{ \frac{1}{12(1+\chi^2)(1+\rho^2)} ,\; \frac{\sqrt{m/n}}{196(1-\tau^{-1})(1+\chi^2)(1+\rho^2)} \right\} .
\]
Then, ignoring absolute constants, we have
\[
\frac{1}{T}\sum_{t=0}^{T-1}\left( \frac{1}{L_u}\mathbb{E}\big[\Delta_u^{(t)}\big] + \frac{m}{n L_v}\mathbb{E}\big[\Delta_v^{(t)}\big] \right)
\le \frac{\Delta F_0}{\eta T}
+ \eta(1+\chi^2)\left( \frac{\sigma_u^2 + \delta^2(1-\frac{m}{n})}{m L_u} + \frac{m\sigma_v^2}{n L_v} \right)
+ \eta^2(1-\tau^{-1})(1+\chi^2)\left( \frac{\sigma_u^2+\delta^2}{L_u} + \frac{\sigma_v^2}{L_v} \right) . \tag{6}
\]
Before proving the theorem, we give the following corollary with optimized learning rates.

Corollary 3. Consider the setting of Theorem 1 and let ε > 0 be given. Suppose we set the learning rates γ_u = η/(τ L_u) and γ_v = η/(τ L_v), where (ignoring absolute constants),
\[
\eta = \frac{\varepsilon}{\big( \frac{\delta^2}{L_u}\big(1-\frac{m}{n}\big) + \frac{\sigma_u^2}{L_u} + \frac{\sigma_v^2 m}{L_v n} \big)(1+\chi^2)}
\;\wedge\; \left( \frac{\varepsilon}{\big( \frac{\delta^2}{L_u} \vee \frac{\sigma_u^2}{L_u} \vee \frac{\sigma_v^2}{L_v} \big)(1-\tau^{-1})(1+\chi^2)} \right)^{1/2}
\;\wedge\; \frac{1}{(1+\chi^2)(1+\rho^2)}
\;\wedge\; \left( \frac{m/n}{(1-\tau^{-1})(1+\rho^2)(1+\chi^2)} \right)^{1/2} .
\]
We have,
\[
\frac{1}{T}\sum_{t=0}^{T-1}\left( \frac{1}{L_u}\mathbb{E}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 + \frac{m}{L_v n^2}\sum_{i=1}^{n} \mathbb{E}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2 \right) \le \varepsilon
\]
after T communication rounds, where, ignoring absolute constants,
\[
T \le \frac{\Delta F_0 (1+\chi^2)}{\varepsilon^2}\left( \frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{m L_u} + \frac{m\sigma_v^2}{n L_v} \right)
+ \frac{\Delta F_0 \sqrt{(1-\tau^{-1})(1+\chi^2)}}{\varepsilon^{3/2}}\left( \frac{\sigma_u + \delta}{\sqrt{L_u}} + \frac{\sigma_v}{\sqrt{L_v}} \right)
+ \frac{\Delta F_0}{\varepsilon}(1+\chi^2)(1+\rho^2)\left( 1 + \sqrt{\frac{(1-\tau^{-1})n}{m}} \right) .
\]
Proof. The choice of the constant η ensures that each of the constant terms in the bound of Theorem 1 is O(ε). The final rate is now O ( ∆F0/(ηε) ) ; plugging in the value of η completes the proof.
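As a purely illustrative transcription of the round bound reconstructed above, the helper below evaluates its three terms for made-up problem constants; the function name and all constants are ours, not the paper's.

```python
import math

def fedsim_rounds(dF0, eps, m, n, tau, L_u, L_v, sigma_u, sigma_v, delta, chi, rho):
    """Evaluate the Corollary 3 round bound (up to absolute constants)."""
    term1 = dF0 * (1 + chi**2) / eps**2 * (
        (sigma_u**2 + delta**2 * (1 - m / n)) / (m * L_u) + m * sigma_v**2 / (n * L_v))
    term2 = dF0 * math.sqrt((1 - 1 / tau) * (1 + chi**2)) / eps**1.5 * (
        (sigma_u + delta) / math.sqrt(L_u) + sigma_v / math.sqrt(L_v))
    term3 = dF0 / eps * (1 + chi**2) * (1 + rho**2) * (
        1 + math.sqrt((1 - 1 / tau) * n / m))
    return term1 + term2 + term3

# Illustrative (made-up) constants:
print(fedsim_rounds(dF0=1.0, eps=1e-3, m=10, n=1000, tau=5,
                    L_u=1.0, L_v=1.0, sigma_u=0.5, sigma_v=0.5,
                    delta=1.0, chi=0.5, rho=0.5))
```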
We now prove Theorem 1.
Proof of Theorem 1. The proof mainly applies the smoothness upper bound to write out a descent condition with suitably small noise terms. We start with some notation.
Notation. Let F^(t) denote the σ-algebra generated by (u^(t), V^(t)) and denote E_t[·] = E[·|F^(t)]. For all devices, including those not selected in each round, we define virtual sequences ũ^(t)_{i,k}, ṽ^(t)_{i,k} as the SGD updates in Algorithm 4 for all devices regardless of whether they are selected. For the selected devices i ∈ S^(t), we have (u^(t)_{i,k}, v^(t)_{i,k}) = (ũ^(t)_{i,k}, ṽ^(t)_{i,k}). Note now that the random variables ũ^(t)_{i,k}, ṽ^(t)_{i,k} are independent of the device selection S^(t). The updates for the devices i ∈ S^(t) are given by
\[
v_i^{(t+1)} = v_i^{(t)} - \gamma_v \sum_{k=0}^{\tau-1} G_{i,v}\big( \tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}, z_{i,k}^{(t)} \big) ,
\]
and the server update is given by
\[
u^{(t+1)} = u^{(t)} - \frac{\gamma_u}{m} \sum_{i \in S^{(t)}} \sum_{k=0}^{\tau-1} G_{i,u}\big( \tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}, z_{i,k}^{(t)} \big) . \tag{9}
\]
Proof Outline. We use the smoothness of F_i, more precisely Lemma 16, to obtain
\[
F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big)
\le \underbrace{\big\langle \nabla_u F(u^{(t)}, V^{(t)}),\, u^{(t+1)} - u^{(t)} \big\rangle}_{T_{1,u}}
+ \underbrace{\frac{1}{n}\sum_{i=1}^{n} \big\langle \nabla_v F_i(u^{(t)}, v_i^{(t)}),\, v_i^{(t+1)} - v_i^{(t)} \big\rangle}_{T_{1,v}}
+ \underbrace{\frac{L_u(1+\chi^2)}{2}\big\| u^{(t+1)} - u^{(t)} \big\|^2}_{T_{2,u}}
+ \underbrace{\frac{1}{n}\sum_{i=1}^{n} \frac{L_v(1+\chi^2)}{2}\big\| v_i^{(t+1)} - v_i^{(t)} \big\|^2}_{T_{2,v}} . \tag{10}
\]
Our goal will be to bound each of these terms to get a descent condition from each step of the form
\[
\mathbb{E}_t\Big[ F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big) \Big]
\le -\frac{\gamma_u \tau}{8}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 - \frac{\gamma_v \tau m}{8 n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2 + O(\gamma_u^2 + \gamma_v^2) ,
\]
where the O(γ_u^2 + γ_v^2) terms are controlled using the bounded variance and gradient diversity assumptions. Telescoping this descent condition gives the final bound.
Main Proof. Towards this end, we prove non-asymptotic bounds on each of the terms T_{1,v}, T_{1,u}, T_{2,v} and T_{2,u}, in Claims 4 to 7 respectively. We then invoke them to get the bound
\[
\begin{aligned}
\mathbb{E}_t\Big[ F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big) \Big]
&\le -\frac{\gamma_u \tau}{4}\Delta_u^{(t)} - \frac{\gamma_v \tau m}{4n}\Delta_v^{(t)} \\
&\quad + \frac{L_u(1+\chi^2)\gamma_u^2 \tau^2}{2}\left( \sigma_u^2 + \frac{12\delta^2}{m}(1-m/n) \right) + \frac{L_v(1+\chi^2)\gamma_v^2 \tau^2 \sigma_v^2 m}{2n} \\
&\quad + \frac{2}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1} \mathbb{E}_t\big\| \tilde u_{i,k}^{(t)} - u^{(t)} \big\|^2 \left( L_u^2 \gamma_u + \frac{m}{n}\chi^2 L_u L_v \gamma_v \right) \\
&\quad + \frac{2}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1} \mathbb{E}_t\big\| \tilde v_{i,k}^{(t)} - v_i^{(t)} \big\|^2 \left( \frac{m}{n} L_v^2 \gamma_v + \chi^2 L_u L_v \gamma_u \right) .
\end{aligned} \tag{11}
\]
Note that we simplified some constants appearing on the gradient norm terms using
\[
\gamma_u \le \big( 12 L_u (1+\chi^2)(1+\rho^2)\tau \big)^{-1} \quad\text{and}\quad \gamma_v \le \big( 6 L_v (1+\chi^2)\tau \big)^{-1} .
\]
Our next step is to bound the last two lines of (11) with Lemma 8 and invoke the gradient diversity assumption (Assumption 3′) as
\[
\frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_u F_i(u^{(t)}, v_i^{(t)})\big\|^2 \le \delta^2 + (1+\rho^2)\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 .
\]
This gives, after plugging in the learning rates and further simplifying the constants,
\[
\mathbb{E}_t\Big[ F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big) \Big]
\le -\frac{\eta \Delta_u^{(t)}}{8 L_u} - \frac{\eta m \Delta_v^{(t)}}{8 L_v n}
+ \eta^2 (1+\chi^2)\left( \frac{\sigma_u^2}{2 L_u} + \frac{m \sigma_v^2}{n L_v} + \frac{6\delta^2}{L_u m}\Big(1-\frac{m}{n}\Big) \right)
+ \eta^3 (1+\chi^2)(1-\tau^{-1})\left( \frac{24\delta^2}{L_u} + \frac{4\sigma_u^2}{L_u} + \frac{4\sigma_v^2}{L_v} \right) .
\]
Taking full expectation, telescoping the series over t = 0, · · · , T − 1 and rearranging the resulting terms give the desired bound in Theorem 1.
Claim 4 (Bounding T_{1,v}). Let T_{1,v} be defined as in (10). We have,
\[
\mathbb{E}_t[T_{1,v}] \le -\frac{\gamma_v \tau m}{2 n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big)\big\|^2
+ \frac{\gamma_v m}{n^2}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ \chi^2 L_u L_v \big\| \tilde u_{i,k}^{(t)} - u^{(t)} \big\|^2 + L_v^2 \big\| \tilde v_{i,k}^{(t)} - v_i^{(t)} \big\|^2 \Big] .
\]
Proof. Define T_{1,v,i} to be the contribution of the i-th term to T_{1,v}. For i ∉ S^(t), we have that T_{1,v,i} = 0, since v_i^(t+1) = v_i^(t). On the other hand, for i ∈ S^(t), we use the unbiasedness of the gradient estimator
G_{i,v} and the independence of z_{i,k}^{(t)} from u_{i,k}^{(t)}, v_{i,k}^{(t)} to get
\[
\begin{aligned}
\mathbb{E}_t[T_{1,v,i}]
&= -\gamma_v \sum_{k=0}^{\tau-1} \mathbb{E}_t\Big\langle \nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big),\, \nabla_v F_i\big(u_{i,k}^{(t)}, v_{i,k}^{(t)}\big) \Big\rangle
= -\gamma_v \sum_{k=0}^{\tau-1} \mathbb{E}_t\Big\langle \nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big),\, \nabla_v F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) \Big\rangle \\
&= -\gamma_v \tau \big\|\nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big)\big\|^2
- \gamma_v \sum_{k=0}^{\tau-1} \mathbb{E}_t\Big\langle \nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big),\, \nabla_v F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) - \nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big) \Big\rangle \\
&\le -\frac{\gamma_v \tau}{2}\big\|\nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big)\big\|^2
+ \frac{\gamma_v}{2}\sum_{k=0}^{\tau-1} \mathbb{E}_t\big\| \nabla_v F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) - \nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big) \big\|^2 .
\end{aligned} \tag{12}
\]
For the second term, we add and subtract ∇_v F_i(u^(t), ṽ^(t)_{i,k}) and use smoothness to get
\[
\big\| \nabla_v F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) - \nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big) \big\|^2
\le 2\chi^2 L_u L_v \big\| \tilde u_{i,k}^{(t)} - u^{(t)} \big\|^2 + 2 L_v^2 \big\| \tilde v_{i,k}^{(t)} - v_i^{(t)} \big\|^2 . \tag{13}
\]
Since the right hand side of this bound is independent of S^(t), we get,
\[
\mathbb{E}_t[T_{1,v}] = \frac{m}{n}\,\mathbb{E}_t\Bigg[ \frac{1}{m}\sum_{i \in S^{(t)}} T_{1,v,i} \Bigg] = \frac{m}{n^2}\sum_{i=1}^{n} \mathbb{E}_t[T_{1,v,i}] ,
\]
and plugging in (12) and (13) completes the proof.
Claim 5 (Bounding T_{1,u}). Consider T_{1,u} defined in (10). We have the bound,
\[
\mathbb{E}_t[T_{1,u}] \le -\frac{\gamma_u \tau}{2}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2
+ \frac{\gamma_u}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ L_u^2 \big\| \tilde u_{i,k}^{(t)} - u^{(t)} \big\|^2 + \chi^2 L_u L_v \big\| \tilde v_{i,k}^{(t)} - v_i^{(t)} \big\|^2 \Big] .
\]
Proof. Due to the independence of S^(t) from ũ^(t)_{i,k}, ṽ^(t)_{i,k}, we have,
\[
\mathbb{E}_t\big[ u^{(t+1)} - u^{(t)} \big]
= -\gamma_u \mathbb{E}_t\Bigg[ \frac{1}{m}\sum_{i \in S^{(t)}}\sum_{k=0}^{\tau-1} \nabla_u F_i\big(u_{i,k}^{(t)}, v_{i,k}^{(t)}\big) \Bigg]
= -\gamma_u \mathbb{E}_t\Bigg[ \frac{1}{m}\sum_{i \in S^{(t)}}\sum_{k=0}^{\tau-1} \nabla_u F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) \Bigg]
= -\frac{\gamma_u}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1} \mathbb{E}_t\Big[ \nabla_u F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) \Big] ,
\]
where the last equality took an expectation over S^(t), which is independent of ũ^(t)_{i,k}, ṽ^(t)_{i,k}. Now, using the same sequence of arguments as Claim 4, we have,
\[
\begin{aligned}
\mathbb{E}_t\Big\langle \nabla_u F\big(u^{(t)}, V^{(t)}\big),\, u^{(t+1)} - u^{(t)} \Big\rangle
&= -\gamma_u \sum_{k=0}^{\tau-1} \mathbb{E}_t\Bigg\langle \nabla_u F\big(u^{(t)}, V^{(t)}\big),\, \frac{1}{n}\sum_{i=1}^{n}\nabla_u F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) \Bigg\rangle \\
&\le -\frac{\gamma_u \tau}{2}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2
+ \frac{\gamma_u}{2}\sum_{k=0}^{\tau-1} \mathbb{E}_t\Bigg\| \frac{1}{n}\sum_{i=1}^{n}\nabla_u F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) - \nabla_u F\big(u^{(t)}, V^{(t)}\big) \Bigg\|^2 \\
&\overset{(*)}{\le} -\frac{\gamma_u \tau}{2}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2
+ \frac{\gamma_u}{2n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1} \mathbb{E}_t\big\| \nabla_u F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) - \nabla_u F_i\big(u^{(t)}, v_i^{(t)}\big) \big\|^2 \\
&\le -\frac{\gamma_u \tau}{2}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2
+ \frac{\gamma_u}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1} \mathbb{E}_t\Big[ L_u^2 \big\| \tilde u_{i,k}^{(t)} - u^{(t)} \big\|^2 + L_{uv}^2 \big\| \tilde v_{i,k}^{(t)} - v_i^{(t)} \big\|^2 \Big] ,
\end{aligned}
\]
where the inequality (∗) follows from Jensen's inequality as
\[
\Bigg\| \frac{1}{n}\sum_{i=1}^{n}\nabla_u F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) - \nabla_u F\big(u^{(t)}, V^{(t)}\big) \Bigg\|^2
\le \frac{1}{n}\sum_{i=1}^{n}\big\| \nabla_u F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) - \nabla_u F_i\big(u^{(t)}, v_i^{(t)}\big) \big\|^2 .
\]
Claim 6 (Bounding T_{2,v}). Consider T_{2,v} as defined in (10). We have the bound,
\[
\begin{aligned}
\mathbb{E}_t[T_{2,v}]
&\le \frac{3 L_v (1+\chi^2)\gamma_v^2 \tau^2 m}{2 n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big)\big\|^2
+ \frac{L_v (1+\chi^2)\gamma_v^2 \tau^2 m \sigma_v^2}{2 n} \\
&\quad + \frac{3 L_v (1+\chi^2)\gamma_v^2 \tau m}{2 n^2}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ L_v^2 \big\| \tilde v_{i,k}^{(t)} - v_i^{(t)} \big\|^2 + \chi^2 L_u L_v \big\| \tilde u_{i,k}^{(t)} - u^{(t)} \big\|^2 \Big] .
\end{aligned}
\]
Proof. We start with
\[
\begin{aligned}
\mathbb{E}_t\big\| \tilde v_{i,\tau}^{(t)} - v_i^{(t)} \big\|^2
&= \gamma_v^2\, \mathbb{E}_t\Bigg\| \sum_{k=0}^{\tau-1} G_{i,v}\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}, z_{i,k}^{(t)}\big) \Bigg\|^2
\le \gamma_v^2 \tau \sum_{k=0}^{\tau-1}\mathbb{E}_t\big\| G_{i,v}\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}, z_{i,k}^{(t)}\big) \big\|^2 \\
&\le \gamma_v^2 \tau^2 \sigma_v^2 + \gamma_v^2 \tau \sum_{k=0}^{\tau-1}\mathbb{E}_t\big\| \nabla_v F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) \big\|^2 \\
&\le \gamma_v^2 \tau^2 \sigma_v^2 + 3\gamma_v^2 \tau^2 \big\| \nabla_v F_i\big(u^{(t)}, v_i^{(t)}\big) \big\|^2
+ 3\gamma_v^2 \tau \sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ L_v^2 \big\| \tilde v_{i,k}^{(t)} - v_i^{(t)} \big\|^2 + \chi^2 L_u L_v \big\| \tilde u_{i,k}^{(t)} - u^{(t)} \big\|^2 \Big] .
\end{aligned}
\]
Using (a) v_i^(t+1) = ṽ^(t)_{i,τ} for i ∈ S^(t), and, (b) S^(t) is independent from ũ^(t)_{i,k}, ṽ^(t)_{i,k}, we get,
\[
\mathbb{E}_t[T_{2,v}] = \frac{L_v(1+\chi^2) m}{2 n}\, \mathbb{E}_t\Bigg[ \frac{1}{m}\sum_{i \in S^{(t)}}\big\| \tilde v_{i,\tau}^{(t)} - v_i^{(t)} \big\|^2 \Bigg]
\le \frac{L_v(1+\chi^2) m}{2 n^2}\sum_{i=1}^{n}\mathbb{E}_t\big\| \tilde v_{i,\tau}^{(t)} - v_i^{(t)} \big\|^2 .
\]
Plugging in the bound on E_t‖ṽ^(t)_{i,τ} − v_i^(t)‖^2 completes the proof.
Claim 7 (Bounding T_{2,u}). Consider T_{2,u} as defined in (10). We have,
\[
\begin{aligned}
\mathbb{E}_t[T_{2,u}]
&\le \frac{L_u(1+\chi^2)\gamma_u^2 \tau^2}{2m}\Big( \sigma_u^2 + 12\delta^2\Big(1-\frac{m}{n}\Big) \Big)
+ 3 L_u(1+\chi^2)\gamma_u^2 \tau^2 (1+\rho^2)\big\| \nabla_u F\big(u^{(t)}, V^{(t)}\big) \big\|^2 \\
&\quad + \frac{3 L_u(1+\chi^2)\gamma_u^2 \tau}{2 n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ L_u^2 \big\| \tilde u_{i,k}^{(t)} - u^{(t)} \big\|^2 + \chi^2 L_u L_v \big\| \tilde v_{i,k}^{(t)} - v_i^{(t)} \big\|^2 \Big] .
\end{aligned}
\]
Proof. We proceed with the first two inequalities as in the proof of Claim 6 to get
\[
\mathbb{E}_t\big\| u^{(t+1)} - u^{(t)} \big\|^2
\le \frac{\gamma_u^2 \tau^2 \sigma_u^2}{m}
+ \gamma_u^2 \tau \sum_{k=0}^{\tau-1}
\underbrace{\mathbb{E}_t\Bigg\| \frac{1}{m}\sum_{i \in S^{(t)}} \nabla_u F_i\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}\big) \Bigg\|^2}_{=:\,T_{3,k}} .
\]
For T_{3,k}, (a) we add and subtract ∇_u F(u^(t), V^(t)) and ∇_u F_i(u^(t), ṽ^(t)_{i,k}), (b) invoke the squared triangle inequality, and, (c) use smoothness to get
\[
\begin{aligned}
T_{3,k} &\le 6\, \mathbb{E}_t\Bigg\| \frac{1}{m}\sum_{i \in S^{(t)}} \nabla_u F_i\big(u^{(t)}, v_i^{(t)}\big) - \nabla_u F\big(u^{(t)}, V^{(t)}\big) \Bigg\|^2
+ 6 \big\| \nabla_u F\big(u^{(t)}, V^{(t)}\big) \big\|^2 \\
&\quad + 3\, \mathbb{E}_t\Bigg[ \frac{1}{m}\sum_{i \in S^{(t)}}\Big( L_u^2 \big\| \tilde u_{i,k}^{(t)} - u^{(t)} \big\|^2 + \chi^2 L_u L_v \big\| \tilde v_{i,k}^{(t)} - v_i^{(t)} \big\|^2 \Big) \Bigg] .
\end{aligned}
\]
For the first term, we use the fact that S^(t) is obtained by sampling without replacement to apply Lemma 17 together with the gradient diversity assumption to get
\[
\mathbb{E}_t\Bigg\| \frac{1}{m}\sum_{i \in S^{(t)}} \nabla_u F_i\big(u^{(t)}, v_i^{(t)}\big) - \nabla_u F\big(u^{(t)}, V^{(t)}\big) \Bigg\|^2
\le \frac{1}{m}\Big(\frac{n-m}{n-1}\Big) \frac{1}{n}\sum_{i=1}^{n}\big\| \nabla_u F_i\big(u^{(t)}, v_i^{(t)}\big) - \nabla_u F\big(u^{(t)}, V^{(t)}\big) \big\|^2
\le \frac{1}{m}\Big(\frac{n-m}{n-1}\Big)\Big( \delta^2 + \rho^2 \big\| \nabla_u F\big(u^{(t)}, V^{(t)}\big) \big\|^2 \Big) .
\]
Therefore,
\[
T_{3,k} \le \frac{12\delta^2}{m}\Big(1-\frac{m}{n}\Big) + 6(1+\rho^2)\big\| \nabla_u F\big(u^{(t)}, V^{(t)}\big) \big\|^2
+ \frac{3}{n}\sum_{i=1}^{n}\mathbb{E}_t\Big[ L_u^2 \big\| \tilde u_{i,k}^{(t)} - u^{(t)} \big\|^2 + \chi^2 L_u L_v \big\| \tilde v_{i,k}^{(t)} - v_i^{(t)} \big\|^2 \Big] ,
\]
where we also used the independence between S^(t) and (ũ^(t)_{i,k}, ṽ^(t)_{i,k}). Plugging this into the expression for E_t‖u^(t+1) − u^(t)‖^2 completes the proof.
Lemma 8. Let F_i satisfy Assumptions 1′-3′, and consider the iterates
\[
u_{k+1} = u_k - \gamma_u G_{i,u}(u_k, v_k, z_k) , \quad\text{and}\quad v_{k+1} = v_k - \gamma_v G_{i,v}(u_k, v_k, z_k) ,
\]
for k = 0, · · · , τ − 1, where z_k ∼ D_i. Suppose the learning rates satisfy γ_u = c_u/(τ L_u) and γ_v = c_v/(τ L_v) with c_u, c_v ≤ 1/\sqrt{6 \max\{1, \chi^{-2}\}}. Further, define,
\[
A = \gamma_u L_u^2 + f \chi^2 \gamma_v L_u L_v , \quad\text{and}\quad B = f \gamma_v L_v^2 + \chi^2 \gamma_u L_u L_v ,
\]
where f ∈ (0, 1] is given. Then, we have the bound,
\[
\sum_{k=0}^{\tau-1} \mathbb{E}\Big[ A\|u_k - u_0\|^2 + B\|v_k - v_0\|^2 \Big]
\le 4\tau^2(\tau-1)\big( \gamma_u^2 \sigma_u^2 A + \gamma_v^2 \sigma_v^2 B \big)
+ 12\tau^2(\tau-1)\big( \gamma_u^2 A \|\nabla_u F_i(u_0, v_0)\|^2 + \gamma_v^2 B \|\nabla_v F_i(u_0, v_0)\|^2 \big) .
\]
Proof. If τ = 1, there is nothing to prove, so we assume τ > 1. Let ∆_k := A‖u_k − u_0‖^2 + B‖v_k − v_0‖^2 and denote by F_k the sigma-algebra generated by (u_k, v_k). Further, let E_k[·] = E[·|F_k]. We use the inequality 2αβ ≤ α^2/δ^2 + δ^2 β^2 for reals α, β, δ to get,
\[
\begin{aligned}
\mathbb{E}_k\|u_{k+1} - u_0\|^2
&\le \Big(1 + \frac{1}{\tau-1}\Big)\|u_k - u_0\|^2 + \tau\gamma_u^2\,\mathbb{E}_k\|G_{i,u}(u_k, v_k, z_k)\|^2 \\
&\le \Big(1 + \frac{1}{\tau-1}\Big)\|u_k - u_0\|^2 + \tau\gamma_u^2\sigma_u^2 + \tau\gamma_u^2\|\nabla_u F_i(u_k, v_k)\|^2 \\
&\le \Big(1 + \frac{1}{\tau-1}\Big)\|u_k - u_0\|^2 + \tau\gamma_u^2\sigma_u^2 + 3\tau\gamma_u^2\|\nabla_u F_i(u_0, v_0)\|^2
+ 3\tau\gamma_u^2 L_u^2\|u_k - u_0\|^2 + 3\tau\gamma_u^2 L_{uv}^2\|v_k - v_0\|^2 ,
\end{aligned}
\]
where the last inequality followed from the squared triangle inequality (from adding and subtracting ∇_u F_i(u_0, v_k) and ∇_u F_i(u_0, v_0)) followed by smoothness. Together with the analogous inequality for the v-update, we get,
\[
\mathbb{E}_k[\Delta_{k+1}] \le \Big(1 + \frac{1}{\tau-1}\Big)\Delta_k + A'\|u_k - u_0\|^2 + B'\|v_k - v_0\|^2 + C ,
\]
where we have
\[
A' = 3\tau\big(\gamma_u^2 L_u^2 A + \gamma_v^2 \chi^2 L_u L_v B\big), \qquad B' = 3\tau\big(\gamma_v^2 L_v^2 B + \gamma_u^2 \chi^2 L_u L_v A\big), \quad\text{and}
\]
\[
C = \tau\gamma_u^2\sigma_u^2 A + \tau\gamma_v^2\sigma_v^2 B + 3\tau\gamma_u^2 A\|\nabla_u F_i(u_0, v_0)\|^2 + 3\tau\gamma_v^2 B\|\nabla_v F_i(u_0, v_0)\|^2 .
\]
Next, we apply Lemma 20 to get that A' ≤ A/τ and B' ≤ B/τ under the assumed conditions on the learning rates; this allows us to write the right hand side completely in terms of ∆_k and unroll the recurrence. The intuition behind Lemma 20 is as follows. Ignoring the dependence on τ, L_u, L_v, χ for a moment, if γ_u and γ_v are both O(η), then A', B' are both O(η^3), while A and B are O(η). Thus, making η small enough should suffice to get A' ≤ O(A) and B' ≤ O(B). Concretely, Lemma 20 gives
\[
\mathbb{E}[\Delta_{k+1}] \le \Big(1 + \frac{2}{\tau-1}\Big)\mathbb{E}[\Delta_k] + C ,
\]
and unrolling this recurrence gives for k ≤ τ − 1
\[
\mathbb{E}[\Delta_k] \le \sum_{j=0}^{k-1}\Big(1 + \frac{2}{\tau-1}\Big)^{j} C
\le \frac{\tau-1}{2}\Big(1 + \frac{2}{\tau-1}\Big)^{k} C
\le \frac{\tau-1}{2}\Big(1 + \frac{2}{\tau-1}\Big)^{\tau-1} C
\le \frac{e^2}{2}(\tau-1)\, C ,
\]
where we used (1 + 1/α)^α ≤ e for all α > 0. Summing over k and using the numerical bound e^2 < 8 completes the proof.
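The unrolling step uses only the elementary fact (1 + 2/(τ−1))^{τ−1} ≤ e² < 8; a quick numerical check (ours) of this inequality:

```python
import math

for tau in [2, 3, 5, 10, 100, 10_000]:
    growth = (1 + 2 / (tau - 1)) ** (tau - 1)
    # (1 + x/k)^k increases to e^x, so the bound holds for every integer tau > 1.
    assert growth <= math.e ** 2 < 8
    print(f"tau={tau:>6}: (1 + 2/(tau-1))^(tau-1) = {growth:.4f}")
```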
Remark 9. We only invoked the partial gradient diversity assumption (Assumption 3) at iterates (u(t), V (t)); therefore, it suffices if the assumption only holds at iterates (u(t), V (t)) generated by FedSim, rather than at all (u, V ).
Algorithm 5 FedAlt: Alternating updates of shared and personalized parameters

Input: initial iterates u^(0), V^(0); number of communication rounds T; number of devices per round m; number of local updates τ_u, τ_v; local step sizes γ_u, γ_v.
 1: for t = 0, 1, · · · , T − 1 do
 2:   Sample m devices from [n] without replacement into S^(t)
 3:   for each selected device i ∈ S^(t) in parallel do
 4:     Initialize v^(t)_{i,0} = v^(t)_i
 5:     for k = 0, · · · , τ_v − 1 do                           ▷ Update personalized parameters
 6:       Sample data z^(t)_{i,k} ∼ D_i
 7:       v^(t)_{i,k+1} = v^(t)_{i,k} − γ_v G_{i,v}(u^(t), v^(t)_{i,k}, z^(t)_{i,k})
 8:     Update v^(t+1)_i = v^(t)_{i,τ_v}
 9:     Initialize u^(t)_{i,0} = u^(t)
10:     for k = 0, · · · , τ_u − 1 do                           ▷ Update shared parameters
11:       u^(t)_{i,k+1} = u^(t)_{i,k} − γ_u G_{i,u}(u^(t)_{i,k}, v^(t+1)_i, z^(t)_{i,k})
12:     Update u^(t+1)_i = u^(t)_{i,τ_u}
13:   Update u^(t+1) = (∑_{i∈S^(t)} α_i u^(t+1)_i) / (∑_{i∈S^(t)} α_i) at the server with secure aggregation
14: return u^(T), v^(T)_1, · · · , v^(T)_n
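Analogously to the FedSim sketch given after Algorithm 4, here is a minimal NumPy sketch of FedAlt under the same made-up quadratic device losses; the only difference is that the v-steps are performed first with u^(t) frozen, followed by the u-steps with the new v_i frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, tau_v, tau_u, d_u, d_v = 10, 4, 5, 5, 6, 2
gamma_u, gamma_v = 0.05, 0.05

# Same hypothetical quadratic losses as in the FedSim sketch.
data = [[(rng.standard_normal((3, d_u)), rng.standard_normal((3, d_v)),
          rng.standard_normal(3)) for _ in range(8)] for _ in range(n)]

def stoch_grads(u, v, batch):
    A, B, y = batch
    r = A @ u + B @ v - y
    return A.T @ r, B.T @ r

u = np.zeros(d_u)
V = [np.zeros(d_v) for _ in range(n)]

for t in range(20):
    S = rng.choice(n, size=m, replace=False)
    new_us = []
    for i in S:
        vi = V[i].copy()
        for k in range(tau_v):            # personal steps with u^(t) frozen
            _, gv = stoch_grads(u, vi, data[i][rng.integers(8)])
            vi -= gamma_v * gv
        V[i] = vi
        ui = u.copy()
        for k in range(tau_u):            # shared steps with the new v_i frozen
            gu, _ = stoch_grads(ui, vi, data[i][rng.integers(8)])
            ui -= gamma_u * gu
        new_us.append(ui)
    u = np.mean(new_us, axis=0)           # server aggregation
print("final ||u|| =", np.linalg.norm(u))
```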
A.3 CONVERGENCE ANALYSIS OF FEDALT
We give the full form of FedAlt in Algorithm 5 for the general case of unequal α_i's but focus on α_i = 1/n for the analysis. For convenience, we reiterate Theorem 2 below. Recall the definitions
\[
\Delta_u^{(t)} = \big\|\nabla_u F(u^{(t)}, V^{(t+1)})\big\|^2 , \quad\text{and}\quad \Delta_v^{(t)} = \frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2 .
\]
Theorem 2 (Convergence of FedAlt). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedAlt are chosen as γ_u = η/(L_u τ_u) and γ_v = η/(L_v τ_v), with
\[
\eta \le \min\left\{ \frac{1}{24(1+\rho^2)} ,\; \frac{m}{128\chi^2(n-m)} ,\; \sqrt{\frac{m}{\chi^2 n}} \right\} .
\]
Then, ignoring absolute constants, we have
\[
\frac{1}{T}\sum_{t=0}^{T-1}\left( \frac{1}{L_u}\mathbb{E}\big[\Delta_u^{(t)}\big] + \frac{m}{n L_v}\mathbb{E}\big[\Delta_v^{(t)}\big] \right)
\le \frac{\Delta F_0}{\eta T}
+ \eta\left( \frac{\sigma_u^2 + \delta^2(1-\frac{m}{n})}{m L_u} + \frac{\sigma_v^2}{L_v}\cdot\frac{m + \chi^2(n-m)}{n} \right)
+ \eta^2\left( \frac{\sigma_u^2+\delta^2}{L_u}(1-\tau_u^{-1}) + \frac{\sigma_v^2 m}{L_v n}(1-\tau_v^{-1}) + \frac{\chi^2\sigma_v^2}{L_v} \right) .
\]
Before proving the theorem, we have the corollary with optimized learning rates.

Corollary 10. Consider the setting of Theorem 2 and fix some ε > 0. Suppose we set γ_u = η/(τ L_u) and γ_v = η/(τ L_v) such that, ignoring absolute constants,
\[
\eta = \left( \frac{\sigma_v^2}{\varepsilon L_v}\Big(\frac{m}{n} + \chi^2(1-m/n)\Big) \right)^{-1}
\wedge \left( \frac{\sigma_u^2 + \delta^2(1-m/n)}{m L_u \varepsilon} \right)^{-1}
\wedge \left( \frac{\sigma_u^2 + \delta^2}{L_u \varepsilon}(1-\tau_u^{-1}) \right)^{-1/2}
\wedge \left( \frac{\sigma_v^2 m}{L_v n \varepsilon}(1-\tau_v^{-1}) \right)^{-1/2}
\wedge \frac{1}{1+\rho^2}
\wedge \frac{m}{\chi^2(n-m)}
\wedge \sqrt{\frac{m}{\chi^2 n}} .
\]
Then, we have,
\[
\frac{1}{T}\sum_{t=0}^{T-1}\left( \frac{1}{L_u}\mathbb{E}\big\|\nabla_u F(u^{(t)}, \tilde V^{(t)})\big\|^2 + \frac{m}{L_v n^2}\sum_{i=1}^{n}\mathbb{E}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2 \right) \le \varepsilon
\]
after T communication rounds, where, ignoring absolute constants,
\[
T \le \frac{\Delta F_0}{\varepsilon^2}\left( \frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{m L_u} + \frac{\sigma_v^2}{L_v}\Big(\frac{m}{n} + \chi^2\Big(1-\frac{m}{n}\Big)\Big) \right)
+ \frac{\Delta F_0}{\varepsilon^{3/2}}\left( \frac{\sigma_u + \delta}{\sqrt{L_u}}\sqrt{1-\tau_u^{-1}} + \frac{\sigma_v}{\sqrt{L_v}}\sqrt{1-\tau_v^{-1}} \right)
+ \frac{\Delta F_0}{\varepsilon}\left( 1 + \rho^2 + \chi^2\Big(\frac{n}{m}-1\Big) + \sqrt{\frac{\chi^2 n}{m}} \right) .
\]
Proof. We get the bound by balancing terms from the bound of Theorem 2. The choice of η ensures that all the O(η) and O(η2) terms are at most O(ε). Finally, the smallest number of communication rounds to make the left hand side of the bound of Theorem 2 smaller than ε is ∆F0/(ηε).
We are now ready to prove Theorem 2.
Proof of Theorem 2. The proof mainly applies the smoothness upper bound to write out a descent condition with suitably small noise terms. We start with some notation.
We introduce the notation ∆̃_u^{(t)} as the analogue of ∆_u^{(t)} with the virtual variable Ṽ^{(t+1)}:
\[
\tilde\Delta_u^{(t)} = \big\|\nabla_u F(u^{(t)}, \tilde V^{(t+1)})\big\|^2 .
\]
Notation. Let F^(t) denote the σ-algebra generated by (u^(t), V^(t)) and denote E_t[·] = E[·|F^(t)]. For all devices, including those not selected in each round, we define virtual sequences ũ^(t)_{i,k}, ṽ^(t)_{i,k} as the SGD updates in Algorithm 5 for all devices regardless of whether they are selected. For the selected devices i ∈ S^(t), we have v^(t)_{i,k} = ṽ^(t)_{i,k} and u^(t)_{i,k} = ũ^(t)_{i,k}. Note now that the random variables ũ^(t)_{i,k}, ṽ^(t)_{i,k} are independent of the device selection S^(t). Finally, we have that the updates for the selected devices i ∈ S^(t) are given by
\[
v_i^{(t+1)} = v_i^{(t)} - \gamma_v \sum_{k=0}^{\tau_v - 1} G_{i,v}\big( u^{(t)}, \tilde v_{i,k}^{(t)}, z_{i,k}^{(t)} \big) ,
\]
and the server update is given by
\[
u^{(t+1)} = u^{(t)} - \frac{\gamma_u}{m} \sum_{i \in S^{(t)}} \sum_{k=0}^{\tau_u - 1} G_{i,u}\big( \tilde u_{i,k}^{(t)}, \tilde v_{i,\tau_v}^{(t)}, z_{i,k}^{(t)} \big) .
\]
Proof Outline and the Challenge of Dependent Random Variables. We start with
\[
F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big)
= F\big(u^{(t)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big)
+ F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t+1)}\big) . \tag{14}
\]
The first line corresponds to the effect of the v-step and the second line to the u-step. The former is easy to handle with standard techniques that rely on the smoothness of F(u^(t), ·). The latter is more challenging. In particular, the smoothness bound for the u-step gives us
\[
F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t+1)}\big)
\le \Big\langle \nabla_u F\big(u^{(t)}, V^{(t+1)}\big),\, u^{(t+1)} - u^{(t)} \Big\rangle + \frac{L_u}{2}\big\| u^{(t+1)} - u^{(t)} \big\|^2 .
\]
The standard proofs of convergence of stochastic gradient methods rely on the fact that we can take an expectation w.r.t. the sampling S^(t) of devices for the first order term. However, both V^(t+1) and u^(t+1) depend on the sampling S^(t) of devices. Therefore, we cannot directly take an expectation with respect to the sampling of devices in S^(t).
Virtual Full Participation to Circumvent Dependent Random Variables. The crux of the proof lies in replacing V (t+1) in the analysis of the u-step with the virtual iterate Ṽ (t+1) so as to move all the dependence of the u-step on S(t) to the u(t+1) term. This allows us to take an expectation; it remains to carefully bound the resulting error terms.
Finally, we will arrive at the bound stated in Theorem 2.

1. What is the focus of the paper regarding collaborative federated learning?
2. What are the strengths and weaknesses of the proposed personalization schema?
3. How does the reviewer assess the theoretical analysis and empirical studies presented in the paper?
4. Are there any concerns or questions regarding the practical implementation and effectiveness of the proposed method?

Summary Of The Paper
To overcome statistical heterogeneity among data shards in collaborative federated learning, this paper proposes a novel personalization schema which requires only a smaller memory footprint on clients and is possibly less susceptible to catastrophic forgetting. The main idea is to split the trainable parameters into shared and personalized parameters where, unlike existing personalization schemas, only the shared model is exchanged with the server in communication rounds to be aggregated; thus partial personalization. In regression, this corresponds to learning the residual error of the shared model via the personalized model, and in the classification setting it corresponds to output averaging (unlike interpolation-based personalization methods that do parameter mixing). The authors propose two algorithms, FedSim (simultaneous updating of shared and personalized models locally) and FedAlt (alternating updating of shared and personalized models locally), to learn in this setting, theoretically analyze the convergence rates in non-convex settings, and conduct empirical studies on image classification and next-word prediction to evaluate the proposed methods.
Review
This paper studies an interesting question and the proposed idea looks interesting, with corroborating empirical results, but there are a few issues that prevent me from giving it a high score:
The key motivation for the paper is to utilize a small footprint on clients, but the proposal is somewhat misleading in this regard. I understand that the learned personalized model is significantly smaller than the shared model (e.g., 1-2% as observed in experiments for some applications) and requires a small footprint after deployment (inference stage), but during training the clients need enough memory for both models to participate in collaborative training. Also, I was left wondering how large the coupling between personalized and shared parameters, captured by χ, would be in the presence of heterogeneity and significant differences in model sizes.
While the proofs of the theoretical results look sound as far as I checked, the obtained rates are hard to interpret and poorly elaborated. For example, the gradient diversity assumption (Assumption 3) is only defined over the shared model. Also, the role of the gradient diversity term ρ, which appears in the tuning of the learning rate η, is ignored in the discussions (e.g., Table 1), even though it might dominate other terms if incorporated properly. So, it seems hard to put the obtained rates in the context of known results.
Partial personalization nearly matches full personalization and can sometimes outperform it. Table 3 shows the average test accuracy across all devices of different FL algorithms. We see that on the StackOverflow dataset, output layer personalization (25.05%) makes up nearly 90% of the gap between the non-personalized baseline (23.82%) and full personalization (25.21%). On EMNIST, adapter personalization exactly matches full personalization. Most surprisingly, on GLDv2, adapter personalization outperforms full personalization by 3.5pp (percentage points).
This success of adapter personalization can be explained partly by the nature of GLDv2. On average, the training data on each device contains 25 classes out of a possible 2028 while the testing data contains 10 classes not seen in its own training data. These unseen classes account for nearly 23% of all testing data. Personalizing the full model is susceptible to “forgetting” the original task (Kirkpatrick et al., 2017), making it harder to get these unseen classes right. Such catastrophic forgetting is worse when finetuning on a very small local dataset, as we often have in FL. On the other hand, personalizing the adapters does not suffer as much from this issue (Rebuffi et al., 2017).
Partial personalization only requires a fraction of the parameters to be personalized. Figure 3 shows that the number of personalized parameters required to compete with full model personalization is rather small. On StackOverflow, personalizing 1.2% of the parameters with adapters captures 72% of the accuracy boost from personalizing all 5.7M parameters; this can be improved to nearly 90% by personalizing 14% of the parameters (output layer). Likewise, we match full personalization on EMNIST and exceed it on GLDv2 with adapters, personalizing 11.5-12.5% of parameters.
The best personalized architecture is model and task dependent. Table 3 shows that personalizing the final transformer layer (denoted as “Output Layer”) achieves the best performance for StackOverflow, while the residual adapter achieves the best performance for GLDv2 and EMNIST. This shows that the approach of personalizing a fixed model part, as in several past works, is suboptimal. Our framework allows for the use of domain knowledge to determine customized personalization.
Finetuning is competitive with other full personalization methods. Full finetuning matches the performance of pFedMe and Ditto on StackOverflow and EMNIST. On GLDv2, however, pFedMe outperforms finetuning by 0.07pp, but is still 3.5pp worse than adapter personalization.
FedAlt outperforms FedSim for partial personalization. If the optimization problem (3) were convex, we would expect similar performance from FedAlt and FedSim. However, with nonconvex optimization problems such as the ones considered here, the choice of the optimization algorithm often affects the quality of the solution found. We see from Table 4 that FedAlt is almost always better than FedSim by a small margin, e.g., 0.08pp for StackOverflow/Adapter and 0.3pp for GLDv2/Input Layer. FedSim in turn yields a higher accuracy than simply finetuning the personalized part of the model, by a large margin, e.g., 0.12pp for StackOverflow/Output Layer and 2.55pp for GLDv2/Adapter.
4.2 EFFECTS OF PERSONALIZATION ON PER-DEVICE GENERALIZATION
Personalization hurts the test accuracy on some devices. Figure 4 shows the change in training and test accuracy of each device, compared with a non-personalized model trained by FedAvg. We see that personalization leads to an improvement in training accuracy across all devices, but a reduction in test accuracy on some of the devices over the non-personalized baseline. In particular, devices whose testing performance is hurt by personalization are mostly on the left side of the plot, meaning that they have relatively small number of training samples. On the other hand, many devices with the most improved test accuracy also appear on the left side, signaling the benefit of personalization. Therefore, there is a large variation of results for devices with few samples.
Additional experiments (see Appendix C) show that using ℓ2 regularization, as in (2), or weight decay does not mitigate this issue. In particular, increasing the regularization strength (less personalization) can reduce the spread of per-device accuracy, but it only leads to a worse average accuracy that is close to that of a common model. Other simple strategies such as dropout also do not fix this issue.
An ideal personalized method would boost performance on most of the devices without causing a reduction in (test) accuracy on any device. Realizing this goal calls for a sound statistical analysis for personalized FL and may require sophisticated methods for local performance diagnosis and more structured regularization. These are very promising directions for future research.
5 DISCUSSION
In addition to a much smaller memory footprint than full model personalization and being less susceptible to catastrophic forgetting, partial model personalization has other advantages. For example, it reduces the amount of communication between the server and the devices because only the shared parameters are transmitted. While the communication saving may not be significant (especially when the personal parameters are only a small fraction of the full model), communicating only the shared parameters may have significant implications for privacy. Intuitively, it can be harder to infer private information from partial model information. This is especially the case if the more sensitive features of the data are processed through personal components of the model that are kept local at the devices. For example, we speculate that less noise needs to be added to the communicated parameters in order to satisfy differential privacy requirements (Abadi et al., 2016). This is a very promising direction for future research.
REPRODUCIBILITY STATEMENT
For theoretical results, we state and discuss the assumptions in Appendix A. The full proofs of all theoretical statements are also given there.
For our numerical results, we take multiple steps for reproducibility. First, we run each numerical experiment for five random seeds, and report both the mean and standard deviation over these runs. Second, we only use publicly available datasets and report the preprocessing at length in Appendix B. Third, we give the full list of hyperparameters used in our experiments in Table 8 in Appendix B. Finally, we will publicly release the code to reproduce our experimental results.
ETHICS STATEMENT
The proposed framework for partial model personalization is immediately applicable to a range of practical federated learning applications on edge devices, such as text prediction and speech recognition. One of the key considerations of federated learning is privacy. Partial model personalization maintains all the privacy benefits of current non-personalized federated learning systems. Indeed, our approach is compatible with techniques to enhance privacy such as differential privacy and secure aggregation. We also speculate that partial personalization has the potential to further reduce the privacy footprint; an investigation of this subject is beyond the scope of this work and is an interesting direction for future work.
On the flip side, we also observed in our experiments that personalization (whether full or partial) leads to a reduction in test performance on some of the devices. This has important implications for fairness, and it calls for further research into the statistical aspects of personalization, performance diagnostics, as well as more nuanced definitions of fairness in federated learning.
Appendix
Table of Contents
A Convergence Analysis: Full Proofs
A.1 Review of Setup and Assumptions
A.2 Convergence Analysis of FedSim
A.3 Convergence Analysis of FedAlt
A.4 Technical Lemmas
B Experiments: Detailed Setup and Hyperparameters
B.1 Datasets, Tasks and Models
B.2 Experimental Pipeline and Baselines
B.3 Hyperparameters and Evaluation Details
C Experiments: Additional Results
C.1 Ablation: Final Finetuning for FedAlt and FedSim
C.2 Effect of Personalization on Per-Device Generalization
C.3 Partial Personalization for Stateless Devices
A CONVERGENCE ANALYSIS: FULL PROOFS
We give the full convergence proofs here. The outline of this section is:
• §A.1: Review of setup and assumptions;
• §A.2: Convergence analysis of FedSim and the full proof of Theorem 1;
• §A.3: Convergence analysis of FedAlt and the full proof of Theorem 2;
• §A.4: Technical lemmas used in the analysis.
A.1 REVIEW OF SETUP AND ASSUMPTIONS
We consider a federated learning system with n devices. Let the loss function on device i be Fi(u, vi), where u ∈ Rd0 denotes the shared parameters across all devices and vi ∈ Rdi denotes the personal parameters at device i. We aim to minimize the function
$$F(u, V) \;:=\; \frac{1}{n}\sum_{i=1}^{n} F_i(u, v_i), \qquad (7)$$
where V = (v1, · · · , vn) is a concatenation of all the personalized parameters. This is a special case of (3) with equal per-device weights, i.e., αi = 1/n. Recall that we assume that F is bounded from below by F⋆.
For convenience, we reiterate Assumptions 1, 2 and 3 from the main paper as Assumptions 1′, 2′ and 3′ below, with some additional comments and discussion.

Assumption 1′ (Smoothness). For each device i = 1, . . . , n, the objective Fi is smooth, i.e., it is continuously differentiable and,
(a) u ↦ ∇uFi(u, vi) is Lu-Lipschitz for all vi,
(b) vi ↦ ∇vFi(u, vi) is Lv-Lipschitz for all u,
(c) vi ↦ ∇uFi(u, vi) is Luv-Lipschitz for all u, and,
(d) u ↦ ∇vFi(u, vi) is Lvu-Lipschitz for all vi.
Further, we assume for some χ > 0 that
$$\max\{L_{uv}, L_{vu}\} \;\le\; \chi\,\sqrt{L_u L_v}.$$
The smoothness assumption is a standard one. We can assume without loss of generality that the cross-Lipschitz coefficients Luv, Lvu are equal. Indeed, if Fi is twice continuously differentiable, we can show that Luv and Lvu are both equal to the operator norm ‖∇²uv Fi(u, vi)‖op of the mixed second-derivative matrix. Further, χ denotes the extent to which u impacts the gradient of vi and vice versa.
Our next assumption is about the variance of the stochastic gradients, and it is standard in the literature. Compared to the main paper, we adopt a more precise notation for stochastic gradients.

Assumption 2′ (Bounded Variance). Let Di denote a probability distribution over the data space Z on device i. There exist functions Gi,u and Gi,v which are unbiased estimates of ∇uFi and ∇vFi respectively. That is, for all u, vi:
$$\mathbb{E}_{z\sim D_i}\big[G_{i,u}(u, v_i, z)\big] = \nabla_u F_i(u, v_i), \quad\text{and}\quad \mathbb{E}_{z\sim D_i}\big[G_{i,v}(u, v_i, z)\big] = \nabla_v F_i(u, v_i).$$
Furthermore, the variance of these estimators is at most σ²u and σ²v respectively. That is,
$$\mathbb{E}_{z\sim D_i}\big\|G_{i,u}(u, v_i, z) - \nabla_u F_i(u, v_i)\big\|^2 \le \sigma_u^2, \qquad \mathbb{E}_{z\sim D_i}\big\|G_{i,v}(u, v_i, z) - \nabla_v F_i(u, v_i)\big\|^2 \le \sigma_v^2.$$
In practice, one usually has Gi,u(u, vi, z) = ∇u fi((u, vi), z), which is the gradient of the loss on a datapoint z ∼ Di under the model (u, vi), and similarly for Gi,v. Finally, we make a gradient diversity assumption.

Assumption 3′ (Partial Gradient Diversity). There exist δ ≥ 0 and ρ ≥ 0 such that for all u and V,
$$\frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_u F_i(u, v_i) - \nabla_u F(u, V)\big\|^2 \;\le\; \delta^2 + \rho^2\big\|\nabla_u F(u, V)\big\|^2. \qquad (8)$$
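For concreteness, the quantities appearing in Assumption 3′ can be computed from per-device gradients; the NumPy sketch below (with made-up gradient vectors) evaluates the left-hand side of (8) and the squared norm on its right-hand side, which is one way to sanity-check candidate values of δ and ρ.

```python
import numpy as np

def diversity_terms(per_device_grads):
    """Return the left-hand side of (8) and ||nabla_u F(u, V)||^2 for a batch
    of per-device gradients with respect to the shared parameters u."""
    grads = np.asarray(per_device_grads, dtype=float)   # shape (n, d0)
    avg_grad = grads.mean(axis=0)                       # nabla_u F(u, V)
    lhs = float(np.mean(np.sum((grads - avg_grad) ** 2, axis=1)))
    full_sq = float(np.sum(avg_grad ** 2))
    return lhs, full_sq

# Made-up gradients for n = 5 devices and d0 = 3 shared parameters.
rng = np.random.default_rng(0)
lhs, full_sq = diversity_terms(rng.normal(size=(5, 3)))
# Any pair (delta^2, rho^2) with delta^2 + rho^2 * full_sq >= lhs satisfies (8) at this point.
print(lhs, full_sq)
```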
Algorithm 4 FedSim: Simultaneous update of shared and personal parameters
Input: Initial iterates u^(0), V^(0); number of communication rounds T; number of devices per round m; number of local updates τ; local step sizes γu, γv.
1: for t = 0, 1, · · · , T − 1 do
2:   Sample m devices from [n] without replacement into S^(t)
3:   for each selected device i ∈ S^(t) in parallel do
4:     Initialize v_{i,0}^{(t)} = v_i^{(t)} and u_{i,0}^{(t)} = u^{(t)}
5:     for k = 0, · · · , τ − 1 do   ▷ Update all parameters jointly
6:       Sample data z_{i,k}^{(t)} ∼ Di
7:       v_{i,k+1}^{(t)} = v_{i,k}^{(t)} − γv Gi,v(u_{i,k}^{(t)}, v_{i,k}^{(t)}, z_{i,k}^{(t)})
8:       u_{i,k+1}^{(t)} = u_{i,k}^{(t)} − γu Gi,u(u_{i,k}^{(t)}, v_{i,k}^{(t)}, z_{i,k}^{(t)})
9:     Update v_i^{(t+1)} = v_{i,τ}^{(t)} and u_i^{(t+1)} = u_{i,τ}^{(t)}
10:  Update u^{(t+1)} = (Σ_{i∈S^(t)} αi u_i^{(t+1)}) / (Σ_{i∈S^(t)} αi) at the server with secure aggregation
11: return u^{(T)}, v_1^{(T)}, · · · , v_n^{(T)}
This assumption is analogous to the bounded variance assumption (Assumption 2′), but with the stochasticity coming from the sampling of devices. It characterizes the extent to which local steps on one device help or hurt convergence globally. Similar gradient diversity assumptions are often used for analyzing non-personalized federated learning (Koloskova et al., 2020; Karimireddy et al., 2020). Finally, it suffices for the partial gradient diversity assumption to only hold at the iterates (u^(t), V^(t)) generated by either FedSim or FedAlt.
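For readers who prefer code, here is a minimal NumPy sketch of the device-side pass of Algorithm 4; the gradient oracles and the data sampler are placeholder callables, so this is an illustration of the update rule rather than the implementation used in the experiments.

```python
import numpy as np

def local_sim(u, v_i, sample_batch, grad_u, grad_v, tau, gamma_u, gamma_v):
    """Device-side pass of FedSim (Algorithm 4, lines 4-9): tau joint SGD steps
    on the shared parameters u and the personal parameters v_i.  grad_u and
    grad_v play the role of the stochastic oracles G_{i,u}, G_{i,v}."""
    u_k = np.array(u, dtype=float)
    v_k = np.array(v_i, dtype=float)
    for _ in range(tau):
        z = sample_batch()                  # z ~ D_i
        g_u = grad_u(u_k, v_k, z)           # both gradients are evaluated
        g_v = grad_v(u_k, v_k, z)           # at the same point (u_k, v_k)
        v_k = v_k - gamma_v * g_v
        u_k = u_k - gamma_u * g_u
    return u_k, v_k                         # u_k is sent to the server; v_k stays local
```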
A.2 CONVERGENCE ANALYSIS OF FEDSIM
We give the full form of FedSim in Algorithm 4 for the general case of unequal αi's, but focus on αi = 1/n for the analysis. In order to simplify presentation, we denote V^(t) = (v_1^{(t)}, . . . , v_n^{(t)}) and define the following shorthand for gradient terms:
$$\Delta_u^{(t)} = \big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2, \qquad \Delta_v^{(t)} = \frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2.$$
For convenience, we restate Theorem 1 from the main paper.

Theorem 1 (Convergence of FedSim). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedSim are chosen as γu = η/(Lu τ) and γv = η/(Lv τ) with
$$\eta \;\le\; \min\left\{\frac{1}{12(1+\chi^2)(1+\rho^2)},\;\; \frac{\sqrt{m/n}}{196\,(1-\tau^{-1})(1+\chi^2)(1+\rho^2)}\right\}.$$
Then, ignoring absolute constants, we have
$$\frac{1}{T}\sum_{t=0}^{T-1}\left(\frac{1}{L_u}\mathbb{E}\big[\Delta_u^{(t)}\big] + \frac{m}{nL_v}\mathbb{E}\big[\Delta_v^{(t)}\big]\right) \;\le\; \frac{\Delta F_0}{\eta T} + \eta(1+\chi^2)\left(\frac{\sigma_u^2+\delta^2(1-\frac{m}{n})}{mL_u} + \frac{m\sigma_v^2}{nL_v}\right) + \eta^2(1-\tau^{-1})(1+\chi^2)\left(\frac{\sigma_u^2+\delta^2}{L_u} + \frac{\sigma_v^2}{L_v}\right). \qquad (6)$$
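For intuition, the right-hand side of (6) can be evaluated numerically for a given η; the snippet below does this for hypothetical problem constants (all values are made up, not taken from the paper or its experiments) and scans a grid of η to expose the usual trade-off between the three terms.

```python
import numpy as np

# Hypothetical problem constants (illustrative only).
L_u, L_v = 10.0, 10.0
sigma_u2, sigma_v2, delta2 = 1.0, 1.0, 4.0
chi2, rho2 = 0.5, 1.0
m, n, tau, T = 10, 100, 5, 1000
dF0 = 50.0

def rhs_of_bound_6(eta):
    """Right-hand side of (6) for a given eta (absolute constants ignored)."""
    t1 = dF0 / (eta * T)
    t2 = eta * (1 + chi2) * ((sigma_u2 + delta2 * (1 - m / n)) / (m * L_u)
                             + m * sigma_v2 / (n * L_v))
    t3 = eta ** 2 * (1 - 1 / tau) * (1 + chi2) * ((sigma_u2 + delta2) / L_u
                                                  + sigma_v2 / L_v)
    return t1 + t2 + t3

# Scan a grid of eta; in practice eta must also satisfy the cap in Theorem 1.
etas = np.logspace(-3, 0, 50)
best_eta = min(etas, key=rhs_of_bound_6)
print(best_eta, rhs_of_bound_6(best_eta))
```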
Before proving the theorem, we give the following corollary with optimized learning rates.

Corollary 3. Consider the setting of Theorem 1 and let ε > 0 be given. Suppose we set the learning rates γu = η/(τLu) and γv = η/(τLv), where (ignoring absolute constants)
$$\eta \;=\; \frac{\varepsilon}{(1+\chi^2)\left(\frac{\sigma_u^2 + \delta^2(1-\frac{m}{n})}{mL_u} + \frac{m\sigma_v^2}{nL_v}\right)} \;\wedge\; \left(\frac{\varepsilon}{(1-\tau^{-1})(1+\chi^2)\left(\frac{\delta^2}{L_u}\vee\frac{\sigma_u^2}{L_u}\vee\frac{\sigma_v^2}{L_v}\right)}\right)^{1/2} \;\wedge\; \frac{1}{(1+\chi^2)(1+\rho^2)} \;\wedge\; \left(\frac{m/n}{(1-\tau^{-1})(1+\rho^2)(1+\chi^2)}\right)^{1/2}.$$
Then we have
$$\frac{1}{T}\sum_{t=0}^{T-1}\left(\frac{1}{L_u}\mathbb{E}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 + \frac{m}{L_v n^2}\sum_{i=1}^{n}\mathbb{E}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2\right) \;\le\; \varepsilon$$
after T communication rounds, where, ignoring absolute constants,
$$T \;\le\; \frac{\Delta F_0(1+\chi^2)}{\varepsilon^2}\left(\frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{mL_u} + \frac{m\sigma_v^2}{nL_v}\right) \;+\; \frac{\Delta F_0\sqrt{(1-\tau^{-1})(1+\chi^2)}}{\varepsilon^{3/2}}\left(\frac{\sigma_u+\delta}{\sqrt{L_u}} + \frac{\sigma_v}{\sqrt{L_v}}\right) \;+\; \frac{\Delta F_0}{\varepsilon}(1+\chi^2)(1+\rho^2)\left(1 + \sqrt{\frac{(1-\tau^{-1})\,n}{m}}\right).$$
Proof. The choice of the constant η ensures that each of the constant terms in the bound of Theorem 1 is O(ε). The final rate is then O(∆F0/(ηε)); plugging in the value of η completes the proof.
We now prove Theorem 1.
Proof of Theorem 1. The proof mainly applies the smoothness upper bound to write out a descent condition with suitably small noise terms. We start with some notation.

Notation. Let F^(t) denote the σ-algebra generated by (u^(t), V^(t)) and write Et[·] = E[· | F^(t)]. For all devices, including those not selected in each round, we define virtual sequences ũ_{i,k}^{(t)}, ṽ_{i,k}^{(t)} as the SGD updates of Algorithm 4 carried out on every device regardless of whether it is selected. For the selected devices i ∈ S^(t), we have (u_{i,k}^{(t)}, v_{i,k}^{(t)}) = (ũ_{i,k}^{(t)}, ṽ_{i,k}^{(t)}). Note that the random variables ũ_{i,k}^{(t)}, ṽ_{i,k}^{(t)} are independent of the device selection S^(t). The updates for the devices i ∈ S^(t) are given by
$$v_i^{(t+1)} = v_i^{(t)} - \gamma_v\sum_{k=0}^{\tau-1} G_{i,v}\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}, z_{i,k}^{(t)}\big),$$
and the server update is given by
$$u^{(t+1)} = u^{(t)} - \frac{\gamma_u}{m}\sum_{i\in S^{(t)}}\sum_{k=0}^{\tau-1} G_{i,u}\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,k}^{(t)}, z_{i,k}^{(t)}\big). \qquad (9)$$
Proof Outline. We use the smoothness of Fi, more precisely Lemma 16, to obtain
$$F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big) \;\le\; \underbrace{\big\langle \nabla_u F(u^{(t)}, V^{(t)}),\, u^{(t+1)} - u^{(t)}\big\rangle}_{T_{1,u}} \;+\; \underbrace{\frac{1}{n}\sum_{i=1}^{n}\big\langle \nabla_v F_i(u^{(t)}, v_i^{(t)}),\, v_i^{(t+1)} - v_i^{(t)}\big\rangle}_{T_{1,v}} \;+\; \underbrace{\frac{L_u(1+\chi^2)}{2}\big\|u^{(t+1)} - u^{(t)}\big\|^2}_{T_{2,u}} \;+\; \underbrace{\frac{1}{n}\sum_{i=1}^{n}\frac{L_v(1+\chi^2)}{2}\big\|v_i^{(t+1)} - v_i^{(t)}\big\|^2}_{T_{2,v}}. \qquad (10)$$
Our goal will be to bound each of these terms to get a descent condition from each step of the form
$$\mathbb{E}_t\Big[F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big)\Big] \;\le\; -\frac{\gamma_u\tau}{8}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 - \frac{\gamma_v\tau m}{8n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2 + O(\gamma_u^2 + \gamma_v^2),$$
where the O(γu² + γv²) terms are controlled using the bounded variance and gradient diversity assumptions. Telescoping this descent condition gives the final bound.
Main Proof. Towards this end, we prove non-asymptotic bounds on each of the terms T1,v, T1,u, T2,v and T2,u in Claims 4 to 7 respectively. We then invoke them to get the bound
$$\begin{aligned}
\mathbb{E}_t\Big[F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big)\Big] \;\le\;& -\frac{\gamma_u\tau}{4}\Delta_u^{(t)} - \frac{\gamma_v\tau m}{4n}\Delta_v^{(t)} \\
&+ \frac{L_u(1+\chi^2)\gamma_u^2\tau^2}{2}\left(\sigma_u^2 + \frac{12\delta^2}{m}\big(1-m/n\big)\right) + \frac{L_v(1+\chi^2)\gamma_v^2\tau^2\sigma_v^2\, m}{2n} \\
&+ \frac{2}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\|\tilde u_{i,k}^{(t)} - u^{(t)}\big\|^2\Big(L_u^2\gamma_u + \tfrac{m}{n}\chi^2 L_u L_v\gamma_v\Big) \\
&+ \frac{2}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\|\tilde v_{i,k}^{(t)} - v_i^{(t)}\big\|^2\Big(\tfrac{m}{n}L_v^2\gamma_v + \chi^2 L_u L_v\gamma_u\Big). \qquad (11)
\end{aligned}$$
Note that we simplified some of the constants appearing on the gradient norm terms using
$$\gamma_u \le \big(12 L_u(1+\chi^2)(1+\rho^2)\tau\big)^{-1} \quad\text{and}\quad \gamma_v \le \big(6 L_v(1+\chi^2)\tau\big)^{-1}.$$
Our next step is to bound the last two lines of (11) with Lemma 8 and invoke the gradient diversity assumption (Assumption 3′) as
$$\frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_u F_i(u^{(t)}, v_i^{(t)})\big\|^2 \;\le\; \delta^2 + (1+\rho^2)\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2.$$
This gives, after plugging in the learning rates and further simplifying the constants,
$$\mathbb{E}_t\Big[F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big)\Big] \;\le\; -\frac{\eta\,\Delta_u^{(t)}}{8L_u} - \frac{\eta\, m\,\Delta_v^{(t)}}{8L_v n} + \eta^2(1+\chi^2)\left(\frac{\sigma_u^2}{2L_u} + \frac{m\sigma_v^2}{nL_v} + \frac{6\delta^2}{L_u m}\Big(1-\frac{m}{n}\Big)\right) + \eta^3(1+\chi^2)(1-\tau^{-1})\left(\frac{24\delta^2}{L_u} + \frac{4\sigma_u^2}{L_u} + \frac{4\sigma_v^2}{L_v}\right).$$
Taking full expectations, telescoping the series over t = 0, · · · , T − 1, and rearranging the resulting terms gives the desired bound in Theorem 1.
Claim 4 (Bounding T1,v). Let T1,v be defined as in (10). We have
$$\mathbb{E}_t[T_{1,v}] \;\le\; -\frac{\gamma_v\tau m}{2n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2 \;+\; \frac{\gamma_v m}{n^2}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[\chi^2 L_u L_v\big\|\tilde u_{i,k}^{(t)} - u^{(t)}\big\|^2 + L_v^2\big\|\tilde v_{i,k}^{(t)} - v_i^{(t)}\big\|^2\Big].$$
Proof. Define T1,v,i to be contribution of the ith term to T1,v. For i /∈ St, we have that T1,v,i = 0, since v(t+1)i = v (t) i . On the other hand, for i ∈ S(t), we use the unbiasedness of the gradient estimator
Gi,v and the independence of z (t) i,k from u (t) i,k, v (t) i,k to get
Et [T1,v,i] = −γv τ−1∑ k=0 Et 〈 ∇vFi ( u(t), v (t) i ) ,∇vFi ( u (t) i,k, v (t) i,k )〉 = −γv
τ−1∑ k=0 Et 〈 ∇vFi ( u(t), v (t) i ) ,∇vFi ( ũ (t) i,k, ṽ (t) i,k )〉 =− γvτ
∥∥∥∇vFi (u(t), v(t)i )∥∥∥2 − γv
τ−1∑ k=0 Et 〈 ∇vFi ( u(t), v (t) i ) ,∇vFi ( ũ (t) i,k, ṽ (t) i,k ) −∇vFi ( u(t), v (t) i )〉 ≤ −γvτ
2 ∥∥∥∇vFi (u(t), v(t)i )∥∥∥2 + γv2 τ−1∑ k=0 Et ∥∥∥∇vFi (ũ(t)i,k, ṽ(t)i,k)−∇vFi (u(t), v(t)i )∥∥∥2 . (12) For the second term, we add and subtract∇vFi ( u(t), ṽ (t) i,k ) and use smoothness to get
∥∥∥∇vFi (ũ(t)i,k, ṽ(t)i,k)−∇vFi (u(t), v(t)i )∥∥∥2 ≤ 2χ2LuLv∥∥∥ũ(t)i,k − u(t)∥∥∥2 + 2L2v∥∥∥ṽ(t)i,k − v(t)i ∥∥∥2 . (13)
Since the right hand side of this bound is independent of St, we get,
Et[T1,v] = m
n Et 1 m ∑ i∈S(t) T1,v,i = m n2 n∑ i=1 Et[T1,v,i] ,
and plugging in (12) and (13) completes the proof.
Claim 5 (Bounding T1,u). Consider T1,u defined in (10). We have the bound
$$\mathbb{E}_t[T_{1,u}] \;\le\; -\frac{\gamma_u\tau}{2}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 \;+\; \frac{\gamma_u}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[L_u^2\big\|\tilde u_{i,k}^{(t)} - u^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde v_{i,k}^{(t)} - v_i^{(t)}\big\|^2\Big].$$
Proof. Due to the independence of S(t) from ũ(t)i,k, ṽ (t) i,k, we have,
Et
[ u(t+1) − u(t) ] = −γuEt 1 m ∑ i∈S(t) τ−1∑ k=0 ∇uFi ( u (t) i,k, v (t) i,k ) = −γuEt 1 m ∑ i∈S(t) τ−1∑ k=0 ∇uFi ( ũ (t) i,k, ṽ (t) i,k
) = −γu
n n∑ i=1 τ−1∑ k=0 Et [ ∇uFi ( ũ (t) i,k, ṽ (t) i,k )] ,
where the last equality took an expectation over S(t), which is independent of ũ(t)i,k, ṽ (t) i,k. Now, using the same sequence of arguments as Claim 4, we have,
Et 〈 ∇uF ( u(t), V (t) ) , u(t+1) − u(t) 〉 = −γu
τ−1∑ k=0 Et
〈 ∇uF ( u(t), V (t) ) , 1
n n∑ i=1 ∇uFi ( ũ (t) i,k, ṽ (t) i,k
)〉
≤ −γuτ 2 ∥∥∥∇uF (u(t), V (t))∥∥∥2 + γu 2 τ−1∑ k=0 Et ∥∥∥∥∥ 1n n∑ i=1 ∇uFi ( ũ (t) i,k, ṽ (t) i,k ) −∇uF ( u(t), V (t) )∥∥∥∥∥ 2
(∗) ≤ −γuτ
2 ∥∥∥∇uF (u(t), V (t))∥∥∥2 + γu 2n n∑ i=1 τ−1∑ k=0 Et ∥∥∥∇uFi (ũ(t)i,k, ṽ(t)i,k)−∇uFi (u(t), v(t)i )∥∥∥2 ≤ −γuτ
2 ∥∥∥∇uF (u(t), V (t))∥∥∥2 + γu n n∑ i=1 τ−1∑ k=0 Et [ L2u ∥∥∥ũ(t)i,k − u(t)∥∥∥2 + L2uv∥∥∥ṽ(t)i,k − v(t)i ∥∥∥2] , where the inequality (∗) follows from Jensen’s inequality as∥∥∥∥∥ 1n n∑ i=1 ∇uFi ( ũ (t) i,k, ṽ (t) i,k ) −∇uF ( u(t), V (t) )∥∥∥∥∥ 2 ≤ 1 n n∑ i=1 ∥∥∥∇uFi (ũ(t)i,k, ṽ(t)i,k)−∇uFi (u(t)i,k, v(t))∥∥∥2 .
Claim 6 (Bounding T2,v). Consider T2,v as defined in (10). We have the bound
$$\mathbb{E}_t[T_{2,v}] \;\le\; \frac{3L_v(1+\chi^2)\gamma_v^2\tau^2 m}{2n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2 \;+\; \frac{L_v(1+\chi^2)\gamma_v^2\tau^2 m\,\sigma_v^2}{2n} \;+\; \frac{3L_v(1+\chi^2)\gamma_v^2\tau m}{2n^2}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[L_v^2\big\|\tilde v_{i,k}^{(t)} - v_i^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde u_{i,k}^{(t)} - u^{(t)}\big\|^2\Big].$$
Proof. We start with
Et ∥∥∥ṽ(t)k,τ − v(t)∥∥∥2 = γ2vEt ∥∥∥∥∥ τ−1∑ k=0 Gi,v ( ũ (t) i,k, ṽ (t) i,k, z (t) i,k )∥∥∥∥∥ 2
≤ γ2vτ τ−1∑ k=0 Et ∥∥∥Gi,v (ũ(t)i,k, ṽ(t)i,k, z(t)i,k)∥∥∥2 ≤ γ2vτ2σ2v + γ2vτ
τ−1∑ k=0 Et ∥∥∥∇vFi (ũ(t)i,k, ṽ(t)i,k)∥∥∥2 ≤ γ2vτ2σ2v + 3γ2vτ2
∥∥∥∇vFi (u(t), v(t)i )∥∥∥2 + 3γ2vτ
τ−1∑ k=0 Et [ L2v ∥∥∥ṽ(t)i,k − v(t)i ∥∥∥2 + χ2LuLv∥∥∥ũ(t)i,k − u(t)∥∥∥2] . Using (a) v(t+1)i = ṽ (t) i,τ for i ∈ S(t), and, (b) S(t) is independent from ũ (t) i,k, ṽ (t) i,k, we get,
Et[T2,v] = Lv(1 + χ
2)m
2n Et 1 m ∑ i∈S(t) ∥∥∥ṽ(t)i,τ − v(t)i ∥∥∥2
≤ Lv(1 + χ 2)m
2n2
n∑ i=1 Et ∥∥∥ṽ(t)i,τ − v(t)i ∥∥∥2 Plugging in the bound Et ∥∥∥ṽ(t)i,τ − v(t)∥∥∥2 completes the proof.
Claim 7 (Bounding T2,u). Consider T2,u as defined in (10). We have
$$\mathbb{E}_t[T_{2,u}] \;\le\; \frac{L_u(1+\chi^2)\gamma_u^2\tau^2}{2m}\Big(\sigma_u^2 + 12\delta^2\big(1-\tfrac{m}{n}\big)\Big) \;+\; 3L_u(1+\chi^2)\gamma_u^2\tau^2(1+\rho^2)\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 \;+\; \frac{3L_u(1+\chi^2)\gamma_u^2\tau}{2n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[L_u^2\big\|\tilde u_{i,k}^{(t)} - u^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde v_{i,k}^{(t)} - v_i^{(t)}\big\|^2\Big].$$
Proof. We proceed with the first two inequalities as in the proof of Claim 6 to get
Et ∥∥∥u(t+1) − u(t)∥∥∥2 ≤ γ2uτ2σ2u m + γ2uτ τ−1∑ k=0 Et ∥∥∥∥∥∥ 1m ∑ i∈S(t) ∇uFi ( ũ (t) i,k, ṽ (t) i,k )∥∥∥∥∥∥ 2
︸ ︷︷ ︸ =:T3,j
.
For T3,j , (a) we add and subtract ∇uF (u(t), V (t)) and ∇uFi(u(t), ṽ(t)i,k), (b) invoke the squared triangle inequality, and, (c) use smoothness to get
T3,j = 6Et ∥∥∥∥∥∥ 1m ∑ i∈S(t) ∇uFi ( u(t), v (t) i ) −∇uF ( u(t), V (t) )∥∥∥∥∥∥ 2 + 6 ∥∥∥∇uF (u(t), V (t))∥∥∥2
+ 3Et 1 m ∑ i∈S(t) ( L2u ∥∥∥ũ(t)i,k − u(t)∥∥∥2 + χ2LuLv∥∥∥ṽ(t)i,k − v(t)i ∥∥∥2)
For the first term, we use the fact that S(t) is obtained by sampling without replacement to apply Lemma 17 together with the gradient diversity assumption to get
Et ∥∥∥∥∥∥ 1m ∑ i∈S(t) ∇uFi ( u(t), v (t) i ) −∇uF ( u(t), V (t) )∥∥∥∥∥∥ 2
≤ 1 m ( n−m n− 1 ) 1 n n∑ i=1 ∥∥∥∇uFi (u(t), v(t)i )−∇uF (u(t), V (t))∥∥∥2 ≤ 1 m ( n−m n− 1 )( δ2 + ρ2
∥∥∥∇uF (u(t), V (t))∥∥∥2) . Therefore,
T3,j = 12δ2
m
( 1− m
n
) + 6(1 + ρ2) ∥∥∥∇uF (u(t), V (t))∥∥∥2 + 3
n n∑ i=1 Et [ L2u ∥∥∥ũ(t)i,k − u(t)∥∥∥2 + χ2LuLv∥∥∥ṽ(t)i,k − v(t)i ∥∥∥2] , where we also used the independence between S(t) and (ũ(t)i,k, ṽ (t) i,k). Plugging this into the expression for Et‖u(t+1) − u(t)‖2 completes the proof.
Lemma 8. Let Fi satisfy Assumptions 1′–3′, and consider the iterates
$$u_{k+1} = u_k - \gamma_u G_{i,u}(u_k, v_k, z_k), \quad\text{and}\quad v_{k+1} = v_k - \gamma_v G_{i,v}(u_k, v_k, z_k),$$
for k = 0, · · · , τ − 1, where zk ∼ Di. Suppose the learning rates satisfy γu = cu/(τLu) and γv = cv/(τLv) with cu, cv ≤ 1/√6 max{1, χ⁻²}. Further, define
$$A = \gamma_u L_u^2 + f\chi^2\gamma_v L_u L_v, \quad\text{and}\quad B = f\gamma_v L_v^2 + \chi^2\gamma_u L_u L_v,$$
where f ∈ (0, 1] is given. Then we have the bound
$$\sum_{k=0}^{\tau-1}\mathbb{E}\Big[A\|u_k - u_0\|^2 + B\|v_k - v_0\|^2\Big] \;\le\; 4\tau^2(\tau-1)\big(\gamma_u^2\sigma_u^2 A + \gamma_v^2\sigma_v^2 B\big) \;+\; 12\tau^2(\tau-1)\big(\gamma_u^2 A\|\nabla_u F_i(u_0, v_0)\|^2 + \gamma_v^2 B\|\nabla_v F_i(u_0, v_0)\|^2\big).$$
Proof. If τ = 1, there is nothing to prove, so we assume τ > 1. Let ∆k := A‖uk − u0‖2 +B‖vk − v0‖2 and denote by Fk the sigma-algebra generated by (wk, vk). Further, let Ek[·] = E[·|Fk]. We use the inequality 2αβ ≤ α2/δ2 + δ2β2 for reals α, β, δ to get,
Ek‖uk+1 − u0‖2 ≤ ( 1 + 1
τ − 1
) ‖uk − u0‖2 + τγ2uEk‖Gi,u(uk, vk, zk)‖ 2
≤ ( 1 + 1
τ − 1
) ‖uk − u0‖2 + τγ2uσ2u + τγ2u‖∇uFi(uk, vk)‖ 2
≤ ( 1 + 1
τ − 1
) ‖uk − u0‖2 + τγ2uσ2u + 3τγ2u‖∇uFi(u0, v0)‖ 2
+ 3τγ2uL 2 u‖uk − u0‖2 + 3τγ2uLuv‖vk − v0‖2 ,
where the last inequality followed from the squared triangle inequality (from adding and subtracting ∇uFi(u0, vk) and∇uFi(u0, v0)) followed by smoothness. Together with the analogous inequality for the v-update, we get,
Ek[∆k+1] ≤ ( 1 + 1
τ − 1
) ∆k +A ′‖uk − u0‖2 +B′‖vk − v0‖2 + C ,
where we have
A′ = 3τ(γ2uL 2 uA+ γ 2 vχ 2LuLvB), and, B′ = 3τ(γ2vL 2 vB + γ 2 uχ 2LuLvA) and,
C ′ = τγ2uσ 2 uA+ τγ 2 vσ 2 vB + 3τγ 2 uA‖∇uFi(u0, v0)‖2 + 3τγ2vB‖∇vFi(u0, v0)‖2 .
Next, we apply Lemma 20 to get that A′ ≤ A/τ and B′ ≤ B/τ under the assumed conditions on the learning rates; this allows us to write the right hand side completely in terms of ∆k and unroll the recurrence. The intuition behind Lemma 20 is as follows. Ignoring the dependence on τ, Lu, Lv, χ for a moment, if γu and γv are both O(η), then A′, B′ are both O(η3), while A and B are O(η). Thus, making η small enough should suffice to get A′ ≤ O(A) and B′ ≤ O(B). Concretely, Lemma 20 gives
Ek[∆k+1] ≤ ( 1 + 2
τ − 1
) E[∆k] + C ,
and unrolling this recurrence gives for k ≤ τ − 1
E[∆k] ≤ k−1∑ j=0 ( 1 + 2 τ − 1 )j C ≤ τ − 1 2 ( 1 + 2 τ − 1 )k C
≤ τ − 1 2
( 1 + 2
τ − 1
)τ−1 C ≤ e 2
2 (τ − 1)C ,
where we used (1 + 1/α)α ≤ e for all α > 0. Summing over k and using the numerical bound e2 < 8 completes the proof.
Remark 9. We only invoked the partial gradient diversity assumption (Assumption 3) at iterates (u(t), V (t)); therefore, it suffices if the assumption only holds at iterates (u(t), V (t)) generated by FedSim, rather than at all (u, V ).
Algorithm 5 FedAlt: Alternating updates of shared and personalized parameters
Input: Initial iterates u^(0), V^(0); number of communication rounds T; number of devices per round m; numbers of local updates τu, τv; local step sizes γu, γv.
1: for t = 0, 1, · · · , T − 1 do
2:   Sample m devices from [n] without replacement into S^(t)
3:   for each selected device i ∈ S^(t) in parallel do
4:     Initialize v_{i,0}^{(t)} = v_i^{(t)}
5:     for k = 0, · · · , τv − 1 do   ▷ Update personalized parameters
6:       Sample data z_{i,k}^{(t)} ∼ Di
7:       v_{i,k+1}^{(t)} = v_{i,k}^{(t)} − γv Gi,v(u^{(t)}, v_{i,k}^{(t)}, z_{i,k}^{(t)})
8:     Update v_i^{(t+1)} = v_{i,τv}^{(t)}
9:     Initialize u_{i,0}^{(t)} = u^{(t)}
10:    for k = 0, · · · , τu − 1 do   ▷ Update shared parameters
11:      u_{i,k+1}^{(t)} = u_{i,k}^{(t)} − γu Gi,u(u_{i,k}^{(t)}, v_i^{(t+1)}, z_{i,k}^{(t)})
12:    Update u_i^{(t+1)} = u_{i,τu}^{(t)}
13:  Update u^{(t+1)} = (Σ_{i∈S^(t)} αi u_i^{(t+1)}) / (Σ_{i∈S^(t)} αi) at the server with secure aggregation
14: return u^{(T)}, v_1^{(T)}, · · · , v_n^{(T)}
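As with FedSim above, a minimal NumPy sketch of the device-side pass of Algorithm 5 may help fix ideas; the gradient oracles and data sampler are again placeholders, so this is only an illustration of the alternating update rule.

```python
import numpy as np

def local_alt(u, v_i, sample_batch, grad_u, grad_v,
              tau_v, tau_u, gamma_v, gamma_u):
    """Device-side pass of FedAlt (Algorithm 5, lines 4-12): tau_v SGD steps on
    the personal parameters with u frozen, then tau_u SGD steps on the shared
    parameters with the new personal parameters frozen."""
    v_k = np.array(v_i, dtype=float)
    for _ in range(tau_v):                         # personal parameters first
        z = sample_batch()
        v_k = v_k - gamma_v * grad_v(u, v_k, z)    # u is held fixed at u^(t)
    u_k = np.array(u, dtype=float)
    for _ in range(tau_u):                         # then shared parameters
        z = sample_batch()
        u_k = u_k - gamma_u * grad_u(u_k, v_k, z)  # v_k = v_i^(t+1) is held fixed
    return u_k, v_k
```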
A.3 CONVERGENCE ANALYSIS OF FEDALT
We give the full form of FedAlt in Algorithm 5 for the general case of unequal αi's, but focus on αi = 1/n for the analysis. For convenience, we reiterate Theorem 2 below. Recall the definitions
$$\Delta_u^{(t)} = \big\|\nabla_u F(u^{(t)}, V^{(t+1)})\big\|^2, \qquad \Delta_v^{(t)} = \frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2.$$

Theorem 2 (Convergence of FedAlt). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedAlt are chosen as γu = η/(Lu τu) and γv = η/(Lv τv), with
$$\eta \;\le\; \min\left\{\frac{1}{24(1+\rho^2)},\;\; \frac{m}{128\chi^2(n-m)},\;\; \sqrt{\frac{m}{\chi^2 n}}\right\}.$$
Then, ignoring absolute constants, we have
$$\frac{1}{T}\sum_{t=0}^{T-1}\left(\frac{1}{L_u}\mathbb{E}\big[\Delta_u^{(t)}\big] + \frac{m}{nL_v}\mathbb{E}\big[\Delta_v^{(t)}\big]\right) \;\le\; \frac{\Delta F_0}{\eta T} + \eta\left(\frac{\sigma_u^2+\delta^2(1-\frac{m}{n})}{mL_u} + \frac{\sigma_v^2}{L_v}\cdot\frac{m+\chi^2(n-m)}{n}\right) + \eta^2\left(\frac{\sigma_u^2+\delta^2}{L_u}\big(1-\tau_u^{-1}\big) + \frac{\sigma_v^2 m}{L_v n}\big(1-\tau_v^{-1}\big) + \frac{\chi^2\sigma_v^2}{L_v}\right).$$
Before proving the theorem, we give the corollary with optimized learning rates.

Corollary 10. Consider the setting of Theorem 2 and fix some ε > 0. Suppose we set γu = η/(τu Lu) and γv = η/(τv Lv) such that, ignoring absolute constants,
$$\eta = \left(\frac{\sigma_v^2}{\varepsilon L_v}\Big(\frac{m}{n} + \chi^2(1-m/n)\Big)\right)^{-1} \wedge \left(\frac{\sigma_u^2 + \delta^2(1-m/n)}{mL_u\varepsilon}\right)^{-1} \wedge \left(\frac{\sigma_u^2+\delta^2}{L_u\varepsilon}\big(1-\tau_u^{-1}\big)\right)^{-1/2} \wedge \left(\frac{\sigma_v^2 m}{L_v n\varepsilon}\big(1-\tau_v^{-1}\big)\right)^{-1/2} \wedge \frac{1}{1+\rho^2} \wedge \frac{m}{\chi^2(n-m)} \wedge \sqrt{\frac{m}{\chi^2 n}}.$$
Then, we have
$$\frac{1}{T}\sum_{t=0}^{T-1}\left(\frac{1}{L_u}\mathbb{E}\big\|\nabla_u F(u^{(t)}, \tilde V^{(t)})\big\|^2 + \frac{m}{L_v n^2}\sum_{i=1}^{n}\mathbb{E}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2\right) \;\le\; \varepsilon$$
after T communication rounds, where, ignoring absolute constants,
$$T \;\le\; \frac{\Delta F_0}{\varepsilon^2}\left(\frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{mL_u} + \frac{\sigma_v^2}{L_v}\Big(\frac{m}{n} + \chi^2\big(1-\frac{m}{n}\big)\Big)\right) \;+\; \frac{\Delta F_0}{\varepsilon^{3/2}}\left(\frac{\sigma_u+\delta}{\sqrt{L_u}}\sqrt{1-\tau_u^{-1}} + \frac{\sigma_v}{\sqrt{L_v}}\sqrt{1-\tau_v^{-1}}\right) \;+\; \frac{\Delta F_0}{\varepsilon}\left(1 + \rho^2 + \chi^2\Big(\frac{n}{m}-1\Big) + \sqrt{\frac{\chi^2 n}{m}}\right).$$
Proof. We get the bound by balancing terms from the bound of Theorem 2. The choice of η ensures that all the O(η) and O(η2) terms are at most O(ε). Finally, the smallest number of communication rounds to make the left hand side of the bound of Theorem 2 smaller than ε is ∆F0/(ηε).
We are now ready to prove Theorem 2.
Proof of Theorem 2. The proof mainly applies the smoothness upper bound to write out a descent condition with suitably small noise terms. We start with some notation.

We introduce the notation ∆̃_u^{(t)} as the analogue of ∆_u^{(t)} with the virtual variable Ṽ^{(t+1)}:
$$\tilde\Delta_u^{(t)} = \big\|\nabla_u F(u^{(t)}, \tilde V^{(t+1)})\big\|^2.$$

Notation. Let F^(t) denote the σ-algebra generated by (u^(t), V^(t)) and write Et[·] = E[· | F^(t)]. For all devices, including those not selected in each round, we define virtual sequences ũ_{i,k}^{(t)}, ṽ_{i,k}^{(t)} as the SGD updates of Algorithm 5 carried out on every device regardless of whether it is selected. For the selected devices i ∈ S^(t), we have v_{i,k}^{(t)} = ṽ_{i,k}^{(t)} and u_{i,k}^{(t)} = ũ_{i,k}^{(t)}. Note that the random variables ũ_{i,k}^{(t)}, ṽ_{i,k}^{(t)} are independent of the device selection S^(t). Finally, the updates for the selected devices i ∈ S^(t) are given by
$$v_i^{(t+1)} = v_i^{(t)} - \gamma_v\sum_{k=0}^{\tau_v-1} G_{i,v}\big(u^{(t)}, \tilde v_{i,k}^{(t)}, z_{i,k}^{(t)}\big),$$
and the server update is given by
$$u^{(t+1)} = u^{(t)} - \frac{\gamma_u}{m}\sum_{i\in S^{(t)}}\sum_{k=0}^{\tau_u-1} G_{i,u}\big(\tilde u_{i,k}^{(t)}, \tilde v_{i,\tau_v}^{(t)}, z_{i,k}^{(t)}\big).$$
Proof Outline and the Challenge of Dependent Random Variables. We start with
$$F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big) \;=\; F\big(u^{(t)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big) \;+\; F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t+1)}\big). \qquad (14)$$
The first difference corresponds to the effect of the v-step and the second to the u-step. The former is easy to handle with standard techniques that rely on the smoothness of F(u^(t), ·). The latter is more challenging. In particular, the smoothness bound for the u-step gives us
$$F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t+1)}\big) \;\le\; \big\langle \nabla_u F\big(u^{(t)}, V^{(t+1)}\big),\, u^{(t+1)} - u^{(t)}\big\rangle \;+\; \frac{L_u}{2}\big\|u^{(t+1)} - u^{(t)}\big\|^2.$$
The standard proofs of convergence of stochastic gradient methods rely on the fact that we can take an expectation with respect to the sampling S^(t) of devices in the first-order term. However, both V^(t+1) and u^(t+1) depend on the sampling S^(t) of devices. Therefore, we cannot directly take an expectation with respect to the sampling of devices in S^(t).
Virtual Full Participation to Circumvent Dependent Random Variables. The crux of the proof lies in replacing V (t+1) in the analysis of the u-step with the virtual iterate Ṽ (t+1) so as to move all the dependence of the u-step on S(t) to the u(t+1) term. This allows us to take an expectation; it remains to carefully bound the resulting error terms.
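In symbols, and restating the construction just described, the virtual iterate applies the v-updates of Algorithm 5 to every device, whether or not it was sampled:
$$\tilde v_i^{(t+1)} = v_i^{(t)} - \gamma_v\sum_{k=0}^{\tau_v-1} G_{i,v}\big(u^{(t)}, \tilde v_{i,k}^{(t)}, z_{i,k}^{(t)}\big) \;\text{ for all } i \in \{1,\dots,n\}, \qquad\text{so that}\qquad v_i^{(t+1)} = \begin{cases}\tilde v_i^{(t+1)} & i\in S^{(t)},\\ v_i^{(t)} & i\notin S^{(t)}.\end{cases}$$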
Finally, we will arrive at a bound of | 1. What is the focus of the paper regarding personalization in federated learning?
2. What are the strengths and weaknesses of the proposed algorithms, particularly FedAlt, compared to prior works like Singhal et al. 2021?
3. How does the paper contribute to the theoretical understanding of FedAlt's performance, especially in comparison to FedSim?
4. What are the potential savings of memory footprint when using partial model training, and how does the paper address this aspect?
5. Are there any clarifications or additional discussions that the reviewer suggests regarding details of the paper, such as initialization, baselines, and dataset splitting? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies personalization in federated setting, i.e., instead of collaboratively training a global model, personalizing a model for each client. This paper proposed to personalize only part of the model parameters instead of the full model, and studied two algorithms FedSim and FedAlt. In local client updates, FedSim will simultaneously train the shared and personalized parameters, while FedAlt will train the personalized model first, then train the shared parameter.
Review
This is a well written paper and easy to follow. However, I have the following concerns.
An important related work is missing, and I strongly suggest the authors discuss
[Singhal et al. 2021 Federated Reconstruction: Partially Local Federated Learning https://arxiv.org/abs/2102.03448]. If I understand correctly, the proposed FedAlt algorithm is very similar to the paper, except that FedAlt requires a local state v_i.
Clarification on contributions If I understand correctly, FedSim has been used in [Liang et al. 2019, Arivazhagan et al. 2019, Collins et al. 2021, Li et al. 2021 ], FedAlt is relatively new but closely related to [Singhal et al. 2021]. This paper makes the following contributions
Convergence guarantees of FedSim and FedAlt: standard convergence rates under reasonable assumptions. Li et al. 2021 presented a convergence rate for FedSim; could the authors compare? Regarding the novelty of the FedAlt proof, I believe the virtual proxy variable is a common technique in federated optimization (e.g., eq (13) in [Wang et al. 2021 A Field Guide to Federated Optimization https://arxiv.org/pdf/2107.06917.pdf]), and I would appreciate it if the authors elaborate more on the novelty.
FedAlt algorithm, which is a client-state variant of Singhal et al. 2021, and achieves a marginal improvement over FedSim [Liang et al. 2019, Arivazhagan et al. 2019, Collins et al. 2021, Li et al. 2021] (always < 0.5%).
Theoretical insights on when FedAlt is better than FedSim. I found this to be interesting, and I would encourage the authors to provide more discussion and connections to empirical results.
Savings of memory footprint
It would be good to show either analytically or empirically how much memory the partial model training can reduce compared to full model training as this is the main motivation.
Clarification on some details.
Authors mentioned “All methods, including FedSim, FedAlt and the baselines are initialized with a global model trained with FedAvg. ”: how is the global model trained? And is it trained on the same set of clients as personalization?
For table 3, does the full model in baselines include adapter parameters?
How do the authors split the dataset for training, personalization and testing? |
ICLR | Title
Federated Learning with Partial Model Personalization
Abstract
We propose and analyze a general framework of federated learning with partial model personalization. Compared with full model personalization, partial model personalization relies on domain knowledge to select a small portion of the model to personalize, thus imposing a much smaller on-device memory footprint. We propose two federated optimization algorithms for training partially personalized models, where the shared and personal parameters are updated either simultaneously or alternately on each device, but only the shared parameters are communicated and aggregated at the server. We give convergence analyses of both algorithms for minimizing smooth nonconvex functions, providing theoretical support of them for training deep learning models. Our experiments on real-world image and text datasets demonstrate that (a) partial model personalization can obtain most of the benefit of full model personalization with a small fraction of personalized parameters, and, (b) the alternating update algorithm often outperforms the simultaneous update algorithm.
1 INTRODUCTION
Federated Learning (McMahan et al., 2017) has emerged as a powerful paradigm for distributed and privacy-preserving machine learning over a large number of edge devices (see Kairouz et al., 2021, and references therein). We consider a typical setting of Federated Learning (FL) with n devices (also called clients), where each device i has a training dataset of Ni samples zi,1, · · · , zi,Ni . Let w ∈ Rd represent the parameters of a (supervised) learning model and fi(w, zi,j) be the loss of the model on the training example zi,j . Then the loss function associated with device i is Fi(w) = (1/Ni) ∑Ni j=1 fi(w, zi,j). A common objective of FL is to find model parameters that minimize the weighted average loss across all devices (without transferring the datasets):
$$\underset{w}{\text{minimize}}\;\; \sum_{i=1}^{n} \alpha_i F_i(w), \qquad (1)$$
where the weights αi are nonnegative and satisfy ∑_{i=1}^{n} αi = 1. A common practice is to choose the weights as αi = Ni/N, where N = ∑_{k=1}^{n} Nk, which corresponds to minimizing the unweighted average loss across all samples from the n devices: (1/N) ∑_{i=1}^{n} ∑_{j=1}^{Ni} fi(w, zi,j).
The main motivation for minimizing the average loss over all devices is to leverage their collective statistical power for better generalization, because the amount of data on each device can be very limited. This is especially important for training modern deep learning models with large number of parameters. However, this argument assumes that the datasets from different devices are sampled from the same, or at least very similar, distributions. Given the diverse characteristics of the users and increasing trend of personalized on-device services, such an i.i.d. assumption may not hold in practice. Thus, the one-model-fits-all formulation in (1) can be less effective and even undesirable.
Several approaches have been proposed for personalized FL, including ones based on multi-task learning (Smith et al., 2017), meta learning (Fallah et al., 2020), and proximal methods (Dinh et al., 2020; Li et al., 2021). A simple formulation that captures their main idea is
$$\underset{w_0,\{w_i\}_{i=1}^{n}}{\text{minimize}}\;\; \sum_{i=1}^{n} \alpha_i\left(F_i(w_i) + \frac{\lambda_i}{2}\|w_i - w_0\|^2\right), \qquad (2)$$
where wi for i = 1, . . . , n are personalized model parameters at the devices, w0 is a reference model maintained by the server, and the λi’s are regularization weights that control the extent of personalization. A major disadvantage of the formulation (2), which we call full model personalization, is that it requires twice the memory footprint of the model, wi and w0 at each device, which severely limits the size of trainable models. On the other hand, the flexibility of full model personalization can be unnecessary. Modern deep learning models are composed of many simple functional units and are typically organized into layers or a more general interconnected architecture. Personalizing the “right” components, selected with domain knowledge, may result in a substantial benefit with only a small increase in memory footprint. In addition, partial model personalization can be less susceptible to “catastrophic forgetting” (McCloskey & Cohen, 1989), where a large model finetuned on a small local dataset forgets the original (non-personalized) task, leading to a degradation of test performance.
We propose a framework for FL with partial model personalization. Specifically, we partition the model parameters into two groups: the shared parameters u ∈ R^{d0} and the personal parameters vi ∈ R^{di} for i = 1, . . . , n. The full model on device i is denoted as wi = (u, vi), and the local loss function is Fi(u, vi) = (1/Ni) ∑_{j=1}^{Ni} fi((u, vi), zi,j). Our goal is to solve the optimization problem
$$\underset{u,\,\{v_i\}_{i=1}^{n}}{\text{minimize}}\;\; \sum_{i=1}^{n} \alpha_i F_i(u, v_i). \qquad (3)$$
Notice that the dimensions of vi can be different across the devices, allowing the personal components of the model to have a different number of parameters or even a different architecture.
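As a small illustration of this partitioning (the name-matching rule below is a hypothetical convention, not something prescribed by the paper), one can split a model's parameters into the shared block u and the personal block vi by parameter name:

```python
def split_parameters(named_params, personal_keywords=("adapter", "output_layer")):
    """Partition model parameters into shared (u) and personal (v_i) groups.
    `named_params` is an iterable of (name, parameter) pairs; any parameter
    whose name contains one of `personal_keywords` is personalized."""
    shared, personal = {}, {}
    for name, p in named_params:
        if any(k in name for k in personal_keywords):
            personal[name] = p
        else:
            shared[name] = p
    return shared, personal

# Example with placeholder "parameters" (plain floats stand in for tensors).
params = [("encoder.layer0.weight", 0.1), ("adapter.down.weight", 0.2),
          ("output_layer.bias", 0.3)]
u, v_i = split_parameters(params)
# u   -> {"encoder.layer0.weight": 0.1}
# v_i -> {"adapter.down.weight": 0.2, "output_layer.bias": 0.3}
```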
We investigate two FL algorithms for solving problem (3): FedSim, a simultaneous update algorithm and FedAlt, an alternating update algorithm. Both algorithms follow the standard FL protocol. During each round, the server randomly selects a subset of the devices for update and broadcasts the current global version of the shared parameters to devices in the subset. Each selected device then performs one or more steps of (stochastic) gradient descent to update both the shared parameters and the personal parameters, and sends the updated shared parameters to the server for aggregation. The updated personal parameters are kept local at the device to serve as the initial states when the device is selected for another update. In FedSim, the shared and personal parameters are updated simultaneously during each local iteration. In FedAlt, the devices first update the personal parameters with the received shared parameters fixed and then update the shared parameters with the new personal parameters fixed. We provide convergence analysis and empirical evaluation of both methods.
The main contributions of this paper are summarized as follows:
• We propose a general framework of FL with partial model personalization, which relies on domain knowledge to select a small portion of the model to personalize, thus imposing a much smaller memory footprint on the devices than full model personalization. This framework unifies existing work on personalized FL and allows arbitrary partitioning of deep learning models.
• We provide convergence guarantees for the FedSim and FedAlt methods in the general (smooth) nonconvex setting. While both methods have appeared in the literature previously, they are either used without convergence analysis or with results in limited settings (assuming convexity or full participation). Our analysis provides theoretical support for the general nonconvex setting with partial participation. The analysis of FedAlt with partial participation is especially challenging, and we develop a novel technique of virtual full participation to overcome the difficulties.
• We conduct extensive experiments on image classification and text prediction tasks, exploring different model personalization strategies for each task, and comparing with several strong baselines. Our results demonstrate that partial model personalization can obtain most of the benefit of full model personalization with a small fraction of personalized parameters, and FedAlt often outperforms FedSim.
• Our experiments also reveal that personalization (full or partial) may lead to worse performance for some devices, despite improving the average. Typical forms of regularization such as weight decay and dropout do not mitigate this issue. This phenomenon has been overlooked in previous work and calls for future research to improve both performance and fairness.
Related work. Specific forms of partial model personalization have been considered in previous works. Liang et al. (2019) propose to personalize the input layers to learn a personalized representation per-device (Figure 1a), while Arivazhagan et al. (2019) and Collins et al. (2021) propose to personalize the output layer while learning a shared representation with the input layers (Figure 1b). Both FedSim and FedAlt have appeared in the literature before, but the scope of their convergence analysis is limited. Specifically, Liang et al. (2019), Arivazhagan et al. (2019) and Hanzely et al. (2021) use FedSim, while Collins et al. (2021) and Singhal et al. (2021) proposed variants of FedAlt. Notably, Hanzely et al. (2021) establish convergence of FedSim with full device participation in the convex and non-convex cases, while Collins et al. (2021) prove the linear convergence of FedAlt for a two-layer linear network where Fi(·, vi) and Fi(u, ·) are both convex for fixed vi and u respectively. We analyze both FedSim and FedAlt in the general nonconvex case with partial device participation, hence addressing a more general and practical setting.
While we primarily consider the problem (3) in the context of partial model personalization, it can serve as a general formulation that covers many other problems. Hanzely et al. (2021) demonstrate that various full model personalization formulations based on regularization (Dinh et al., 2020; Li et al., 2021), including (2), as well as interpolation (Deng et al., 2020; Mansour et al., 2020) are special cases of this problem. The rates of convergence we prove in §3 are competitive with or better than those in previous works for full model personalization methods in the non-convex case.
2 PARTIALLY PERSONALIZED MODELS
Modern deep learning models all have a multi-layer architecture. While a complete understanding of why they work so well is still out of reach, a general insight is that the lower layers (close to the input) are mostly responsible for feature extraction and the upper layers (close to the output) focus on complex pattern recognition. Depending on the application scenarios and domain knowledge, we may personalize either the input layer(s) or the output layer(s) of the model; see Figure 1.
In Figure 1c, the input layers are split horizontally into two parts, one shared and the other personal. They process different chunks of the input vector and their outputs are concatenated before feeding
Algorithm 1 Federated Learning with Partial Model Personalization (FedSim / FedAlt)
Input: initial states u^(0), {v_i^(0)}_{i=1}^n, number of rounds T, number of devices per round m
1: for t = 0, 1, · · · , T − 1 do
2:   server randomly samples m devices as S^(t) ⊂ {1, . . . , n}
3:   server broadcasts u^(t) to each device in S^(t)
4:   for each device i ∈ S^(t) in parallel, do
5:     (u_i^{(t+1)}, v_i^{(t+1)}) = LocalSim / LocalAlt(u^{(t)}, v_i^{(t)})   ▷ v_i^{(t+1)} = v_i^{(t)} if i ∉ S^(t)
6:     send u_i^{(t+1)} back to server
7:   server updates u^{(t+1)} = (1/m) ∑_{i∈S^(t)} u_i^{(t+1)}
to the upper layers of the model. As demonstrated in (Bui et al., 2019), this partitioning can help protect user-specific private features (input 2 in Figure 1c), as the corresponding feature embeddings (through vi) are personalized and kept local at the device. Similar architectures have also been proposed in context-dependent language models (e.g., Mikolov & Zweig, 2012).
A more structured partitioning is illustrated in Figure 2a, where a typical transformer layer (Vaswani et al., 2017) is augmented with two adapters. This architecture is proposed by Houlsby et al. (2019) for finetuning large language models. Similar residual adapter modules are proposed by Rebuffi et al. (2017) for image classification models in the context of multi-task learning. In the context of FL, we treat the adapter parameters as personal and the rest of the model parameters as shared.
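To make the adapter idea concrete, here is a minimal bottleneck adapter in the spirit of Houlsby et al. (2019), written assuming PyTorch is available; the bottleneck width and placement are illustrative rather than the exact configuration used in the experiments of §4.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, d_model: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))   # residual connection

# In partial personalization, the adapter parameters would form the personal
# block v_i, while the surrounding transformer weights form the shared block u.
layer_out = torch.randn(2, 8, 32)                    # (batch, sequence, d_model)
print(Adapter(d_model=32)(layer_out).shape)
```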
Figure 2b shows a generalized additive model, where the outputs of two separate models, one shared and the other personalized, are fused to generate a prediction. Suppose the shared model is h(u, ·) and the personal model is hi(vi, ·). For regression tasks with samples zi,j = (xi,j, yi,j), where xi,j is the input and yi,j ∈ R^p is the output, we let Fi(u, vi) = (1/Ni) ∑_{j=1}^{Ni} fi((u, vi), zi,j) with
$$f_i\big((u, v_i), z_{i,j}\big) = \big\|y_{i,j} - h(u, x_{i,j}) - h_i(v_i, x_{i,j})\big\|^2.$$
In this special case, the personal model fits the residual of the shared model and vice-versa (Agarwal et al., 2020). For classification tasks, h(u, ·) and hi(vi, ·) produce probability distributions over multiple classes. We can use the cross-entropy loss between yi,j and a convex combination of the two model outputs: θh(u, xi,j) + (1− θ)hi(vi, xi,j), where θ ∈ (0, 1) is a learnable parameter. Finally, we can cast the formulation (2) of full model personalization as a special case of (3) by letting
u← w0, vi ← wi, Fi(u, vi)← Fi(vi) + (λi/2)‖vi − u‖2. Many other formulations of full personalization can be reduced to (3); see Hanzely et al. (2021).
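Returning to the fused classifier described above, here is a small numeric sketch of the mixed prediction and its cross-entropy loss; the class probabilities are made-up numbers used only for illustration.

```python
import numpy as np

def fused_cross_entropy(p_shared, p_personal, theta, label):
    """Cross-entropy of the mixture theta * h(u, x) + (1 - theta) * h_i(v_i, x),
    where both model outputs are probability vectors over the classes."""
    assert 0.0 < theta < 1.0
    p = theta * np.asarray(p_shared) + (1.0 - theta) * np.asarray(p_personal)
    return -float(np.log(p[label]))

# Hypothetical 3-class example.
p_shared = np.array([0.7, 0.2, 0.1])     # h(u, x), from the shared model
p_personal = np.array([0.2, 0.7, 0.1])   # h_i(v_i, x), from the personal model
print(fused_cross_entropy(p_shared, p_personal, theta=0.5, label=1))
```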
3 ALGORITHMS AND CONVERGENCE ANALYSIS
In this section, we present and analyze two FL algorithms for solving problem (3). To simplify presentation, we denote V = (v1, . . . , vn) ∈ Rd1+...+dn and focus on the case of αi = 1/n, i.e.,
$$\underset{u,\,V}{\text{minimize}}\;\; F(u, V) := \frac{1}{n}\sum_{i=1}^{n} F_i(u, v_i). \qquad (4)$$
This is equivalent to (3) if we scale Fi by nαi, thus does not lose generality. Moreover, we consider more general local functions Fi(u, vi) = Ez∼Di [fi((u, vi), z)], where Di is the local distribution. The FedSim and FedAlt algorithms share a common outer-loop description given in Algorithm 1. They differ only in the local update procedures LocalSim and LocalAlt, which are given in Algorithm 2 and Algorithm 3 respectively. In the two local update procedures, ∇̃u and ∇̃v represent stochastic gradients with respect to w and vi respectively. In LocalSim (Algorithm 2), the personal variables vi and local version of the shared parameters ui are updated simultaneously, with their (stochastic) partial gradients evaluated at the same point. In LocalAlt (Algorithm 3), the personal parameters are updated first with the received shared parameters fixed, then the shared parameters are updated with the new personal parameters fixed. They are analogous to the classical Jacobi update and Gauss-Seidel update in numerical linear algebra (e.g., Demmel, 1997, §6.5).
In order to analyze the convergence of the two algorithms, we make the following assumptions.
Algorithm 2 LocalSim(u, vi)
Input: number of steps τ, step sizes γv and γu
1: initialize vi,0 = vi
2: initialize ui,0 = u
3: for k = 0, 1, · · · , τ − 1 do
4:   vi,k+1 = vi,k − γv ∇̃vFi(ui,k, vi,k)
5:   ui,k+1 = ui,k − γu ∇̃uFi(ui,k, vi,k)
6: update v⁺i = vi,τ
7: update u⁺i = ui,τ
8: return (u⁺i, v⁺i)

Algorithm 3 LocalAlt(u, vi)
Input: numbers of steps τv, τu, step sizes γv, γu
1: initialize vi,0 = vi
2: for k = 0, 1, · · · , τv − 1 do
3:   vi,k+1 = vi,k − γv ∇̃vFi(u, vi,k)
4: update v⁺i = vi,τv and initialize ui,0 = u
5: for k = 0, 1, · · · , τu − 1 do
6:   ui,k+1 = ui,k − γu ∇̃uFi(ui,k, v⁺i)
7: update u⁺i = ui,τu
8: return (u⁺i, v⁺i)

Assumption 1 (Smoothness). The function Fi is continuously differentiable for each i = 1, . . . , n, and there exist constants Lu, Lv, Luv and Lvu such that for each i = 1, . . . , n, it holds that
• ∇uFi(u, vi) is Lu-Lipschitz with respect to u and Luv-Lipschitz with respect to vi;
• ∇vFi(u, vi) is Lv-Lipschitz with respect to vi and Lvu-Lipschitz with respect to u.
Due to the definition of F(u, V) in (4), it is easy to verify that ∇uF(u, V) has Lipschitz constant Lu with respect to u, Luv/√n with respect to V, and Luv/n with respect to any vi. We also define
$$\chi := \frac{\max\{L_{uv}, L_{vu}\}}{\sqrt{L_u L_v}}, \qquad (5)$$
which measures the relative cross-sensitivity of ∇uFi with respect to vi and ∇vFi with respect to u.

Assumption 2 (Bounded Variance). The stochastic gradients in Algorithm 2 and Algorithm 3 are unbiased and have bounded variance. That is, for all u and vi,
$$\mathbb{E}\big[\tilde\nabla_u F_i(u, v_i)\big] = \nabla_u F_i(u, v_i), \qquad \mathbb{E}\big[\tilde\nabla_v F_i(u, v_i)\big] = \nabla_v F_i(u, v_i).$$
Furthermore, there exist constants σu and σv such that
$$\mathbb{E}\big[\|\tilde\nabla_u F_i(u, v_i) - \nabla_u F_i(u, v_i)\|^2\big] \le \sigma_u^2, \qquad \mathbb{E}\big[\|\tilde\nabla_v F_i(u, v_i) - \nabla_v F_i(u, v_i)\|^2\big] \le \sigma_v^2.$$
We can view ∇uFi(u, vi), when i is randomly sampled from {1, . . . , n}, as a stochastic partial gradient of F(u, V) with respect to u. The following assumption imposes a variance bound.

Assumption 3 (Partial Gradient Diversity). There exist δ ≥ 0 and ρ ≥ 0 such that for all u and V,
$$\frac{1}{n}\sum_{i=1}^{n}\|\nabla_u F_i(u, v_i) - \nabla_u F(u, V)\|^2 \;\le\; \delta^2 + \rho^2\|\nabla_u F(u, V)\|^2.$$
With ρ = 0, this assumption is similar to a constant variance bound on the stochastic gradient ∇uFi(u, vi); with ρ > 0, it allows the variance to grow with the norm of the full gradient.

Throughout this paper, we assume F is bounded below by F⋆ and denote ∆F0 = F(u^(0), V^(0)) − F⋆. Further, we use the shorthand V^(t) = (v_1^{(t)}, . . . , v_n^{(t)}) and
$$\Delta_u^{(t)} = \big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2, \qquad \Delta_v^{(t)} = \frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v_i^{(t)})\big\|^2.$$
For smooth and nonconvex loss functions Fi, we obtain convergence in expectation to a stationary point of F if the expected values of these two sequences converge to zero.
We first present our main result for FedSim (Algorithm 1 with LocalSim), proved in Appendix A.2.

Theorem 1 (Convergence of FedSim). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedSim are chosen as γu = η/(Lu τ) and γv = η/(Lv τ) with
$$\eta \;\le\; \min\left\{\frac{1}{12(1+\chi^2)(1+\rho^2)},\;\; \frac{\sqrt{m/n}}{196\,(1-\tau^{-1})(1+\chi^2)(1+\rho^2)}\right\}.$$
Then, ignoring absolute constants, we have
$$\frac{1}{T}\sum_{t=0}^{T-1}\left(\frac{1}{L_u}\mathbb{E}\big[\Delta_u^{(t)}\big] + \frac{m}{nL_v}\mathbb{E}\big[\Delta_v^{(t)}\big]\right) \;\le\; \frac{\Delta F_0}{\eta T} + \eta(1+\chi^2)\left(\frac{\sigma_u^2+\delta^2(1-\frac{m}{n})}{mL_u} + \frac{m\sigma_v^2}{nL_v}\right) + \eta^2(1-\tau^{-1})(1+\chi^2)\left(\frac{\sigma_u^2+\delta^2}{L_u} + \frac{\sigma_v^2}{L_v}\right). \qquad (6)$$
The left-hand side of (6) is the average over time of a weighted sum of E [ ∆ (t) u ] and E [ ∆ (t) v ] . The right-hand side contains three terms of order O(1/(ηT )), O(η) and O(η2) respectively. We can minimize the right-hand side by optimizing over η. By considering special cases such as σ2u = σ 2 v = 0 and m = n, some terms on the right-hand side disappear and we can obtain improved rates. Table 1 shows the results in several different regimes along with the optimal choices of η.
Challenge in Analyzing FedAlt. We now turn to FedAlt. Note that the personal parameters are updated only for the m selected devices in S(t) in each round t. Specifically,
$$v_i^{(t+1)} = \begin{cases} v_i^{(t)} - \gamma_v\sum_{k=0}^{\tau_v-1}\tilde\nabla_v F_i\big(u^{(t)}, v_{i,k}^{(t)}\big) & \text{if } i\in S^{(t)},\\[4pt] v_i^{(t)} & \text{if } i\notin S^{(t)}.\end{cases}$$
Consequently, the vector V (t+1) of personal parameters depends on the random variable S(t). This makes it challenging to analyze the u-update steps of FedAlt because they are performed after V (t+1) is generated (as opposed to simultaneously in FedSim). When we take expectations with respect to the sampling of S(t) in analyzing the u-updates, V (t+1) becomes a dependent random variable, which prevents standard proof techniques from going through (see details in Appendix A.3).
We develop a novel technique called virtual full participation to overcome this challenge. Specifically, we define a virtual vector Ṽ^(t+1), which is the result if every device were to perform local v-updates. It is independent of the sampling of S^(t), and we can derive a convergence rate for related quantities. We carefully translate this rate from the virtual Ṽ^(t+1) to the actual V^(t) to get the following result.

Theorem 2 (Convergence of FedAlt). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedAlt are chosen as γu = η/(Lu τu) and γv = η/(Lv τv), with
$$\eta \;\le\; \min\left\{\frac{1}{24(1+\rho^2)},\;\; \frac{m}{128\chi^2(n-m)},\;\; \sqrt{\frac{m}{\chi^2 n}}\right\}.$$
Then, ignoring absolute constants, we have
$$\frac{1}{T}\sum_{t=0}^{T-1}\left(\frac{1}{L_u}\mathbb{E}\big[\Delta_u^{(t)}\big] + \frac{m}{nL_v}\mathbb{E}\big[\Delta_v^{(t)}\big]\right) \;\le\; \frac{\Delta F_0}{\eta T} + \eta\left(\frac{\sigma_u^2+\delta^2(1-\frac{m}{n})}{mL_u} + \frac{\sigma_v^2}{L_v}\cdot\frac{m+\chi^2(n-m)}{n}\right) + \eta^2\left(\frac{\sigma_u^2+\delta^2}{L_u}\big(1-\tau_u^{-1}\big) + \frac{\sigma_v^2 m}{L_v n}\big(1-\tau_v^{-1}\big) + \frac{\chi^2\sigma_v^2}{L_v}\right).$$
The proof of Theorem 2 is given in Appendix A.3. Similar to the results for FedSim, we can choose η to minimize the above upper bound to obtain the best convergence rate, as summarized in Table 1.
Comparing FedSim and FedAlt. Table 1 shows that both FedSim and FedAlt exhibit the standard O(1/√T) rate in the general case. Comparing the constants in their rates, we identify two regimes in terms of problem parameters. The regime where FedAlt dominates FedSim is characterized by
$$\frac{\sigma_v^2}{L_v}\Big(1 - \frac{2m}{n}\Big) \;<\; \frac{\sigma_u^2 + \delta^2(1-m/n)}{mL_u}.$$
A practically relevant scenario where this is true is σv² ≈ 0 and σu² ≈ 0 from using large or full batches on a small number of samples per device. Here, the rate of FedAlt is better than FedSim by a factor of (1 + χ²), indicating that the rate of FedAlt is less affected by the coupling between the personal and shared parameters. Our experiments in §4 corroborate the practical relevance of this regime.
The rates from Table 1 also apply for full personalization schemes without convergence guarantees in the nonconvex case (Agarwal et al., 2020; Mansour et al., 2020; Li et al., 2021). Our rates are better than those of (Dinh et al., 2020) for their pFedMe objective.
4 EXPERIMENTS
In this section, we experimentally compare different model personalization schemes using FedAlt and FedSim as well as no model personalization. Details about the experiments, hyperparameters and additional results are provided in the appendices. The code to reproduce the experimental results will be publicly released.
Datasets, Tasks and Models. We consider three learning tasks; they are summarized in Table 2.
(a) Next-Word Prediction: We use the StackOverflow dataset, where each device corresponds to the questions and answers of one user on stackoverflow.com. This task is representative of mobile keyboard predictions. We use a 4-layer transformer model (Vaswani et al., 2017).
(b) Visual Landmark Recognition: We use the GLDv2 dataset (Weyand et al., 2020; Hsu et al., 2020), a large-scale dataset with real images of global landmarks. Each device corresponds to a Wikipedia contributor who uploaded images. This task resembles a scenario where smartphone users capture images of landmarks while traveling. We use a ResNet-18 (He et al., 2016) model with group norm instead of batch norm (Hsieh et al., 2020) and images are reshaped to 224× 224.
(c) Character Recognition: We use the EMNIST dataset (Cohen et al., 2017), where the input is a 28 × 28 grayscale image of a handwritten character and the output is its label (0-9, a-z, A-Z). Each device corresponds to a writer of the character. We use a ResNet-18 model, with input and output layers modified to accommodate the smaller image size and number of classes.
All models are trained with the cross entropy loss and evaluated with top-1 accuracy of classification.
Model Partitioning for Partial Personalization. We consider three partitioning schemes.
(a) Input layer personalization: This architecture learns a personalized representation per-device by personalizing the input layer, while the rest of the model is shared (Figure 1a). For the transformer, we use the first transformer layer in place of the embedding layer.
(b) Output layer personalization: This architecture learns a shared representation but personalizes the prediction layer (Figure 1b). For the transformer model, we use the last transformer layer instead of the output layer.
(c) Adapter personalization: In this architecture, each device adds lightweight personalized adapter modules between specific layers of a shared model (Figure 2a). We use the transformer adapters of Houlsby et al. (2019) and for ResNet-18, the residual adapters of Rebuffi et al. (2017).
Algorithms and Experimental Pipeline. For full model personalization, we consider three baselines: (i) Finetune, where each device finetunes (using SGD locally) its personal full model starting from a learned common model, (ii) Ditto (Li et al., 2021), which is finetuning with ℓ2 regularization, and, (iii) pFedMe (Dinh et al., 2020), which minimizes the objective (2). All methods, including FedSim, FedAlt and the baselines, are initialized with a global model trained with FedAvg.
4.1 EXPERIMENTAL RESULTS
Partial personalization nearly matches full personalization and can sometimes outperform it. Table 3 shows the average test accuracy across all devices of different FL algorithms. We see that on the StackOverflow dataset, output layer personalization (25.05%) makes up nearly 90% of the gap between the non-personalized baseline (23.82%) and full personalization (25.21%). On EMNIST, adapter personalization exactly matches full personalization. Most surprisingly, on GLDv2, adapter personalization outperforms full personalization by 3.5pp (percentage points).
This success of adapter personalization can be explained partly by the nature of GLDv2. On average, the training data on each device contains 25 classes out of a possible 2028 while the testing data contains 10 classes not seen in its own training data. These unseen classes account for nearly 23% of all testing data. Personalizing the full model is susceptible to “forgetting” the original task (Kirkpatrick et al., 2017), making it harder to get these unseen classes right. Such catastrophic forgetting is worse when finetuning on a very small local dataset, as we often have in FL. On the other hand, personalizing the adapters does not suffer as much from this issue (Rebuffi et al., 2017).
Partial personalization only requires a fraction of the parameters to be personalized. Figure 3 shows that the number of personalized parameters required to compete with full model personalization is rather small. On StackOverflow, personalizing 1.2% of the parameters with adapters captures 72% of the accuracy boost from personalizing all 5.7M parameters; this can be improved to nearly 90% by personalizing 14% of the parameters (output layer). Likewise, we match full personalization on EMNIST and exceed it on GLDv2 with adapters, personalizing 11.5-12.5% of parameters.
The best personalized architecture is model and task dependent. Table 3 shows that personalizing the final transformer layer (denoted as “Output Layer”) achieves the best performance for StackOverflow, while the residual adapter achieves the best performance for GLDv2 and EMNIST. This shows that the approach of personalizing a fixed model part, as in several past works, is suboptimal. Our framework allows for the use of domain knowledge to determine customized personalization.
Finetuning is competitive with other full personalization methods. Full finetuning matches the performance of pFedMe and Ditto on StackOverflow and EMNIST. On GLDv2, however, pFedMe outperforms finetuning by 0.07pp, but is still 3.5pp worse than adapter personalization.
FedAlt outperforms FedSim for partial personalization. If the optimization problem (3) were convex, we would expect similar performance from FedAlt and FedSim. However, with nonconvex optimization problems such as the ones considered here, the choice of the optimization algorithm often affects the quality of the solution found. We see from Table 4 that FedAlt is almost always better than FedSim by a small margin, e.g., 0.08pp for StackOverflow/Adapter and 0.3pp for GLDv2/Input Layer. FedSim in turn yields a higher accuracy than simply finetuning the personalized part of the model, by a large margin, e.g., 0.12pp for StackOverflow/Output Layer and 2.55pp for GLDv2/Adapter.
4.2 EFFECTS OF PERSONALIZATION ON PER-DEVICE GENERALIZATION
Personalization hurts the test accuracy on some devices. Figure 4 shows the change in training and test accuracy of each device, compared with a non-personalized model trained by FedAvg. We see that personalization leads to an improvement in training accuracy across all devices, but a reduction in test accuracy on some of the devices over the non-personalized baseline. In particular, devices whose testing performance is hurt by personalization are mostly on the left side of the plot, meaning that they have a relatively small number of training samples. On the other hand, many devices with the most improved test accuracy also appear on the left side, signaling the benefit of personalization. Therefore, there is a large variation of results for devices with few samples.
Additional experiments (see Appendix C) show that using ℓ2 regularization, as in (2), or weight decay does not mitigate this issue. In particular, increasing regularization strength (less personalization) can reduce the spread of per-device accuracy, but only leads to a worse average accuracy that is close to using a common model. Other simple strategies such as dropout also do not fix this issue.
An ideal personalized method would boost performance on most of the devices without causing a reduction in (test) accuracy on any device. Realizing this goal calls for a sound statistical analysis for personalized FL and may require sophisticated methods for local performance diagnosis and more structured regularization. These are very promising directions for future research.
5 DISCUSSION
In addition to a much smaller memory footprint than full model personalization and being less susceptible to catastrophic forgetting, partial model personalization has other advantages. For example, it reduces the amount of communication between the server and the devices because only the shared parameters are transmitted. While the communication saving may not be significant (especially when the personal parameters are only a small fraction of the full model), communicating only the shared parameters may have significant implications for privacy. Intuitively, it can be harder to infer private information from partial model information. This is especially the case if the more sensitive features of the data are processed through personal components of the model that are kept local at the devices. For example, we speculate that less noise needs to be added to the communicated parameters in order to satisfy differential privacy requirements (Abadi et al., 2016). This is a very promising direction for future research.
REPRODUCIBILITY STATEMENT
For theoretical results, we state and discuss the assumptions in Appendix A. The full proofs of all theoretical statements are also given there.
For our numerical results, we take multiple steps for reproducibility. First, we run each numerical experiment for five random seeds, and report both the mean and standard deviation over these runs. Second, we only use publicly available datasets and report the preprocessing at length in Appendix B. Third, we give the full list of hyperparameters used in our experiments in Table 8 in Appendix B. Finally, we will publicly release the code to reproduce our experimental results.
ETHICS STATEMENT
The proposed framework for partial model personalization is immediately applicable to a range of practical federated learning applications on edge devices such as text prediction and speech recognition. One of the key considerations of federated learning is privacy. Partial model personalization maintains all the privacy benefits of current non-personalized federated learning systems. Indeed, our approach is compatible with techniques to enhance privacy such as differential privacy and secure aggregation. We also speculate that partial personalization has the potential for further reducing the privacy footprint; an investigation of this subject is beyond the scope of this work and is an interesting direction for future work.
On the flip side, we also observed in experiments that personalization (both full or partial) leads to a reduction in test performance on some of the devices. This has important implications for fairness, and calls for further research into the statistical aspects of personalization, performance diagnostics as well as more nuanced definitions of fairness in federated learning.
Appendix
Table of Contents
A Convergence Analysis: Full Proofs
  A.1 Review of Setup and Assumptions
  A.2 Convergence Analysis of FedSim
  A.3 Convergence Analysis of FedAlt
  A.4 Technical Lemmas

B Experiments: Detailed Setup and Hyperparameters
  B.1 Datasets, Tasks and Models
  B.2 Experimental Pipeline and Baselines
  B.3 Hyperparameters and Evaluation Details

C Experiments: Additional Results
  C.1 Ablation: Final Finetuning for FedAlt and FedSim
  C.2 Effect of Personalization on Per-Device Generalization
  C.3 Partial Personalization for Stateless Devices
A CONVERGENCE ANALYSIS: FULL PROOFS
We give the full convergence proofs here. The outline of this section is:
• §A.1: Review of setup and assumptions;
• §A.2: Convergence analysis of FedSim and the full proof of Theorem 1;
• §A.3: Convergence analysis of FedAlt and the full proof of Theorem 2;
• §A.4: Technical lemmas used in the analysis.
A.1 REVIEW OF SETUP AND ASSUMPTIONS
We consider a federated learning system with $n$ devices. Let the loss function on device $i$ be $F_i(u, v_i)$, where $u \in \mathbb{R}^{d_0}$ denotes the shared parameters across all devices and $v_i \in \mathbb{R}^{d_i}$ denotes the personal parameters at device $i$. We aim to minimize the function
$$F(u, V) := \frac{1}{n}\sum_{i=1}^{n} F_i(u, v_i), \qquad (7)$$
where $V = (v_1, \cdots, v_n)$ is a concatenation of all the personalized parameters. This is a special case of (3) with equal per-device weights, i.e., $\alpha_i = 1/n$. Recall that we assume that $F$ is bounded from below by $F^\star$.
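As a quick illustration of the objective (7), the sketch below evaluates $F(u, V)$ from per-device loss callables; the interface is hypothetical and chosen only to mirror the notation above.

```python
def partially_personalized_objective(u, V, device_losses):
    """Evaluate F(u, V) = (1/n) * sum_i F_i(u, v_i) with equal weights alpha_i = 1/n.
    `device_losses` is a list of per-device loss callables F_i(u, v_i); V holds the
    personal parameters v_i, whose shapes may differ across devices."""
    n = len(device_losses)
    return sum(F_i(u, v_i) for F_i, v_i in zip(device_losses, V)) / n
```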
For convenience, we reiterate Assumptions 1, 2 and 3 from the main paper as Assumptions 1′, 2′ and 3′ below respectively, with some additional comments and discussion.

Assumption 1′ (Smoothness). For each device $i = 1, \ldots, n$, the objective $F_i$ is smooth, i.e., it is continuously differentiable and,
(a) $u \mapsto \nabla_u F_i(u, v_i)$ is $L_u$-Lipschitz for all $v_i$,
(b) $v_i \mapsto \nabla_v F_i(u, v_i)$ is $L_v$-Lipschitz for all $u$,
(c) $v_i \mapsto \nabla_u F_i(u, v_i)$ is $L_{uv}$-Lipschitz for all $u$, and,
(d) $u \mapsto \nabla_v F_i(u, v_i)$ is $L_{vu}$-Lipschitz for all $v_i$.
Further, we assume for some $\chi > 0$ that
$$\max\{L_{uv}, L_{vu}\} \le \chi\sqrt{L_u L_v}.$$
The smoothness assumption is a standard one. We can assume without loss of generality that the cross-Lipschitz coefficients $L_{uv}, L_{vu}$ are equal. Indeed, if $F_i$ is twice continuously differentiable, we can show that $L_{uv}, L_{vu}$ are both equal to the operator norm $\|\nabla^2_{uv} F_i(u, v_i)\|_{\mathrm{op}}$ of the mixed second derivative matrix. Further, $\chi$ denotes the extent to which $u$ impacts the gradient of $v_i$ and vice-versa.
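As an illustrative example (not taken from the paper), consider the quadratic coupling
$$F_i(u, v_i) = \tfrac{1}{2}\|u\|^2 + \tfrac{1}{2}\|v_i\|^2 + u^\top A v_i$$
for a fixed matrix $A$. Then $\nabla_u F_i = u + A v_i$ and $\nabla_v F_i = v_i + A^\top u$, so $L_u = L_v = 1$, $L_{uv} = L_{vu} = \|A\|_{\mathrm{op}}$, and hence $\chi = \|A\|_{\mathrm{op}}$: the cross-sensitivity $\chi$ measures exactly the strength of the coupling between the shared and personal blocks.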
Our next assumption is about the variance of the stochastic gradients, and is standard in the literature. Compared to the main paper, we adopt a more precise notation for stochastic gradients.

Assumption 2′ (Bounded Variance). Let $\mathcal{D}_i$ denote a probability distribution over the data space $\mathcal{Z}$ on device $i$. There exist functions $G_{i,u}$ and $G_{i,v}$ which are unbiased estimates of $\nabla_u F_i$ and $\nabla_v F_i$ respectively. That is, for all $u, v_i$:
$$\mathbb{E}_{z\sim\mathcal{D}_i}[G_{i,u}(u, v_i, z)] = \nabla_u F_i(u, v_i), \quad \text{and} \quad \mathbb{E}_{z\sim\mathcal{D}_i}[G_{i,v}(u, v_i, z)] = \nabla_v F_i(u, v_i).$$
Furthermore, the variance of these estimators is at most $\sigma_u^2$ and $\sigma_v^2$ respectively. That is,
$$\mathbb{E}_{z\sim\mathcal{D}_i}\|G_{i,u}(u, v_i, z) - \nabla_u F_i(u, v_i)\|^2 \le \sigma_u^2,$$
$$\mathbb{E}_{z\sim\mathcal{D}_i}\|G_{i,v}(u, v_i, z) - \nabla_v F_i(u, v_i)\|^2 \le \sigma_v^2.$$
In practice, one usually has $G_{i,u}(u, v_i, z) = \nabla_u f_i((u, v_i), z)$, which is the gradient of the loss on datapoint $z \sim \mathcal{D}_i$ under the model $(u, v_i)$, and similarly for $G_{i,v}$. Finally, we make a gradient diversity assumption.

Assumption 3′ (Partial Gradient Diversity). There exist $\delta \ge 0$ and $\rho \ge 0$ such that for all $u$ and $V$,
$$\frac{1}{n}\sum_{i=1}^{n}\|\nabla_u F_i(u, v_i) - \nabla_u F(u, V)\|^2 \le \delta^2 + \rho^2\|\nabla_u F(u, V)\|^2. \qquad (8)$$
Algorithm 4 FedSim: Simultaneous update of shared and personal parameters
Input: Initial iterates $u^{(0)}, V^{(0)}$, number of communication rounds $T$, number of devices per round $m$, number of local updates $\tau$, local step sizes $\gamma_u, \gamma_v$.
1: for $t = 0, 1, \cdots, T-1$ do
2:   Sample $m$ devices from $[n]$ without replacement into $S^{(t)}$
3:   for each selected device $i \in S^{(t)}$ in parallel do
4:     Initialize $v^{(t)}_{i,0} = v^{(t)}_i$ and $u^{(t)}_{i,0} = u^{(t)}$
5:     for $k = 0, \cdots, \tau-1$ do    ▷ Update all parameters jointly
6:       Sample data $z^{(t)}_{i,k} \sim \mathcal{D}_i$
7:       $v^{(t)}_{i,k+1} = v^{(t)}_{i,k} - \gamma_v\, G_{i,v}\big(u^{(t)}_{i,k}, v^{(t)}_{i,k}, z^{(t)}_{i,k}\big)$
8:       $u^{(t)}_{i,k+1} = u^{(t)}_{i,k} - \gamma_u\, G_{i,u}\big(u^{(t)}_{i,k}, v^{(t)}_{i,k}, z^{(t)}_{i,k}\big)$
9:     Update $v^{(t+1)}_i = v^{(t)}_{i,\tau}$ and $u^{(t+1)}_i = u^{(t)}_{i,\tau}$
10:  Update $u^{(t+1)} = \frac{\sum_{i\in S^{(t)}}\alpha_i u^{(t+1)}_i}{\sum_{i\in S^{(t)}}\alpha_i}$ at the server with secure aggregation
11: return $u^{(T)}, v^{(T)}_1, \cdots, v^{(T)}_n$
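The following is a hedged Python sketch of one FedSim round corresponding to Algorithm 4 with equal weights $\alpha_i = 1/n$; the `devices[i] = (grad_u, grad_v, sample)` interface and all function names are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sim(u, v_i, grad_u, grad_v, sample, tau, gamma_u, gamma_v):
    """LocalSim: tau simultaneous SGD steps; both stochastic gradients are
    evaluated at the same point before either parameter block is updated."""
    for _ in range(tau):
        z = sample()
        g_u, g_v = grad_u(u, v_i, z), grad_v(u, v_i, z)
        u, v_i = u - gamma_u * g_u, v_i - gamma_v * g_v
    return u, v_i

def fedsim_round(u, V, devices, m, tau, gamma_u, gamma_v):
    """One communication round: sample m devices without replacement, run
    LocalSim on each, keep the personal parameters local, and average the
    returned shared parameters at the server."""
    S = rng.choice(len(devices), size=m, replace=False)
    shared_updates = []
    for i in S:
        grad_u, grad_v, sample = devices[i]
        u_i, V[i] = local_sim(u, V[i], grad_u, grad_v, sample, tau, gamma_u, gamma_v)
        shared_updates.append(u_i)
    return np.mean(shared_updates, axis=0), V
```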
This assumption is analogous to the bounded variance assumption (Assumption 2′), but with the stochasticity coming from the sampling of devices. It characterizes how much local steps on one device help or hurt convergence globally. Similar gradient diversity assumptions are often used for analyzing non-personalized federated learning (Koloskova et al., 2020; Karimireddy et al., 2020). Finally, it suffices for the partial gradient diversity assumption to only hold at the iterates (u(t), V (t)) generated by either FedSim or FedAlt.
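As an illustrative (and purely empirical) companion to Assumption 3′, the sketch below evaluates the left-hand side of (8) at one iterate from a collection of per-device gradients; the function is an assumption for demonstration, not part of the analysis.

```python
import numpy as np

def diversity_terms(per_device_grads_u):
    """Return (lhs, grad_norm_sq), where lhs = (1/n) * sum_i ||g_i - g_bar||^2,
    g_i = grad_u F_i(u, v_i) and g_bar is their average, i.e. grad_u F(u, V).
    Comparing lhs against delta**2 + rho**2 * grad_norm_sq for candidate
    (delta, rho) gives a rough check of Assumption 3' at the given iterate."""
    G = np.stack(per_device_grads_u)           # shape (n, d0)
    g_bar = G.mean(axis=0)
    lhs = float(np.mean(np.sum((G - g_bar) ** 2, axis=1)))
    return lhs, float(np.sum(g_bar ** 2))
```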
A.2 CONVERGENCE ANALYSIS OF FEDSIM
We give the full form of FedSim in Algorithm 4 for the general case of unequal $\alpha_i$'s but focus on $\alpha_i = 1/n$ for the analysis. In order to simplify presentation, we denote $V^{(t)} = (v^{(t)}_1, \ldots, v^{(t)}_n)$ and define the following shorthand for gradient terms
$$\Delta^{(t)}_u = \big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2, \quad \text{and} \quad \Delta^{(t)}_v = \frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2.$$
For convenience, we restate Theorem 1 from the main paper.

Theorem 1 (Convergence of FedSim). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedSim are chosen as $\gamma_u = \eta/(L_u\tau)$ and $\gamma_v = \eta/(L_v\tau)$ with
$$\eta \le \min\left\{\frac{1}{12(1+\chi^2)(1+\rho^2)},\;\; \frac{\sqrt{m/n}}{196(1-\tau^{-1})(1+\chi^2)(1+\rho^2)}\right\}.$$
Then, ignoring absolute constants, we have
$$\frac{1}{T}\sum_{t=0}^{T-1}\left(\frac{1}{L_u}\mathbb{E}\big[\Delta^{(t)}_u\big] + \frac{m}{nL_v}\mathbb{E}\big[\Delta^{(t)}_v\big]\right) \le \frac{\Delta F_0}{\eta T} + \eta(1+\chi^2)\left(\frac{\sigma_u^2+\delta^2\big(1-\frac{m}{n}\big)}{mL_u} + \frac{m\sigma_v^2}{nL_v}\right) + \eta^2(1-\tau^{-1})(1+\chi^2)\left(\frac{\sigma_u^2+\delta^2}{L_u} + \frac{\sigma_v^2}{L_v}\right). \qquad (6)$$
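The trade-off between the $O(1/(\eta T))$, $O(\eta)$ and $O(\eta^2)$ terms in (6) can also be explored numerically. The sketch below scans the step-size scale $\eta$ for hypothetical problem constants; it is purely illustrative and the numbers are not from the experiments.

```python
import numpy as np

# Hypothetical constants, chosen only to illustrate the shape of the bound (6).
T, tau, m, n = 1000, 10, 10, 100
L_u = L_v = 1.0
sigma_u2 = sigma_v2 = delta2 = 1.0
chi2, rho2, dF0 = 0.5, 0.5, 1.0

def fedsim_rhs(eta):
    """Right-hand side of the FedSim bound (6) as a function of eta."""
    t1 = dF0 / (eta * T)
    t2 = eta * (1 + chi2) * ((sigma_u2 + delta2 * (1 - m / n)) / (m * L_u)
                             + m * sigma_v2 / (n * L_v))
    t3 = eta ** 2 * (1 - 1 / tau) * (1 + chi2) * ((sigma_u2 + delta2) / L_u
                                                  + sigma_v2 / L_v)
    return t1 + t2 + t3

etas = np.logspace(-3, 0, 200)
best_eta = etas[np.argmin([fedsim_rhs(e) for e in etas])]
print(best_eta, fedsim_rhs(best_eta))
```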
Before proving the theorem, we give the following corollary with optimized learning rates.

Corollary 3. Consider the setting of Theorem 1 and let $\varepsilon > 0$ be given. Suppose we set the learning rates $\gamma_u = \eta/(\tau L_u)$ and $\gamma_v = \eta/(\tau L_v)$, where (ignoring absolute constants),
$$\eta = \frac{\varepsilon}{\left(\frac{\delta^2}{L_u}\big(1-\frac{m}{n}\big) + \frac{\sigma_u^2}{L_u} + \frac{\sigma_v^2 m}{L_u n}\right)(1+\chi^2)} \;\wedge\; \left(\frac{\varepsilon}{\left(\frac{\delta^2}{L_u} \vee \frac{\sigma_u^2}{L_u} \vee \frac{\sigma_v^2}{L_v}\right)(1-\tau^{-1})(1+\chi^2)}\right)^{1/2} \;\wedge\; \frac{1}{(1+\chi^2)(1+\rho^2)} \;\wedge\; \left(\frac{m/n}{(1-\tau^{-1})(1+\rho^2)(1+\chi^2)}\right)^{1/2}.$$
We have,
$$\frac{1}{T}\sum_{t=0}^{T-1}\left(\frac{1}{L_u}\mathbb{E}\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 + \frac{m}{L_v n^2}\sum_{i=1}^{n}\mathbb{E}\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2\right) \le \varepsilon$$
after $T$ communication rounds, where, ignoring absolute constants,
$$T \le \frac{\Delta F_0(1+\chi^2)}{\varepsilon^2}\left(\frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{mL_u} + \frac{m\sigma_v^2}{nL_v}\right) + \frac{\Delta F_0\sqrt{(1-\tau^{-1})(1+\chi^2)}}{\varepsilon^{3/2}}\left(\frac{\sigma_u+\delta}{\sqrt{L_u}} + \frac{\sigma_v}{\sqrt{L_v}}\right) + \frac{\Delta F_0}{\varepsilon}(1+\chi^2)(1+\rho^2)\left(1 + \sqrt{\frac{(1-\tau^{-1})n}{m}}\right).$$

Proof. The choice of the constant $\eta$ ensures that each of the constant terms in the bound of Theorem 1 is $O(\varepsilon)$. The final rate is now $O\big(\Delta F_0/(\eta\varepsilon)\big)$; plugging in the value of $\eta$ completes the proof.
We now prove Theorem 1.
Proof of Theorem 1. The proof mainly applies the smoothness upper bound to write out a descent condition with suitably small noise terms. We start with some notation.
Notation. Let $\mathcal{F}^{(t)}$ denote the $\sigma$-algebra generated by $\big(u^{(t)}, V^{(t)}\big)$ and denote $\mathbb{E}_t[\cdot] = \mathbb{E}[\cdot\,|\,\mathcal{F}^{(t)}]$. For all devices, including those not selected in each round, we define virtual sequences $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$ as the SGD updates in Algorithm 4 for all devices regardless of whether they are selected. For the selected devices $i \in S^{(t)}$, we have $\big(u^{(t)}_{i,k}, v^{(t)}_{i,k}\big) = \big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big)$. Note now that the random variables $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$ are independent of the device selection $S^{(t)}$. The updates for the devices $i \in S^{(t)}$ are given by
$$v^{(t+1)}_i = v^{(t)}_i - \gamma_v\sum_{k=0}^{\tau-1}G_{i,v}\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k}\big),$$
and the server update is given by
$$u^{(t+1)} = u^{(t)} - \frac{\gamma_u}{m}\sum_{i\in S^{(t)}}\sum_{k=0}^{\tau-1}G_{i,u}\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k}\big). \qquad (9)$$
Proof Outline. We use the smoothness of $F_i$, more precisely Lemma 16, to obtain
$$F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big) \le \underbrace{\big\langle \nabla_u F\big(u^{(t)}, V^{(t)}\big),\, u^{(t+1)} - u^{(t)}\big\rangle}_{T_{1,u}} + \underbrace{\frac{1}{n}\sum_{i=1}^{n}\big\langle \nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big),\, v^{(t+1)}_i - v^{(t)}_i\big\rangle}_{T_{1,v}} + \underbrace{\frac{L_u(1+\chi^2)}{2}\big\|u^{(t+1)} - u^{(t)}\big\|^2}_{T_{2,u}} + \underbrace{\frac{1}{n}\sum_{i=1}^{n}\frac{L_v(1+\chi^2)}{2}\big\|v^{(t+1)}_i - v^{(t)}_i\big\|^2}_{T_{2,v}}. \qquad (10)$$
Our goal will be to bound each of these terms to get a descent condition from each step of the form
$$\mathbb{E}_t\big[F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big)\big] \le -\frac{\gamma_u\tau}{8}\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 - \frac{\gamma_v\tau m}{8n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2 + O(\gamma_u^2 + \gamma_v^2),$$
where the $O(\gamma_u^2 + \gamma_v^2)$ terms are controlled using the bounded variance and gradient diversity assumptions. Telescoping this descent condition gives the final bound.
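To make the last step explicit, here is a sketch of the telescoping argument (using the constants of the displayed descent condition and $F \ge F^\star$; this is a restatement of the step above, not a new derivation):
$$\sum_{t=0}^{T-1}\left(\frac{\gamma_u\tau}{8}\,\mathbb{E}\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 + \frac{\gamma_v\tau m}{8n^2}\sum_{i=1}^{n}\mathbb{E}\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2\right) \le F\big(u^{(0)}, V^{(0)}\big) - F^\star + T\cdot O(\gamma_u^2 + \gamma_v^2),$$
and dividing through by $T$ and the step-size coefficients, after substituting $\gamma_u = \eta/(L_u\tau)$ and $\gamma_v = \eta/(L_v\tau)$, yields a bound of the shape of (6).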
Main Proof. Towards this end, we prove non-asymptotic bounds on each of the terms $T_{1,v}$, $T_{1,u}$, $T_{2,v}$ and $T_{2,u}$, in Claims 4 to 7 respectively. We then invoke them to get the bound
$$\begin{aligned} \mathbb{E}_t\big[F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big)\big] \le{}& -\frac{\gamma_u\tau}{4}\Delta^{(t)}_u - \frac{\gamma_v\tau m}{4n}\Delta^{(t)}_v + \frac{L_u(1+\chi^2)\gamma_u^2\tau^2}{2}\left(\sigma_u^2 + \frac{12\delta^2}{m}(1-m/n)\right) + \frac{L_v(1+\chi^2)\gamma_v^2\tau^2\sigma_v^2 m}{2n} \\ & + \frac{2}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\|u^{(t)}_{i,k} - u^{(t)}\big\|^2\Big(L_u^2\gamma_u + \tfrac{m}{n}\chi^2 L_u L_v\gamma_v\Big) + \frac{2}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\|v^{(t)}_{i,k} - v^{(t)}_i\big\|^2\Big(\tfrac{m}{n}L_v^2\gamma_v + \chi^2 L_u L_v\gamma_u\Big). \end{aligned} \qquad (11)$$
Note that we simplified some constants appearing on the gradient norm terms using
$$\gamma_u \le \big(12 L_u(1+\chi^2)(1+\rho^2)\tau\big)^{-1} \quad \text{and} \quad \gamma_v \le \big(6 L_v(1+\chi^2)\tau\big)^{-1}.$$
Our next step is to bound the last two lines of (11) with Lemma 8 and invoke the gradient diversity assumption (Assumption 3′) as
$$\frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_u F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2 \le \delta^2 + (1+\rho^2)\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2.$$
This gives, after plugging in the learning rates and further simplifying the constants,
$$\mathbb{E}_t\big[F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big)\big] \le -\frac{\eta\,\Delta^{(t)}_u}{8L_u} - \frac{\eta m\,\Delta^{(t)}_v}{8L_v n} + \eta^2(1+\chi^2)\left(\frac{\sigma_u^2}{2L_u} + \frac{m\sigma_v^2}{nL_v} + \frac{6\delta^2}{L_u m}\Big(1-\frac{m}{n}\Big)\right) + \eta^3(1+\chi^2)(1-\tau^{-1})\left(\frac{24\delta^2}{L_u} + \frac{4\sigma_u^2}{L_u} + \frac{4\sigma_v^2}{L_u}\right).$$
Taking full expectation, telescoping the series over $t = 0, \cdots, T-1$ and rearranging the resulting terms give the desired bound in Theorem 1.
Claim 4 (Bounding $T_{1,v}$). Let $T_{1,v}$ be defined as in (10). We have,
$$\mathbb{E}_t[T_{1,v}] \le -\frac{\gamma_v\tau m}{2n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2 + \frac{\gamma_v m}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[\chi^2 L_u L_v\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + L_v^2\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2\Big].$$

Proof. Define $T_{1,v,i}$ to be the contribution of the $i$th term to $T_{1,v}$. For $i \notin S^{(t)}$, we have that $T_{1,v,i} = 0$, since $v^{(t+1)}_i = v^{(t)}_i$. On the other hand, for $i \in S^{(t)}$, we use the unbiasedness of the gradient estimator $G_{i,v}$ and the independence of $z^{(t)}_{i,k}$ from $u^{(t)}_{i,k}, v^{(t)}_{i,k}$ to get
$$\begin{aligned} \mathbb{E}_t[T_{1,v,i}] &= -\gamma_v\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big\langle \nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big),\, \nabla_v F_i\big(u^{(t)}_{i,k}, v^{(t)}_{i,k}\big)\Big\rangle = -\gamma_v\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big\langle \nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big),\, \nabla_v F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big)\Big\rangle \\ &= -\gamma_v\tau\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2 - \gamma_v\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big\langle \nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big),\, \nabla_v F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big) - \nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\Big\rangle \\ &\le -\frac{\gamma_v\tau}{2}\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2 + \frac{\gamma_v}{2}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\|\nabla_v F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big) - \nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2. \end{aligned} \qquad (12)$$
For the second term, we add and subtract $\nabla_v F_i\big(u^{(t)}, \tilde v^{(t)}_{i,k}\big)$ and use smoothness to get
$$\big\|\nabla_v F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big) - \nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2 \le 2\chi^2 L_u L_v\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + 2L_v^2\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2. \qquad (13)$$
Since the right hand side of this bound is independent of $S^{(t)}$, we get,
$$\mathbb{E}_t[T_{1,v}] = \frac{m}{n}\,\mathbb{E}_t\Bigg[\frac{1}{m}\sum_{i\in S^{(t)}}T_{1,v,i}\Bigg] = \frac{m}{n^2}\sum_{i=1}^{n}\mathbb{E}_t[T_{1,v,i}],$$
and plugging in (12) and (13) completes the proof.
Claim 5 (Bounding $T_{1,u}$). Consider $T_{1,u}$ defined in (10). We have the bound,
$$\mathbb{E}_t[T_{1,u}] \le -\frac{\gamma_u\tau}{2}\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 + \frac{\gamma_u}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[L_u^2\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2\Big].$$

Proof. Due to the independence of $S^{(t)}$ from $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$, we have,
$$\mathbb{E}_t\big[u^{(t+1)} - u^{(t)}\big] = -\gamma_u\,\mathbb{E}_t\Bigg[\frac{1}{m}\sum_{i\in S^{(t)}}\sum_{k=0}^{\tau-1}\nabla_u F_i\big(u^{(t)}_{i,k}, v^{(t)}_{i,k}\big)\Bigg] = -\gamma_u\,\mathbb{E}_t\Bigg[\frac{1}{m}\sum_{i\in S^{(t)}}\sum_{k=0}^{\tau-1}\nabla_u F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big)\Bigg] = -\frac{\gamma_u}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[\nabla_u F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big)\Big],$$
where the last equality took an expectation over $S^{(t)}$, which is independent of $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$. Now, using the same sequence of arguments as Claim 4, we have,
$$\begin{aligned} \mathbb{E}_t\Big\langle \nabla_u F\big(u^{(t)}, V^{(t)}\big),\, u^{(t+1)} - u^{(t)}\Big\rangle &= -\gamma_u\sum_{k=0}^{\tau-1}\mathbb{E}_t\Bigg\langle \nabla_u F\big(u^{(t)}, V^{(t)}\big),\, \frac{1}{n}\sum_{i=1}^{n}\nabla_u F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big)\Bigg\rangle \\ &\le -\frac{\gamma_u\tau}{2}\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 + \frac{\gamma_u}{2}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Bigg\|\frac{1}{n}\sum_{i=1}^{n}\nabla_u F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big) - \nabla_u F\big(u^{(t)}, V^{(t)}\big)\Bigg\|^2 \\ &\overset{(*)}{\le} -\frac{\gamma_u\tau}{2}\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 + \frac{\gamma_u}{2n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\|\nabla_u F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big) - \nabla_u F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2 \\ &\le -\frac{\gamma_u\tau}{2}\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 + \frac{\gamma_u}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[L_u^2\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + L_{uv}^2\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2\Big], \end{aligned}$$
where the inequality $(*)$ follows from Jensen's inequality as
$$\Bigg\|\frac{1}{n}\sum_{i=1}^{n}\nabla_u F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big) - \nabla_u F\big(u^{(t)}, V^{(t)}\big)\Bigg\|^2 \le \frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_u F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big) - \nabla_u F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2.$$
Claim 6 (Bounding $T_{2,v}$). Consider $T_{2,v}$ as defined in (10). We have the bound,
$$\mathbb{E}_t[T_{2,v}] \le \frac{3L_v(1+\chi^2)\gamma_v^2\tau^2 m}{2n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2 + \frac{L_v(1+\chi^2)\gamma_v^2\tau^2 m\sigma_v^2}{2n} + \frac{3L_v(1+\chi^2)\gamma_v^2\tau m}{2n^2}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[L_v^2\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 + \chi^2 L_u L_v\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2\Big].$$

Proof. We start with
$$\begin{aligned} \mathbb{E}_t\big\|\tilde v^{(t)}_{i,\tau} - v^{(t)}_i\big\|^2 &= \gamma_v^2\,\mathbb{E}_t\Bigg\|\sum_{k=0}^{\tau-1}G_{i,v}\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k}\big)\Bigg\|^2 \le \gamma_v^2\tau\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\|G_{i,v}\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k}\big)\big\|^2 \\ &\le \gamma_v^2\tau^2\sigma_v^2 + \gamma_v^2\tau\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\|\nabla_v F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big)\big\|^2 \\ &\le \gamma_v^2\tau^2\sigma_v^2 + 3\gamma_v^2\tau^2\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2 + 3\gamma_v^2\tau\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[L_v^2\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 + \chi^2 L_u L_v\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2\Big]. \end{aligned}$$
Using (a) $v^{(t+1)}_i = \tilde v^{(t)}_{i,\tau}$ for $i \in S^{(t)}$, and, (b) $S^{(t)}$ is independent from $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$, we get,
$$\mathbb{E}_t[T_{2,v}] = \frac{L_v(1+\chi^2)m}{2n}\,\mathbb{E}_t\Bigg[\frac{1}{m}\sum_{i\in S^{(t)}}\big\|\tilde v^{(t)}_{i,\tau} - v^{(t)}_i\big\|^2\Bigg] \le \frac{L_v(1+\chi^2)m}{2n^2}\sum_{i=1}^{n}\mathbb{E}_t\big\|\tilde v^{(t)}_{i,\tau} - v^{(t)}_i\big\|^2.$$
Plugging in the bound on $\mathbb{E}_t\big\|\tilde v^{(t)}_{i,\tau} - v^{(t)}_i\big\|^2$ completes the proof.
Claim 7 (Bounding $T_{2,u}$). Consider $T_{2,u}$ as defined in (10). We have,
$$\mathbb{E}_t[T_{2,u}] \le \frac{L_u(1+\chi^2)\gamma_u^2\tau^2}{2m}\Big(\sigma_u^2 + 12\delta^2\Big(1-\frac{m}{n}\Big)\Big) + 3L_u(1+\chi^2)\gamma_u^2\tau^2(1+\rho^2)\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 + \frac{3L_u(1+\chi^2)\gamma_u^2\tau}{2n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[L_u^2\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2\Big].$$

Proof. We proceed with the first two inequalities as in the proof of Claim 6 to get
$$\mathbb{E}_t\big\|u^{(t+1)} - u^{(t)}\big\|^2 \le \frac{\gamma_u^2\tau^2\sigma_u^2}{m} + \gamma_u^2\tau\sum_{k=0}^{\tau-1}\underbrace{\mathbb{E}_t\Bigg\|\frac{1}{m}\sum_{i\in S^{(t)}}\nabla_u F_i\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big)\Bigg\|^2}_{=:\,T_{3,k}}.$$
For $T_{3,k}$, (a) we add and subtract $\nabla_u F\big(u^{(t)}, V^{(t)}\big)$ and $\nabla_u F_i\big(u^{(t)}, \tilde v^{(t)}_{i,k}\big)$, (b) invoke the squared triangle inequality, and, (c) use smoothness to get
$$T_{3,k} \le 6\,\mathbb{E}_t\Bigg\|\frac{1}{m}\sum_{i\in S^{(t)}}\nabla_u F_i\big(u^{(t)}, v^{(t)}_i\big) - \nabla_u F\big(u^{(t)}, V^{(t)}\big)\Bigg\|^2 + 6\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 + 3\,\mathbb{E}_t\Bigg[\frac{1}{m}\sum_{i\in S^{(t)}}\Big(L_u^2\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2\Big)\Bigg].$$
For the first term, we use the fact that $S^{(t)}$ is obtained by sampling without replacement to apply Lemma 17 together with the gradient diversity assumption to get
$$\mathbb{E}_t\Bigg\|\frac{1}{m}\sum_{i\in S^{(t)}}\nabla_u F_i\big(u^{(t)}, v^{(t)}_i\big) - \nabla_u F\big(u^{(t)}, V^{(t)}\big)\Bigg\|^2 \le \frac{1}{m}\Big(\frac{n-m}{n-1}\Big)\frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_u F_i\big(u^{(t)}, v^{(t)}_i\big) - \nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 \le \frac{1}{m}\Big(\frac{n-m}{n-1}\Big)\Big(\delta^2 + \rho^2\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2\Big).$$
Therefore,
$$T_{3,k} \le \frac{12\delta^2}{m}\Big(1-\frac{m}{n}\Big) + 6(1+\rho^2)\big\|\nabla_u F\big(u^{(t)}, V^{(t)}\big)\big\|^2 + \frac{3}{n}\sum_{i=1}^{n}\mathbb{E}_t\Big[L_u^2\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2\Big],$$
where we also used the independence between $S^{(t)}$ and $\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}\big)$. Plugging this into the expression for $\mathbb{E}_t\big\|u^{(t+1)} - u^{(t)}\big\|^2$ completes the proof.
Lemma 8. Let $F_i$ satisfy Assumptions 1′-3′, and consider the iterates
$$u_{k+1} = u_k - \gamma_u G_{i,u}(u_k, v_k, z_k), \quad \text{and,} \quad v_{k+1} = v_k - \gamma_v G_{i,v}(u_k, v_k, z_k),$$
for $k = 0, \cdots, \tau-1$, where $z_k \sim \mathcal{D}_i$. Suppose the learning rates satisfy $\gamma_u = c_u/(\tau L_u)$ and $\gamma_v = c_v/(\tau L_v)$ with $c_u, c_v \le 1/\sqrt{6\max\{1, \chi^{-2}\}}$. Further, define,
$$A = \gamma_u L_u^2 + f\chi^2\gamma_v L_u L_v, \quad \text{and,} \quad B = f\gamma_v L_v^2 + \chi^2\gamma_u L_u L_v,$$
where $f \in (0, 1]$ is given. Then, we have the bound,
$$\sum_{k=0}^{\tau-1}\mathbb{E}\big[A\|u_k - u_0\|^2 + B\|v_k - v_0\|^2\big] \le 4\tau^2(\tau-1)\big(\gamma_u^2\sigma_u^2 A + \gamma_v^2\sigma_v^2 B\big) + 12\tau^2(\tau-1)\big(\gamma_u^2 A\|\nabla_u F_i(u_0, v_0)\|^2 + \gamma_v^2 B\|\nabla_v F_i(u_0, v_0)\|^2\big).$$

Proof. If $\tau = 1$, there is nothing to prove, so we assume $\tau > 1$. Let $\Delta_k := A\|u_k - u_0\|^2 + B\|v_k - v_0\|^2$ and denote by $\mathcal{F}_k$ the sigma-algebra generated by $(u_k, v_k)$. Further, let $\mathbb{E}_k[\cdot] = \mathbb{E}[\cdot\,|\,\mathcal{F}_k]$. We use the inequality $2\alpha\beta \le \alpha^2/\delta^2 + \delta^2\beta^2$ for reals $\alpha, \beta, \delta$ to get,
$$\begin{aligned} \mathbb{E}_k\|u_{k+1} - u_0\|^2 &\le \Big(1 + \frac{1}{\tau-1}\Big)\|u_k - u_0\|^2 + \tau\gamma_u^2\,\mathbb{E}_k\|G_{i,u}(u_k, v_k, z_k)\|^2 \\ &\le \Big(1 + \frac{1}{\tau-1}\Big)\|u_k - u_0\|^2 + \tau\gamma_u^2\sigma_u^2 + \tau\gamma_u^2\|\nabla_u F_i(u_k, v_k)\|^2 \\ &\le \Big(1 + \frac{1}{\tau-1}\Big)\|u_k - u_0\|^2 + \tau\gamma_u^2\sigma_u^2 + 3\tau\gamma_u^2\|\nabla_u F_i(u_0, v_0)\|^2 + 3\tau\gamma_u^2 L_u^2\|u_k - u_0\|^2 + 3\tau\gamma_u^2 L_{uv}^2\|v_k - v_0\|^2, \end{aligned}$$
where the last inequality followed from the squared triangle inequality (from adding and subtracting $\nabla_u F_i(u_0, v_k)$ and $\nabla_u F_i(u_0, v_0)$) followed by smoothness. Together with the analogous inequality for the $v$-update, we get,
$$\mathbb{E}_k[\Delta_{k+1}] \le \Big(1 + \frac{1}{\tau-1}\Big)\Delta_k + A'\|u_k - u_0\|^2 + B'\|v_k - v_0\|^2 + C,$$
where we have
$$A' = 3\tau\big(\gamma_u^2 L_u^2 A + \gamma_v^2\chi^2 L_u L_v B\big), \quad \text{and,} \quad B' = 3\tau\big(\gamma_v^2 L_v^2 B + \gamma_u^2\chi^2 L_u L_v A\big), \quad \text{and,}$$
$$C = \tau\gamma_u^2\sigma_u^2 A + \tau\gamma_v^2\sigma_v^2 B + 3\tau\gamma_u^2 A\|\nabla_u F_i(u_0, v_0)\|^2 + 3\tau\gamma_v^2 B\|\nabla_v F_i(u_0, v_0)\|^2.$$
Next, we apply Lemma 20 to get that $A' \le A/\tau$ and $B' \le B/\tau$ under the assumed conditions on the learning rates; this allows us to write the right hand side completely in terms of $\Delta_k$ and unroll the recurrence. The intuition behind Lemma 20 is as follows. Ignoring the dependence on $\tau, L_u, L_v, \chi$ for a moment, if $\gamma_u$ and $\gamma_v$ are both $O(\eta)$, then $A', B'$ are both $O(\eta^3)$, while $A$ and $B$ are $O(\eta)$. Thus, making $\eta$ small enough should suffice to get $A' \le O(A)$ and $B' \le O(B)$. Concretely, Lemma 20 gives
$$\mathbb{E}[\Delta_{k+1}] \le \Big(1 + \frac{2}{\tau-1}\Big)\mathbb{E}[\Delta_k] + C,$$
and unrolling this recurrence gives for $k \le \tau-1$
$$\mathbb{E}[\Delta_k] \le \sum_{j=0}^{k-1}\Big(1 + \frac{2}{\tau-1}\Big)^j C \le \frac{\tau-1}{2}\Big(1 + \frac{2}{\tau-1}\Big)^k C \le \frac{\tau-1}{2}\Big(1 + \frac{2}{\tau-1}\Big)^{\tau-1} C \le \frac{e^2}{2}(\tau-1)C,$$
where we used $(1 + 1/\alpha)^\alpha \le e$ for all $\alpha > 0$. Summing over $k$ and using the numerical bound $e^2 < 8$ completes the proof.
Remark 9. We only invoked the partial gradient diversity assumption (Assumption 3) at the iterates $(u^{(t)}, V^{(t)})$; therefore, it suffices if the assumption only holds at the iterates $(u^{(t)}, V^{(t)})$ generated by FedSim, rather than at all $(u, V)$.
Algorithm 5 FedAlt: Alternating updates of shared and personalized parameters
Input: Initial iterates $u^{(0)}, V^{(0)}$, number of communication rounds $T$, number of devices per round $m$, numbers of local updates $\tau_u, \tau_v$, local step sizes $\gamma_u, \gamma_v$.
1: for $t = 0, 1, \cdots, T-1$ do
2:   Sample $m$ devices from $[n]$ without replacement into $S^{(t)}$
3:   for each selected device $i \in S^{(t)}$ in parallel do
4:     Initialize $v^{(t)}_{i,0} = v^{(t)}_i$
5:     for $k = 0, \cdots, \tau_v - 1$ do    ▷ Update personalized parameters
6:       Sample data $z^{(t)}_{i,k} \sim \mathcal{D}_i$
7:       $v^{(t)}_{i,k+1} = v^{(t)}_{i,k} - \gamma_v\, G_{i,v}\big(u^{(t)}, v^{(t)}_{i,k}, z^{(t)}_{i,k}\big)$
8:     Update $v^{(t+1)}_i = v^{(t)}_{i,\tau_v}$
9:     Initialize $u^{(t)}_{i,0} = u^{(t)}$
10:    for $k = 0, \cdots, \tau_u - 1$ do    ▷ Update shared parameters
11:      $u^{(t)}_{i,k+1} = u^{(t)}_{i,k} - \gamma_u\, G_{i,u}\big(u^{(t)}_{i,k}, v^{(t+1)}_i, z^{(t)}_{i,k}\big)$
12:    Update $u^{(t+1)}_i = u^{(t)}_{i,\tau_u}$
13:  Update $u^{(t+1)} = \frac{\sum_{i\in S^{(t)}}\alpha_i u^{(t+1)}_i}{\sum_{i\in S^{(t)}}\alpha_i}$ at the server with secure aggregation
14: return $u^{(T)}, v^{(T)}_1, \cdots, v^{(T)}_n$
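Below is a hedged Python sketch of the LocalAlt update of Algorithm 5, using the same hypothetical `grad_u`/`grad_v`/`sample` interface as the FedSim sketch given after Algorithm 4; it is an illustration, not the experimental implementation.

```python
def local_alt(u, v_i, grad_u, grad_v, sample, tau_v, tau_u, gamma_u, gamma_v):
    """LocalAlt: first tau_v SGD steps on the personal parameters v_i with the
    received shared parameters u frozen, then tau_u SGD steps on a local copy
    of the shared parameters with the new v_i frozen. Only u_i is sent back
    to the server; v_i stays on the device."""
    for _ in range(tau_v):                      # v-step (personal parameters)
        v_i = v_i - gamma_v * grad_v(u, v_i, sample())
    u_i = u
    for _ in range(tau_u):                      # u-step (shared parameters)
        u_i = u_i - gamma_u * grad_u(u_i, v_i, sample())
    return u_i, v_i
```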
A.3 CONVERGENCE ANALYSIS OF FEDALT
We give the full form of FedAlt in Algorithm 5 for the general case of unequal $\alpha_i$'s but focus on $\alpha_i = 1/n$ for the analysis. For convenience, we reiterate Theorem 2 below. Recall the definitions
$$\Delta^{(t)}_u = \big\|\nabla_u F\big(u^{(t)}, V^{(t+1)}\big)\big\|^2, \quad \text{and,} \quad \Delta^{(t)}_v = \frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2.$$

Theorem 2 (Convergence of FedAlt). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedAlt are chosen as $\gamma_u = \eta/(L_u\tau_u)$ and $\gamma_v = \eta/(L_v\tau_v)$, with
$$\eta \le \min\left\{\frac{1}{24(1+\rho^2)},\; \frac{m}{128\chi^2(n-m)},\; \sqrt{\frac{m}{\chi^2 n}}\right\}.$$
Then, ignoring absolute constants, we have
$$\frac{1}{T}\sum_{t=0}^{T-1}\left(\frac{1}{L_u}\mathbb{E}\big[\Delta^{(t)}_u\big] + \frac{m}{nL_v}\mathbb{E}\big[\Delta^{(t)}_v\big]\right) \le \frac{\Delta F_0}{\eta T} + \eta\left(\frac{\sigma_u^2+\delta^2\big(1-\frac{m}{n}\big)}{mL_u} + \frac{\sigma_v^2}{L_v}\cdot\frac{m+\chi^2(n-m)}{n}\right) + \eta^2\left(\frac{\sigma_u^2+\delta^2}{L_u}\big(1-\tau_u^{-1}\big) + \frac{\sigma_v^2 m}{L_v n}\big(1-\tau_v^{-1}\big) + \frac{\chi^2\sigma_v^2}{L_v}\right).$$
Before proving the theorem, we have the corollary with optimized learning rates.

Corollary 10. Consider the setting of Theorem 2 and fix some $\varepsilon > 0$. Suppose we set $\gamma_u = \eta/(\tau L_u)$ and $\gamma_v = \eta/(\tau L_v)$ such that, ignoring absolute constants,
$$\eta = \left(\frac{\sigma_v^2}{\varepsilon L_v}\Big(\frac{m}{n} + \chi^2(1-m/n)\Big)\right)^{-1} \wedge \left(\frac{\sigma_u^2 + \delta^2(1-m/n)}{mL_u\varepsilon}\right)^{-1} \wedge \left(\frac{\sigma_u^2 + \delta^2}{L_u\varepsilon}\big(1-\tau_u^{-1}\big)\right)^{-1/2} \wedge \left(\frac{\sigma_v^2 m}{L_v n\varepsilon}\big(1-\tau_v^{-1}\big)\right)^{-1/2} \wedge \frac{1}{1+\rho^2} \wedge \frac{m}{\chi^2(n-m)} \wedge \sqrt{\frac{m}{\chi^2 n}}.$$
Then, we have,
$$\frac{1}{T}\sum_{t=0}^{T-1}\left(\frac{1}{L_u}\mathbb{E}\big\|\nabla_u F\big(u^{(t)}, \tilde V^{(t)}\big)\big\|^2 + \frac{m}{L_v n^2}\sum_{i=1}^{n}\mathbb{E}\big\|\nabla_v F_i\big(u^{(t)}, v^{(t)}_i\big)\big\|^2\right) \le \varepsilon$$
after $T$ communication rounds, where, ignoring absolute constants,
$$T \le \frac{\Delta F_0}{\varepsilon^2}\left(\frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{mL_u} + \frac{\sigma_v^2}{L_v}\Big(\frac{m}{n} + \chi^2\Big(1-\frac{m}{n}\Big)\Big)\right) + \frac{\Delta F_0}{\varepsilon^{3/2}}\left(\frac{\sigma_u+\delta}{\sqrt{L_u}}\sqrt{1-\tau_u^{-1}} + \frac{\sigma_v}{\sqrt{L_v}}\sqrt{1-\tau_v^{-1}}\right) + \frac{\Delta F_0}{\varepsilon}\left(1 + \rho^2 + \chi^2\Big(\frac{n}{m}-1\Big) + \sqrt{\frac{\chi^2 n}{m}}\right).$$

Proof. We get the bound by balancing terms from the bound of Theorem 2. The choice of $\eta$ ensures that all the $O(\eta)$ and $O(\eta^2)$ terms are at most $O(\varepsilon)$. Finally, the smallest number of communication rounds to make the left hand side of the bound of Theorem 2 smaller than $\varepsilon$ is $\Delta F_0/(\eta\varepsilon)$.
We are now ready to prove Theorem 2.
Proof of Theorem 2. The proof mainly applies the smoothness upper bound to write out a descent condition with suitably small noise terms. We start with some notation.
We introduce the notation $\tilde\Delta^{(t)}_u$ as the analogue of $\Delta^{(t)}_u$ with the virtual variable $\tilde V^{(t+1)}$:
$$\tilde\Delta^{(t)}_u = \big\|\nabla_u F\big(u^{(t)}, \tilde V^{(t+1)}\big)\big\|^2.$$

Notation. Let $\mathcal{F}^{(t)}$ denote the $\sigma$-algebra generated by $\big(u^{(t)}, V^{(t)}\big)$ and denote $\mathbb{E}_t[\cdot] = \mathbb{E}[\cdot\,|\,\mathcal{F}^{(t)}]$. For all devices, including those not selected in each round, we define virtual sequences $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$ as the SGD updates in Algorithm 5 for all devices regardless of whether they are selected. For the selected devices $i \in S^{(t)}$, we have $v^{(t)}_{i,k} = \tilde v^{(t)}_{i,k}$ and $u^{(t)}_{i,k} = \tilde u^{(t)}_{i,k}$. Note now that the random variables $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$ are independent of the device selection $S^{(t)}$. Finally, we have that the updates for the selected devices $i \in S^{(t)}$ are given by
$$v^{(t+1)}_i = v^{(t)}_i - \gamma_v\sum_{k=0}^{\tau_v-1}G_{i,v}\big(u^{(t)}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k}\big),$$
and the server update is given by
$$u^{(t+1)} = u^{(t)} - \frac{\gamma_u}{m}\sum_{i\in S^{(t)}}\sum_{k=0}^{\tau_u-1}G_{i,u}\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,\tau_v}, z^{(t)}_{i,k}\big).$$
Proof Outline and the Challenge of Dependent Random Variables. We start with
$$F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big) = F\big(u^{(t)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t)}\big) + F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t+1)}\big). \qquad (14)$$
The first line corresponds to the effect of the $v$-step and the second line to the $u$-step. The former is easy to handle with standard techniques that rely on the smoothness of $F\big(u^{(t)}, \cdot\big)$. The latter is more challenging. In particular, the smoothness bound for the $u$-step gives us
$$F\big(u^{(t+1)}, V^{(t+1)}\big) - F\big(u^{(t)}, V^{(t+1)}\big) \le \big\langle \nabla_u F\big(u^{(t)}, V^{(t+1)}\big),\, u^{(t+1)} - u^{(t)}\big\rangle + \frac{L_u}{2}\big\|u^{(t+1)} - u^{(t)}\big\|^2.$$
The standard proofs of convergence of stochastic gradient methods rely on the fact that we can take an expectation w.r.t. the sampling $S^{(t)}$ of devices for the first-order term. However, both $V^{(t+1)}$ and $u^{(t+1)}$ depend on the sampling $S^{(t)}$ of devices. Therefore, we cannot directly take an expectation with respect to the sampling of devices in $S^{(t)}$.

Virtual Full Participation to Circumvent Dependent Random Variables. The crux of the proof lies in replacing $V^{(t+1)}$ in the analysis of the $u$-step with the virtual iterate $\tilde V^{(t+1)}$ so as to move all the dependence of the $u$-step on $S^{(t)}$ to the $u^{(t+1)}$ term. This allows us to take an expectation; it remains to carefully bound the resulting error terms.
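For concreteness, the virtual iterate can be written out as follows (a sketch consistent with the update rules above, not stated in this exact form in the text): every device $i \in [n]$, selected or not, is assigned
$$\tilde v^{(t+1)}_i = v^{(t)}_i - \gamma_v\sum_{k=0}^{\tau_v-1}G_{i,v}\big(u^{(t)}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k}\big), \qquad \text{so that} \qquad v^{(t+1)}_i = \begin{cases}\tilde v^{(t+1)}_i, & i \in S^{(t)},\\ v^{(t)}_i, & i \notin S^{(t)}.\end{cases}$$
Since $\tilde V^{(t+1)} = (\tilde v^{(t+1)}_1, \ldots, \tilde v^{(t+1)}_n)$ is built from all devices' local data rather than from the sampled subset, it is independent of $S^{(t)}$, which is what allows the expectation over device sampling to be taken in the $u$-step.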
Finally, we will arrive at a bound of

1. What is the focus and contribution of the paper on personalized federated learning?
2. What are the strengths of the proposed approach, particularly in terms of partial model personalization and optimization algorithms?
3. What are the weaknesses of the paper, especially regarding the assumption of smoothness in the theoretical analysis?
4. Do you have any concerns about the limitation of the proposed method in real-world scenarios?
5. How do the experimental results demonstrate the effectiveness of the proposed algorithm compared to other personalization methods?

Summary Of The Paper
This paper proposed a personalized FL framework with partial model personalization. It separates the model parameters into two parts, a shared model and a personalized model, and optimizes them in an interleaving manner. The authors proposed two optimization algorithms, named FedSim and FedAlt, with partial client participation. They also analyzed the algorithms' convergence rates on general smooth nonconvex functions. In experiments, they consider 3 different model split methods: 1. input layer as personalized model, the rest as shared model; 2. output layer as personalized model, the rest as shared model; 3. adding adapters as the personalized model. The experiments conducted on NLP and vision tasks also demonstrate that the proposed algorithms outperform other personalization methods.
Review
Pros:
The authors proposed a unified framework for personalized federated learning. To the best of my knowledge, they give the first analysis of this framework and its algorithms on smooth nonconvex functions.
The proposed algorithms can outperform existing personalization algorithms on NLP and vision tasks.
Cons:
The personalized-layer idea is not new; as the authors mentioned, it has been proposed in prior works (Arivazhagan et al., 2019; Collins et al., 2021).
I think the established theory does not perfectly fit the experimental models, since they assume smoothness in theory, but in practice the motivating example is usually a non-smooth ReLU network. Hence I think it would be great if they could perform the analysis on a ReLU network, perhaps in a very simple one-layer setting.
ICLR | Title
Federated Learning with Partial Model Personalization
Abstract
We propose and analyze a general framework of federated learning with partial model personalization. Compared with full model personalization, partial model personalization relies on domain knowledge to select a small portion of the model to personalize, thus imposing a much smaller on-device memory footprint. We propose two federated optimization algorithms for training partially personalized models, where the shared and personal parameters are updated either simultaneously or alternately on each device, but only the shared parameters are communicated and aggregated at the server. We give convergence analyses of both algorithms for minimizing smooth nonconvex functions, providing theoretical support of them for training deep learning models. Our experiments on real-world image and text datasets demonstrate that (a) partial model personalization can obtain most of the benefit of full model personalization with a small fraction of personalized parameters, and, (b) the alternating update algorithm often outperforms the simultaneous update algorithm.
1 INTRODUCTION
Federated Learning (McMahan et al., 2017) has emerged as a powerful paradigm for distributed and privacy-preserving machine learning over a large number of edge devices (see Kairouz et al., 2021, and references therein). We consider a typical setting of Federated Learning (FL) with n devices (also called clients), where each device i has a training dataset of Ni samples zi,1, · · · , zi,Ni . Let w ∈ Rd represent the parameters of a (supervised) learning model and fi(w, zi,j) be the loss of the model on the training example zi,j . Then the loss function associated with device i is Fi(w) = (1/Ni) ∑Ni j=1 fi(w, zi,j). A common objective of FL is to find model parameters that minimize the weighted average loss across all devices (without transferring the datasets):
minimize w n∑ i=1 αiFi(w), (1)
where weights αi are nonnegative and satisfy ∑n i=1 αi = 1. A common practice is to choose the
weights as αi = Ni/N where N = ∑n k=1Nk, which corresponds to minimizing the unweighted
average loss across all samples from the n devices: (1/N) ∑n i=1 ∑Ni j=1 fi(w, zi,j).
The main motivation for minimizing the average loss over all devices is to leverage their collective statistical power for better generalization, because the amount of data on each device can be very limited. This is especially important for training modern deep learning models with large number of parameters. However, this argument assumes that the datasets from different devices are sampled from the same, or at least very similar, distributions. Given the diverse characteristics of the users and increasing trend of personalized on-device services, such an i.i.d. assumption may not hold in practice. Thus, the one-model-fits-all formulation in (1) can be less effective and even undesirable.
Several approaches have been proposed for personalized FL, including ones based on multi-task learning (Smith et al., 2017), meta learning (Fallah et al., 2020), and proximal methods (Dinh et al., 2020; Li et al., 2021). A simple formulation that captures their main idea is
minimize w0,{wi}ni=1 n∑ i=1 αi ( Fi(wi) + λi 2 ‖wi − w0‖2 ) , (2)
where wi for i = 1, . . . , n are personalized model parameters at the devices, w0 is a reference model maintained by the server, and the λi’s are regularization weights that control the extent of personalization. A major disadvantage of the formulation (2), which we call full model personalization, is that it requires twice the memory footprint of the model, wi and w0 at each device, which severely limits the size of trainable models. On the other hand, the flexibility of full model personalization can be unnecessary. Modern deep learning models are composed of many simple functional units and are typically organized into layers or a more general interconnected architecture. Personalizing the “right” components, selected with domain knowledge, may result in a substantial benefit with only a small increase in memory footprint. In addition, partial model personalization can be less susceptible to “catastrophic forgetting” (McCloskey & Cohen, 1989), where a large model finetuned on a small local dataset forgets the original (non-personalized) task, leading to a degradation of test performance.
We propose a framework for FL with partial model personalization. Specifically, we partition the model parameters into two groups: the shared parameters u ∈ Rd0 and the personal parameters vi ∈ Rdi for i = 1, . . . , n. The full model on device i is denoted as wi = (u, vi), and the local loss function is Fi(u, vi) = (1/Ni) ∑Ni i=1 fi ( (u, vi), zi,j ) . Our goal is to solve the optimization problem
minimize u, {vi}ni=1 n∑ i=1 αiFi(u, vi). (3)
Notice that the dimensions of vi can be different across the devices, allowing the personal components of the model to have different number of parameters or even different architecture.
We investigate two FL algorithms for solving problem (3): FedSim, a simultaneous update algorithm and FedAlt, an alternating update algorithm. Both algorithms follow the standard FL protocol. During each round, the server randomly selects a subset of the devices for update and broadcasts the current global version of the shared parameters to devices in the subset. Each selected device then performs one or more steps of (stochastic) gradient descent to update both the shared parameters and the personal parameters, and sends the updated shared parameters to the server for aggregation. The updated personal parameters are kept local at the device to serve as the initial states when the device is selected for another update. In FedSim, the shared and personal parameters are updated simultaneously during each local iteration. In FedAlt, the devices first update the personal parameters with the received shared parameters fixed and then update the shared parameters with the new personal parameters fixed. We provide convergence analysis and empirical evaluation of both methods.
The main contributions of this paper are summarized as follows:
• We propose a general framework of FL with partial model personalization, which relies on domain knowledge to select a small portion of the model to personalize, thus imposing a much smaller memory footprint on the devices than full model personalization. This framework unifies existing work on personalized FL and allows arbitrary partitioning of deep learning models.
• We provide convergence guarantees for the FedSim and FedAlt methods in the general (smooth) nonconvex setting. While both methods have appeared in the literature previously, they are either used without convergence analysis or with results on limited settings (assuming convexity or full participation) Our analysis provides theoretical support for the general nonconvex setting with partial participation. The analysis of FedAlt with partial participation is especially challenging and we develop a novel technique of virtual full participation to overcome the difficulties.
• We conduct extensive experiments on image classification and text prediction tasks, exploring different model personalization strategies for each task, and comparing with several strong baselines. Our results demonstrate that partial model personalization can obtain most of the benefit of full model personalization with a small fraction of personalized parameters, and FedAlt often outperforms FedSim.
• Our experiments also reveal that personalization (full or partial) may lead to worse performance for some devices, despite improving the average. Typical forms of regularization such as weight decay and dropout do not mitigate this issue. This phenomenon has been overlooked in previous work and calls for future research to improve both performance and fairness.
Related work. Specific forms of partial model personalization have been considered in previous works. Liang et al. (2019) propose to personalize the input layers to learn a personalized representation per-device (Figure 1a), while Arivazhagan et al. (2019) and Collins et al. (2021) propose to personalize the output layer while learning a shared representation with the input layers (Figure 1b). Both FedSim and FedAlt have appeared in the literature before, but the scope of their convergence analysis is limited. Specifically, Liang et al. (2019), Arivazhagan et al. (2019) and Hanzely et al. (2021) use FedSim, while Collins et al. (2021) and Singhal et al. (2021) proposed variants of FedAlt. Notably, Hanzely et al. (2021) establish convergence of FedSim with full device participation in the convex and non-convex cases, while Collins et al. (2021) prove the linear convergence of FedAlt for a two-layer linear network where Fi(·, vi) and Fi(u, ·) are both convex for fixed vi and u respectively. We analyze both FedSim and FedAlt in the general nonconvex case with partial device participation, hence addressing a more general and practical setting.
While we primarily consider the problem (3) in the context of partial model personalization, it can serve as a general formulation that covers many other problems. Hanzely et al. (2021) demonstrate that various full model personalization formulations based on regularization (Dinh et al., 2020; Li et al., 2021), including (2), as well as interpolation (Deng et al., 2020; Mansour et al., 2020) are special cases of this problem. The rates of convergence we prove in §3 are competitive with or better than those in previous works for full model personalization methods in the non-convex case.
2 PARTIALLY PERSONALIZED MODELS
Modern deep learning models all have a multi-layer architecture. While a complete understanding of why they work so well is still out of reach, a general insight is that the lower layers (close to the input) are mostly responsible for feature extraction and the upper layers (close to the output) focus on complex pattern recognition. Depending on the application scenarios and domain knowledge, we may personalize either the input layer(s) or the output layer(s) of the model; see Figure 1.
In Figure 1c, the input layers are split horizontally into two parts, one shared and the other personal. They process different chunks of the input vector and their outputs are concatenated before feeding
Algorithm 1 Federated Learning with Partial Model Personalization (FedSim / FedAlt)
Input: initial states u(0), {v(0)i }ni=1, number of rounds T , number of devices per round m 1: for t = 0, 1, · · · , T − 1 do 2: server randomly samples m devices as S(t) ⊂ {1, . . . , n} 3: server broadcasts u(t) to each device in S(t) 4: for each device i ∈ S(t) in parallel, do 5: ( u
(t+1) i , v (t+1) i
) = LocalSim / LocalAlt ( u(t), v
(t) i
) . v
(t+1) i = v (t) i if i /∈ S(t)
6: send u(t+1)i back to server 7: server updates u(t+1) = 1m ∑ i∈S(t) u (t+1) i
to the upper layers of the model. As demonstrated in (Bui et al., 2019), this partitioning can help protect user-specific private features (input 2 in Figure 1c) as the corresponding feature embedding (through vi) are personalized and kept local at the device. Similar architectures have also been proposed in context-dependent language models (e.g., Mikolov & Zweig, 2012).
A more structured partitioning is illustrated in Figure 2a, where a typical transformer layer (Vaswani et al., 2017) is augmented with two adapters. This architecture is proposed by Houlsby et al. (2019) for finetuning large language models. Similar residual adapter modules are proposed by Rebuffi et al. (2017) for image classification models in the context of multi-task learning. In the context of FL, we treat the adapter parameters as personal and the rest of the model parameters as shared.
Figure 2b shows a generalized additive model, where the outputs of two separate models, one shared and the other personalized, are fused to generate a prediction. Suppose the shared model is h(u, ·) and the personal model is hi(vi, ·). For regression tasks with samples zi,j = (xi,j , yi,j), where xi,j is the input and yi,j ∈ Rp is the output, we let Fi(u, vi) = (1/Ni) ∑Ni j=1 fi ( (u, vi), zi,j ) with
fi ( (u, vi), zi,j ) = ‖yi,j − h(u, xi,j)− hi(vi, xi,j)‖2 .
In this special case, the personal model fits the residual of the shared model and vice-versa (Agarwal et al., 2020). For classification tasks, h(u, ·) and hi(vi, ·) produce probability distributions over multiple classes. We can use the cross-entropy loss between yi,j and a convex combination of the two model outputs: θh(u, xi,j) + (1− θ)hi(vi, xi,j), where θ ∈ (0, 1) is a learnable parameter. Finally, we can cast the formulation (2) of full model personalization as a special case of (3) by letting
u← w0, vi ← wi, Fi(u, vi)← Fi(vi) + (λi/2)‖vi − u‖2. Many other formulations of full personalization can be reduced to (3); see Hanzely et al. (2021).
3 ALGORITHMS AND CONVERGENCE ANALYSIS
In this section, we present and analyze two FL algorithms for solving problem (3). To simplify presentation, we denote V = (v1, . . . , vn) ∈ Rd1+...+dn and focus on the case of αi = 1/n, i.e.,
minimizeu, V F (u, V ) := 1 n ∑n i=1 Fi(u, vi). (4)
This is equivalent to (3) if we scale Fi by nαi, thus does not lose generality. Moreover, we consider more general local functions Fi(u, vi) = Ez∼Di [fi((u, vi), z)], where Di is the local distribution. The FedSim and FedAlt algorithms share a common outer-loop description given in Algorithm 1. They differ only in the local update procedures LocalSim and LocalAlt, which are given in Algorithm 2 and Algorithm 3 respectively. In the two local update procedures, ∇̃u and ∇̃v represent stochastic gradients with respect to w and vi respectively. In LocalSim (Algorithm 2), the personal variables vi and local version of the shared parameters ui are updated simultaneously, with their (stochastic) partial gradients evaluated at the same point. In LocalAlt (Algorithm 3), the personal parameters are updated first with the received shared parameters fixed, then the shared parameters are updated with the new personal parameters fixed. They are analogous to the classical Jacobi update and Gauss-Seidel update in numerical linear algebra (e.g., Demmel, 1997, §6.5).
In order to analyze the convergence of the two algorithms, we make the following assumptions.
Algorithm 2 LocalSim ( u, vi ) Input: number of steps τ , step sizes γv and γu
1: initialize vi,0 = vi 2: initialize ui,0 = u 3: for k = 0, 1, · · · , τ − 1 do 4: vi,k+1 = vi,k − γv∇̃vFi ( ui,k, vi,k
) 5: ui,k+1 = ui,k − γu∇̃uFi ( ui,k, vi,k
) 6: update v+i = vi,τ 7: update u+i = ui,τ 8: return ( u+i , v + i )
Algorithm 3 LocalAlt ( u, vi ) Input: number of steps τv, τu, step sizes γv, γu
1: initialize vi,0 = vi 2: for k = 0, 1, · · · , τv−1 do 3: vi,k+1 = vi,k − γv∇̃vFi ( u, vi,k ) 4: update v+i = vi,τv and initialize ui,0 = u 5: for k = 0, 1, · · · , τu−1 do 6: ui,k+1 = ui,k − γu∇̃uFi ( ui,k, v + i
) 7: update u+i = ui,τu 8: return ( u+i , v + i
) Assumption 1 (Smoothness). The function Fi is continuously differentiable for each i = 1, . . . , n, and there exist constants Lu, Lv , Luv and Lvu such that for each i = 1, . . . , n, it holds that
• ∇uFi(u, vi) is Lu-Lipschitz with respect to u and Luv-Lipschitz with respect to vi; • ∇vFi(u, vi) is Lv-Lipschitz with respect to vi and Lvu-Lipschitz with respect to u.
Due to the definition of F (u, V ) in (4), it is easy to verify that∇uF (u, V ) has Lipschitz constant Lu with respect to u, Luv/ √ n with respect to V , and Luv/n with respect to any vi. We also define
χ := max{Luv, Lvu} /√
LuLv, (5) which measures the relative cross-sensitivity of ∇uFi with respect to vi and ∇vFi with respect to u. Assumption 2 (Bounded Variance). The stochastic gradients in Algorithm 2 and Algorithm 3 are unbiased and have bounded variance. That is, for all u and vi,
E [ ∇̃uFi(u, vi) ] = ∇uFi(u, vi), E [ ∇̃vFi(u, vi) ] = ∇vFi(u, vi) .
Furthermore, there exist constants σu and σv such that E [∥∥∇̃uFi(u, vi)−∇uFi(u, vi)∥∥2] ≤ σ2u , E[∥∥∇̃vFi(u, vi)−∇vFi(u, vi)∥∥2] ≤ σ2v .
We can view ∇uFi(u, vi), when i is randomly sampled from {1, . . . , n}, as a stochastic partial gradient of F (u, V ) with respect to u. The following assumption imposes a variance bound. Assumption 3 (Partial Gradient Diversity). There exist δ ≥ 0 and ρ ≥ 0 such that for all u and V ,
1 n ∑n i=1 ∥∥∇uFi(u, vi)−∇uF (u, V )∥∥2 ≤ δ2 + ρ2∥∥∇uF (u, V )∥∥2 . With ρ = 0, this assumption is similar to a constant variance bound on the stochastic gradient ∇uFi(u, vi); with ρ > 0, it allows the variance to grow with the norm of the full gradient.
Throughout this paper, we assume F is bounded below by F ? and denote ∆F0 = F ( u(0), V (0) ) −F ?. Further, we use shorthand V (t) = (v(t)1 , . . . , v (t) n ) and
∆ (t) u = ∥∥∇uF (u(t), V (t))∥∥2 , and ∆(t)v = 1n∑ni=1∥∥∇vFi(u(t), v(t)i )∥∥2 . For smooth and nonconvex loss functions Fi, we obtain convergence in expectation to a stationary point of F if the expected values of these two sequences converge to zero.
We first present our main result for FedSim (Algorithm 1 with LocalSim), proved in Appendix A.2. Theorem 1 (Convergence of FedSim). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedSim are chosen as γu = η/(Luτ) and γv = η/(Lvτ) with
η ≤ min {
1 12(1+χ2)(1+ρ2) ,
√ m/n
196(1−τ−1)(1+χ2)(1+ρ2)
} .
Then, ignoring absolute constants, we have
1 T ∑T−1 t=0 ( 1 Lu E [ ∆ (t) u ] + mnLv E [ ∆ (t) v ]) ≤ ∆F0ηT + η(1 + χ 2) ( σ2u+δ 2(1−mn ) mLu + mσ2v nLv ) + η2(1− τ−1)(1 + χ2) ( σ2u+δ 2
Lu + σ2v Lv
) . (6)
The left-hand side of (6) is the average over time of a weighted sum of E [ ∆ (t) u ] and E [ ∆ (t) v ] . The right-hand side contains three terms of order O(1/(ηT )), O(η) and O(η2) respectively. We can minimize the right-hand side by optimizing over η. By considering special cases such as σ2u = σ 2 v = 0 and m = n, some terms on the right-hand side disappear and we can obtain improved rates. Table 1 shows the results in several different regimes along with the optimal choices of η.
Challenge in Analyzing FedAlt. We now turn to FedAlt. Note that the personal parameters are updated only for the m selected devices in S(t) in each round t. Specifically,
v (t+1) i =
{ v
(t) i − γv ∑τv k=0 ∇̃vFi ( u(t), v (t) i,k ) if i ∈ S(t), v (t) i if i /∈ S(t).
Consequently, the vector V (t+1) of personal parameters depends on the random variable S(t). This makes it challenging to analyze the u-update steps of FedAlt because they are performed after V (t+1) is generated (as opposed to simultaneously in FedSim). When we take expectations with respect to the sampling of S(t) in analyzing the u-updates, V (t+1) becomes a dependent random variable, which prevents standard proof techniques from going through (see details in Appendix A.3).
We develop a novel technique called virtual full participation to overcome this challenge. Specifically, we define a virtual vector Ṽ (t+1), which is the result if every device were to perform local v-updates. It is independent of the sampling of S(t) and we can derive a convergence rate for related quantities. We carefully translate this rate from the virtual Ṽ (t+1) to the actual V (t) to get the following result. Theorem 2 (Convergence of FedAlt). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedAlt are chosen as γu = η/(Luτu) and γv = η/(Lvτv), with
η ≤ min {
1 24(1+ρ2) , m 128χ2(n−m) , √ m χ2n } .
Then, ignoring absolute constants, we have
1 T ∑T−1 t=0 ( 1 Lu E [ ∆ (t) u ] + mnLv E [ ∆ (t) v ]) ≤ ∆F0
ηT + η
( σ2u+δ
2(1−mn ) mLu + σ2v Lv m+χ2(n−m) n ) + η2 ( σ2u+δ 2
Lu (1− τ−1u ) + σ2vm Lvn (1− τ−1v ) + χ2σ2v Lv
) .
The proof of Theorem 2 is given in Appendix A.3. Similar to the results for FedSim, we can choose η to minimize the above upper bound to obtain the best convergence rate, as summarized in Table 1.
Comparing FedSim and FedAlt. Table 1 shows that both FedSim and FedAlt exhibit the standard O(1/ √ T ) rate in the general case. Comparing the constants in their rates, we identify two regimes in terms of problem parameters. The regime where FedAlt dominates FedSim is characterized by σ2v Lv ( 1− 2mn ) < σ2u+δ 2(1−m/n) mLu . A practically relevant scenario where this is true is σ2v ≈ 0 and σ2u ≈ 0 from using large or full batch on a small number of samples per device. Here, the rate of FedAlt is better than FedSim by a factor of (1 + χ2), indicating that the rate of FedAlt is less affected by the coupling between the personal and shared parameters. Our experiments in §4 corroborate the practical relevance of this regime.
The rates from Table 1 also apply for full personalization schemes without convergence guarantees in the nonconvex case (Agarwal et al., 2020; Mansour et al., 2020; Li et al., 2021). Our rates are better than those of (Dinh et al., 2020) for their pFedMe objective.
4 EXPERIMENTS
In this section, we experimentally compare different model personalization schemes using FedAlt and FedSim as well as no model personalization. Details about the experiments, hyperparameters and additional results are provided in the appendices. The code to reproduce the experimental results will be publicly released.
Datasets, Tasks and Models. We consider three learning tasks; they are summarized in Table 2.
(a) Next-Word Prediction: We use the StackOverflow dataset, where each device corresponds to the questions and answers of one user on stackoverflow.com. This task is representative of mobile keyboard predictions. We use a 4-layer transformer model (Vaswani et al., 2017).
(b) Visual Landmark Recognition: We use the GLDv2 dataset (Weyand et al., 2020; Hsu et al., 2020), a large-scale dataset with real images of global landmarks. Each device corresponds to a Wikipedia contributor who uploaded images. This task resembles a scenario where smartphone users capture images of landmarks while traveling. We use a ResNet-18 (He et al., 2016) model with group norm instead of batch norm (Hsieh et al., 2020) and images are reshaped to 224× 224.
(c) Character Recognition: We use the EMNIST dataset (Cohen et al., 2017), where the input is a 28 × 28 grayscale image of a handwritten character and the output is its label (0-9, a-z, A-Z). Each device corresponds to a writer of the character. We use a ResNet-18 model, with input and output layers modified to accommodate the smaller image size and number of classes.
All models are trained with the cross entropy loss and evaluated with top-1 accuracy of classification.
Model Partitioning for Partial Personalization. We consider three partitioning schemes.
(a) Input layer personalization: This architecture learns a personalized representation per-device by personalizing the input layer, while the rest of the model is shared (Figure 1a). For the transformer, we use the first transformer layer in place of the embedding layer.
(b) Output layer personalization: This architecture learns a shared representation but personalizes the prediction layer (Figure 1b). For the transformer model, we use the last transformer layer instead of the output layer.
(c) Adapter personalization: In this architecture, each device adds lightweight personalized adapter modules between specific layers of a shared model (Figure 2a). We use the transformer adapters of Houlsby et al. (2019) and for ResNet-18, the residual adapters of Rebuffi et al. (2017).
Algorithms and Experimental Pipeline. For full model personalization, we consider three baselines: (i) Finetune, where each device finetunes (using SGD locally) its personal full model starting from a learned common model, (ii) Ditto (Li et al., 2021), which is finetuning with `2 regularization, and, (iii) pFedMe (Dinh et al., 2020) which minimizes the objective (2). All methods, including FedSim, FedAlt and the baselines are initialized with a global model trained with FedAvg.
4.1 EXPERIMENTAL RESULTS
Partial personalization nearly matches full personalization and can sometimes outperform it. Table 3 shows the average test accuracy across all devices of different FL algorithms. We see that on the StackOverflow dataset, output layer personalization (25.05%) makes up nearly 90% of the gap between the non-personalized baseline (23.82%) and full personalization (25.21%). On EMNIST, adapter personalization exactly matches full personalization. Most surprisingly, on GLDv2, adapter personalization outperforms full personalization by 3.5pp (percentage points).
This success of adapter personalization can be explained partly by the nature of GLDv2. On average, the training data on each device contains 25 classes out of a possible 2028 while the testing data contains 10 classes not seen in its own training data. These unseen classes account for nearly 23% of all testing data. Personalizing the full model is susceptible to “forgetting” the original task (Kirkpatrick et al., 2017), making it harder to get these unseen classes right. Such catastrophic forgetting is worse when finetuning on a very small local dataset, as we often have in FL. On the other hand, personalizing the adapters does not suffer as much from this issue (Rebuffi et al., 2017).
Partial personalization only requires a fraction of the parameters to be personalized. Figure 3 shows that the number of personalized parameters required to compete with full model personalization is rather small. On StackOverflow, personalizing 1.2% of the parameters with adapters captures 72% of the accuracy boost from personalizing all 5.7M parameters; this can be improved to nearly 90% by personalizing 14% of the parameters (output layer). Likewise, we match full personalization on EMNIST and exceed it on GLDv2 with adapters, personalizing 11.5-12.5% of parameters.
The best personalized architecture is model and task dependent. Table 3 shows that personalizing the final transformer layer (denoted as “Output Layer”) achieves the best performance for StackOverflow, while the residual adapter achieves the best performance for GLDv2 and EMNIST. This shows that the approach of personalizing a fixed model part, as in several past works, is suboptimal. Our framework allows for the use of domain knowledge to determine customized personalization.
Finetuning is competitive with other full personalization methods. Full finetuning matches the performance of pFedMe and Ditto on StackOverflow and EMNIST. On GLDv2, however, pFedMe outperforms finetuning by 0.07pp, but is still 3.5pp worse than adapter personalization.
FedAlt outperforms FedSim for partial personalization. If the optimization problem (3) were convex, we would expect similar performance from FedAlt and FedSim. However, with nonconvex optimization problems such as the ones considered here, the choice of the optimization algorithm often affects the quality of the solution found. We see from Table 4 that FedAlt is almost always better than FedSim by a small margin, e.g., 0.08pp for StackOverflow/Adapter and 0.3pp for GLDv2/Input Layer. FedSim in turn yields a higher accuracy than simply finetuning the personalized part of the model, by a large margin, e.g., 0.12pp for StackOverflow/Output Layer and 2.55pp for GLDv2/Adapter.
4.2 EFFECTS OF PERSONALIZATION ON PER-DEVICE GENERALIZATION
Personalization hurts the test accuracy on some devices. Figure 4 shows the change in training and test accuracy of each device, compared with a non-personalized model trained by FedAvg. We see that personalization leads to an improvement in training accuracy across all devices, but a reduction in test accuracy on some of the devices over the non-personalized baseline. In particular, devices whose testing performance is hurt by personalization are mostly on the left side of the plot, meaning that they have relatively small number of training samples. On the other hand, many devices with the most improved test accuracy also appear on the left side, signaling the benefit of personalization. Therefore, there is a large variation of results for devices with few samples.
Additional experiments (see Appendix C) show that using ℓ2 regularization, as in (2), or weight decay does not mitigate this issue. In particular, increasing regularization strength (less personalization) can reduce the spread of per-device accuracy, but only leads to a worse average accuracy that is close to using a common model. Other simple strategies such as dropout also do not fix this issue.
An ideal personalized method would boost performance on most of the devices without causing a reduction in (test) accuracy on any device. Realizing this goal calls for a sound statistical analysis for personalized FL and may require sophisticated methods for local performance diagnosis and more structured regularization. These are very promising directions for future research.
5 DISCUSSION
In addition to a much smaller memory footprint than full model personalization and being less susceptible to catastrophic forgetting, partial model personalization has other advantages. For example, it reduces the amount of communication between the server and the devices because only the shared parameters are transmitted. While the communication saving may not be significant (especially when the personal parameters are only a small fraction of the full model), communicating only the shared parameters may have significant implications for privacy. Intuitively, it can be harder to infer private information from partial model information. This is especially the case if the more sensitive features of the data are processed through personal components of the model that are kept local at the devices. For example, we speculate that less noise needs to be added to the communicated parameters in order to satisfy differential privacy requirements (Abadi et al., 2016). This is a very promising direction for future research.
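As a concrete illustration of the communication pattern described above, the following minimal sketch (an illustration under stated assumptions, not the authors' implementation) shows a server-side aggregation step in which only the shared parameters are averaged; personal parameters never leave the devices. In a real deployment this average would be computed under secure aggregation.

```python
# Weighted average of shared-parameter dictionaries returned by the selected devices:
# u <- sum_i alpha_i * u_i / sum_i alpha_i. Personal parameters are not transmitted.
def aggregate_shared(shared_updates, alphas):
    total = sum(alphas)
    keys = shared_updates[0].keys()
    return {
        k: sum(a * upd[k] for a, upd in zip(alphas, shared_updates)) / total
        for k in keys
    }
```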
REPRODUCIBILITY STATEMENT
For theoretical results, we state and discuss the assumptions in Appendix A. The full proofs of all theoretical statements are also given there.
For our numerical results, we take multiple steps for reproducibility. First, we run each numerical experiment for five random seeds, and report both the mean and standard deviation over these runs. Second, we only use publicly available datasets and report the preprocessing at length in Appendix B. Third, we give the full list of hyperparameters used in our experiments in Table 8 in Appendix B. Finally, we will publicly release the code to reproduce our experimental results.
ETHICS STATEMENT
The proposed framework for partial model personalization is immediately applicable to a range of practical federated learning applications on edge devices, such as text prediction and speech recognition. One of the key considerations of federated learning is privacy. Partial model personalization maintains all the privacy benefits of current non-personalized federated learning systems. Indeed, our approach is compatible with techniques to enhance privacy such as differential privacy and secure aggregation. We also speculate that partial personalization has the potential for further reducing the privacy footprint; an investigation of this subject is beyond the scope of this work and is an interesting direction for future work.
On the flip side, we also observed in experiments that personalization (both full and partial) leads to a reduction in test performance on some of the devices. This has important implications for fairness, and calls for further research into the statistical aspects of personalization, performance diagnostics, as well as more nuanced definitions of fairness in federated learning.
Appendix
Table of Contents
A Convergence Analysis: Full Proofs
  A.1 Review of Setup and Assumptions
  A.2 Convergence Analysis of FedSim
  A.3 Convergence Analysis of FedAlt
  A.4 Technical Lemmas
B Experiments: Detailed Setup and Hyperparameters
  B.1 Datasets, Tasks and Models
  B.2 Experimental Pipeline and Baselines
  B.3 Hyperparameters and Evaluation Details
C Experiments: Additional Results
  C.1 Ablation: Final Finetuning for FedAlt and FedSim
  C.2 Effect of Personalization on Per-Device Generalization
  C.3 Partial Personalization for Stateless Devices
A CONVERGENCE ANALYSIS: FULL PROOFS
We give the full convergence proofs here. The outline of this section is:
• §A.1: Review of setup and assumptions;
• §A.2: Convergence analysis of FedSim and the full proof of Theorem 1;
• §A.3: Convergence analysis of FedAlt and the full proof of Theorem 2;
• §A.4: Technical lemmas used in the analysis.
A.1 REVIEW OF SETUP AND ASSUMPTIONS
We consider a federated learning system with $n$ devices. Let the loss function on device $i$ be $F_i(u, v_i)$, where $u \in \mathbb{R}^{d_0}$ denotes the shared parameters across all devices and $v_i \in \mathbb{R}^{d_i}$ denotes the personal parameters at device $i$. We aim to minimize the function
$$ F(u, V) := \frac{1}{n} \sum_{i=1}^{n} F_i(u, v_i) \,, \qquad (7) $$
where $V = (v_1, \cdots, v_n)$ is a concatenation of all the personalized parameters. This is a special case of (3) with the equal per-device weights, i.e., $\alpha_i = 1/n$. Recall that we assume that $F$ is bounded from below by $F^\star$.
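As a concrete illustration of the objective in (7), the following minimal sketch (not the authors' code) evaluates $F(u, V)$ given per-device loss functions; the names `device_losses`, `u`, and `V` are illustrative assumptions.

```python
# A minimal sketch of the objective in (7): the unweighted average of the
# per-device losses F_i(u, v_i). `device_losses[i]` is an assumed callable
# implementing F_i; `V[i]` holds the personal parameters of device i.
def global_objective(u, V, device_losses):
    n = len(device_losses)
    return sum(F_i(u, v_i) for F_i, v_i in zip(device_losses, V)) / n
```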
For convenience, we reiterate Assumptions 1, 2 and 3 from the main paper as Assumptions 1′, 2′ and 3′ below respectively, with some additional comments and discussion.

Assumption 1′ (Smoothness). For each device $i = 1, \ldots, n$, the objective $F_i$ is smooth, i.e., it is continuously differentiable and,
(a) $u \mapsto \nabla_u F_i(u, v_i)$ is $L_u$-Lipschitz for all $v_i$,
(b) $v_i \mapsto \nabla_v F_i(u, v_i)$ is $L_v$-Lipschitz for all $u$,
(c) $v_i \mapsto \nabla_u F_i(u, v_i)$ is $L_{uv}$-Lipschitz for all $u$, and,
(d) $u \mapsto \nabla_v F_i(u, v_i)$ is $L_{vu}$-Lipschitz for all $v_i$.
Further, we assume for some $\chi > 0$ that
$$ \max\{L_{uv}, L_{vu}\} \le \chi \sqrt{L_u L_v} \,. $$
The smoothness assumption is a standard one. We can assume without loss of generality that the cross-Lipschitz coefficients $L_{uv}, L_{vu}$ are equal. Indeed, if $F_i$ is twice continuously differentiable, we can show that $L_{uv}, L_{vu}$ are both equal to the operator norm $\|\nabla^2_{uv} F_i(u, v_i)\|_{\mathrm{op}}$ of the mixed second derivative matrix. Further, $\chi$ denotes the extent to which $u$ impacts the gradient of $v_i$ and vice-versa.
Our next assumption is about the variance of the stochastic gradients, and is standard in the literature. Compared to the main paper, we adopt a more precise notation for stochastic gradients.

Assumption 2′ (Bounded Variance). Let $\mathcal{D}_i$ denote a probability distribution over the data space $\mathcal{Z}$ on device $i$. There exist functions $G_{i,u}$ and $G_{i,v}$ which are unbiased estimates of $\nabla_u F_i$ and $\nabla_v F_i$ respectively. That is, for all $u, v_i$:
$$ \mathbb{E}_{z\sim\mathcal{D}_i}[G_{i,u}(u, v_i, z)] = \nabla_u F_i(u, v_i) \,, \quad\text{and}\quad \mathbb{E}_{z\sim\mathcal{D}_i}[G_{i,v}(u, v_i, z)] = \nabla_v F_i(u, v_i) \,. $$
Furthermore, the variance of these estimators is at most $\sigma_u^2$ and $\sigma_v^2$ respectively. That is,
$$ \mathbb{E}_{z\sim\mathcal{D}_i}\big\|G_{i,u}(u, v_i, z) - \nabla_u F_i(u, v_i)\big\|^2 \le \sigma_u^2 \,, \qquad \mathbb{E}_{z\sim\mathcal{D}_i}\big\|G_{i,v}(u, v_i, z) - \nabla_v F_i(u, v_i)\big\|^2 \le \sigma_v^2 \,. $$
In practice, one usually has $G_{i,u}(u, v_i, z) = \nabla_u f_i((u, v_i), z)$, which is the gradient of the loss on datapoint $z \sim \mathcal{D}_i$ under the model $(u, v_i)$, and similarly for $G_{i,v}$. Finally, we make a gradient diversity assumption.

Assumption 3′ (Partial Gradient Diversity). There exist $\delta \ge 0$ and $\rho \ge 0$ such that for all $u$ and $V$,
$$ \frac{1}{n}\sum_{i=1}^{n} \big\|\nabla_u F_i(u, v_i) - \nabla_u F(u, V)\big\|^2 \le \delta^2 + \rho^2\big\|\nabla_u F(u, V)\big\|^2 \,. \qquad (8) $$
Algorithm 4 FedSim: Simultaneous update of shared and personal parameters
Input: Initial iterates $u^{(0)}, V^{(0)}$, number of communication rounds $T$, number of devices per round $m$, number of local updates $\tau$, local step sizes $\gamma_u, \gamma_v$.
1: for $t = 0, 1, \cdots, T-1$ do
2:   Sample $m$ devices from $[n]$ without replacement in $S^{(t)}$
3:   for each selected device $i \in S^{(t)}$ in parallel do
4:     Initialize $v^{(t)}_{i,0} = v^{(t)}_i$ and $u^{(t)}_{i,0} = u^{(t)}$
5:     for $k = 0, \cdots, \tau-1$ do ▷ Update all parameters jointly
6:       Sample data $z^{(t)}_{i,k} \sim \mathcal{D}_i$
7:       $v^{(t)}_{i,k+1} = v^{(t)}_{i,k} - \gamma_v G_{i,v}(u^{(t)}_{i,k}, v^{(t)}_{i,k}, z^{(t)}_{i,k})$
8:       $u^{(t)}_{i,k+1} = u^{(t)}_{i,k} - \gamma_u G_{i,u}(u^{(t)}_{i,k}, v^{(t)}_{i,k}, z^{(t)}_{i,k})$
9:     Update $v^{(t+1)}_i = v^{(t)}_{i,\tau}$ and $u^{(t+1)}_i = u^{(t)}_{i,\tau}$
10:  Update $u^{(t+1)} = \sum_{i\in S^{(t)}} \alpha_i u^{(t+1)}_i \big/ \sum_{i\in S^{(t)}} \alpha_i$ at the server with secure aggregation
11: return $u^{(T)}, v^{(T)}_1, \cdots, v^{(T)}_n$
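For concreteness, the following is a minimal NumPy sketch of one FedSim round for flat parameter vectors; the oracle names `grad_u`, `grad_v`, and `sample` stand in for the stochastic gradients $G_{i,u}, G_{i,v}$ and the data sampling $z \sim \mathcal{D}_i$, and are assumptions of this sketch rather than part of the paper's code. Secure aggregation is abstracted away as a plain average over the selected devices (the $\alpha_i = 1/n$ case).

```python
import numpy as np

def fedsim_round(u, V, grad_u, grad_v, sample, m, tau, gamma_u, gamma_v, rng):
    """One round of Algorithm 4 with flat numpy parameter vectors (a sketch)."""
    n = len(V)
    selected = rng.choice(n, size=m, replace=False)   # sample S^(t) without replacement
    u_locals = []
    for i in selected:
        ui, vi = u.copy(), V[i].copy()
        for _ in range(tau):                          # tau joint local steps
            z = sample(i)
            gu, gv = grad_u(i, ui, vi, z), grad_v(i, ui, vi, z)
            vi = vi - gamma_v * gv                    # simultaneous update: both gradients
            ui = ui - gamma_u * gu                    # are evaluated at the same iterate
        V[i] = vi                                     # personal parameters stay on-device
        u_locals.append(ui)
    return np.mean(u_locals, axis=0), V               # server averages the shared parameters
```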
This assumption is analogous to the bounded variance assumption (Assumption 2′), but with the stochasticity coming from the sampling of devices. It characterizes how much local steps on one device help or hurt convergence globally. Similar gradient diversity assumptions are often used for analyzing non-personalized federated learning (Koloskova et al., 2020; Karimireddy et al., 2020). Finally, it suffices for the partial gradient diversity assumption to only hold at the iterates $(u^{(t)}, V^{(t)})$ generated by either FedSim or FedAlt.
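As a small numerical illustration of inequality (8) (not part of the paper), the sketch below evaluates the gap between its two sides for a given matrix of per-device gradients $\nabla_u F_i(u, v_i)$ and candidate constants $\delta$ and $\rho$; the array layout and names are assumptions.

```python
import numpy as np

def diversity_gap(per_device_grads, delta, rho):
    """LHS - RHS of (8); non-positive when the inequality holds at this point.

    per_device_grads: (n, d) array whose i-th row is grad_u F_i(u, v_i).
    """
    g_bar = per_device_grads.mean(axis=0)                        # grad_u F(u, V)
    lhs = np.mean(np.sum((per_device_grads - g_bar) ** 2, axis=1))
    rhs = delta ** 2 + rho ** 2 * np.sum(g_bar ** 2)
    return lhs - rhs
```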
A.2 CONVERGENCE ANALYSIS OF FEDSIM
We give the full form of FedSim in Algorithm 4 for the general case of unequal $\alpha_i$'s but focus on $\alpha_i = 1/n$ for the analysis. In order to simplify presentation, we denote $V^{(t)} = (v^{(t)}_1, \ldots, v^{(t)}_n)$ and define the following shorthand for gradient terms
$$ \Delta_u^{(t)} = \big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 \,, \quad\text{and}\quad \Delta_v^{(t)} = \frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v^{(t)}_i)\big\|^2 \,. $$
For convenience, we restate Theorem 1 from the main paper.

Theorem 1 (Convergence of FedSim). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedSim are chosen as $\gamma_u = \eta/(L_u\tau)$ and $\gamma_v = \eta/(L_v\tau)$ with
$$ \eta \le \min\left\{ \frac{1}{12(1+\chi^2)(1+\rho^2)} \,,\ \frac{\sqrt{m/n}}{196(1-\tau^{-1})(1+\chi^2)(1+\rho^2)} \right\} . $$
Then, ignoring absolute constants, we have
$$ \frac{1}{T}\sum_{t=0}^{T-1}\left( \frac{1}{L_u}\mathbb{E}\big[\Delta_u^{(t)}\big] + \frac{m}{nL_v}\mathbb{E}\big[\Delta_v^{(t)}\big] \right) \le \frac{\Delta F_0}{\eta T} + \eta(1+\chi^2)\left( \frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{mL_u} + \frac{m\sigma_v^2}{nL_v} \right) + \eta^2(1-\tau^{-1})(1+\chi^2)\left( \frac{\sigma_u^2+\delta^2}{L_u} + \frac{\sigma_v^2}{L_v} \right) . \qquad (6) $$
Before proving the theorem, we give the following corollary with optimized learning rates.

Corollary 3. Consider the setting of Theorem 1 and let $\varepsilon > 0$ be given. Suppose we set the learning rates $\gamma_u = \eta/(\tau L_u)$ and $\gamma_v = \eta/(\tau L_v)$, where (ignoring absolute constants),
$$ \eta = \frac{\varepsilon}{\Big( \frac{\delta^2}{L_u}\big(1-\frac{m}{n}\big) + \frac{\sigma_u^2}{L_u} + \frac{\sigma_v^2 m}{L_u n} \Big)(1+\chi^2)} \;\wedge\; \left( \frac{\varepsilon}{\Big( \frac{\delta^2}{L_u} \vee \frac{\sigma_u^2}{L_u} \vee \frac{\sigma_v^2}{L_v} \Big)(1-\tau^{-1})(1+\chi^2)} \right)^{1/2} \;\wedge\; \frac{1}{(1+\chi^2)(1+\rho^2)} \;\wedge\; \left( \frac{m/n}{(1-\tau^{-1})(1+\rho^2)(1+\chi^2)} \right)^{1/2} . $$
We have,
$$ \frac{1}{T}\sum_{t=0}^{T-1}\left( \frac{1}{L_u}\mathbb{E}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 + \frac{m}{L_v n^2}\sum_{i=1}^{n}\mathbb{E}\big\|\nabla_v F_i(u^{(t)}, v^{(t)}_i)\big\|^2 \right) \le \varepsilon $$
after $T$ communication rounds, where, ignoring absolute constants,
$$ T \le \frac{\Delta F_0(1+\chi^2)}{\varepsilon^2}\left( \frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{mL_u} + \frac{m\sigma_v^2}{nL_v} \right) + \frac{\Delta F_0\sqrt{(1-\tau^{-1})(1+\chi^2)}}{\varepsilon^{3/2}}\left( \frac{\sigma_u + \delta}{\sqrt{L_u}} + \frac{\sigma_v}{\sqrt{L_v}} \right) + \frac{\Delta F_0}{\varepsilon}(1+\chi^2)(1+\rho^2)\left( 1 + \sqrt{\frac{(1-\tau^{-1})n}{m}} \right) . $$
Proof. The choice of the constant $\eta$ ensures that each of the constant terms in the bound of Theorem 1 is $O(\varepsilon)$. The final rate is now $O\big(\Delta F_0/(\eta\varepsilon)\big)$; plugging in the value of $\eta$ completes the proof.
We now prove Theorem 1.
Proof of Theorem 1. The proof mainly applies the smoothness upper bound to write out a descent condition with suitably small noise terms. We start with some notation.
Notation. Let $\mathcal{F}^{(t)}$ denote the $\sigma$-algebra generated by $(u^{(t)}, V^{(t)})$ and denote $\mathbb{E}_t[\cdot] = \mathbb{E}[\cdot\,|\,\mathcal{F}^{(t)}]$. For all devices, including those not selected in each round, we define virtual sequences $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$ as the SGD updates in Algorithm 4 for all devices regardless of whether they are selected. For the selected devices $i \in S^{(t)}$, we have $(u^{(t)}_{i,k}, v^{(t)}_{i,k}) = (\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k})$. Note now that the random variables $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$ are independent of the device selection $S^{(t)}$. The updates for the devices $i \in S^{(t)}$ are given by
$$ v^{(t+1)}_i = v^{(t)}_i - \gamma_v \sum_{k=0}^{\tau-1} G_{i,v}\big( \tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k} \big) \,, $$
and the server update is given by
$$ u^{(t+1)} = u^{(t)} - \frac{\gamma_u}{m} \sum_{i\in S^{(t)}} \sum_{k=0}^{\tau-1} G_{i,u}\big( \tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k} \big) \,. \qquad (9) $$
Proof Outline. We use the smoothness of $F_i$, more precisely Lemma 16, to obtain
$$ F(u^{(t+1)}, V^{(t+1)}) - F(u^{(t)}, V^{(t)}) \le \underbrace{\big\langle \nabla_u F(u^{(t)}, V^{(t)}),\, u^{(t+1)} - u^{(t)} \big\rangle}_{T_{1,u}} + \underbrace{\frac{1}{n}\sum_{i=1}^{n}\big\langle \nabla_v F_i(u^{(t)}, v^{(t)}_i),\, v^{(t+1)}_i - v^{(t)}_i \big\rangle}_{T_{1,v}} + \underbrace{\frac{L_u(1+\chi^2)}{2}\big\|u^{(t+1)} - u^{(t)}\big\|^2}_{T_{2,u}} + \underbrace{\frac{1}{n}\sum_{i=1}^{n}\frac{L_v(1+\chi^2)}{2}\big\|v^{(t+1)}_i - v^{(t)}_i\big\|^2}_{T_{2,v}} \,. \qquad (10) $$
Our goal will be to bound each of these terms to get a descent condition from each step of the form
$$ \mathbb{E}_t\Big[ F(u^{(t+1)}, V^{(t+1)}) - F(u^{(t)}, V^{(t)}) \Big] \le -\frac{\gamma_u\tau}{8}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 - \frac{\gamma_v\tau m}{8n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v^{(t)}_i)\big\|^2 + O(\gamma_u^2 + \gamma_v^2) \,, $$
where the $O(\gamma_u^2 + \gamma_v^2)$ terms are controlled using the bounded variance and gradient diversity assumptions. Telescoping this descent condition gives the final bound.
Main Proof. Towards this end, we prove non-asymptotic bounds on each of the terms $T_{1,v}, T_{1,u}, T_{2,v}$ and $T_{2,u}$, in Claims 4 to 7 respectively. We then invoke them to get the bound
$$ \begin{aligned} \mathbb{E}_t\Big[ F(u^{(t+1)}, V^{(t+1)}) - F(u^{(t)}, V^{(t)}) \Big] \le\; & -\frac{\gamma_u\tau}{4}\Delta_u^{(t)} - \frac{\gamma_v\tau m}{4n}\Delta_v^{(t)} \\ & + \frac{L_u(1+\chi^2)\gamma_u^2\tau^2}{2}\left( \frac{\sigma_u^2 + 12\delta^2(1-m/n)}{m} \right) + \frac{L_v(1+\chi^2)\gamma_v^2\tau^2\sigma_v^2 m}{2n} \\ & + \frac{2}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\|u^{(t)}_{i,k} - u^{(t)}\big\|^2 \Big( L_u^2\gamma_u + \frac{m}{n}\chi^2 L_u L_v \gamma_v \Big) \\ & + \frac{2}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\|v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 \Big( \frac{m}{n}L_v^2\gamma_v + \chi^2 L_u L_v \gamma_u \Big) \,. \end{aligned} \qquad (11) $$
Note that we simplified some constants appearing on the gradient norm terms using
$$ \gamma_u \le \big( 12 L_u (1+\chi^2)(1+\rho^2)\tau \big)^{-1} \quad\text{and}\quad \gamma_v \le \big( 6 L_v (1+\chi^2)\tau \big)^{-1} \,. $$
Our next step is to bound the last two lines of (11) with Lemma 8 and invoke the gradient diversity assumption (Assumption 3′) as
$$ \frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_u F_i(u^{(t)}, v^{(t)}_i)\big\|^2 \le \delta^2 + (1+\rho^2)\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 \,. $$
This gives, after plugging in the learning rates and further simplifying the constants,
$$ \begin{aligned} \mathbb{E}_t\Big[ F(u^{(t+1)}, V^{(t+1)}) - F(u^{(t)}, V^{(t)}) \Big] \le\; & -\frac{\eta\,\Delta_u^{(t)}}{8L_u} - \frac{\eta m\,\Delta_v^{(t)}}{8L_v n} \\ & + \eta^2(1+\chi^2)\left( \frac{\sigma_u^2}{2L_u} + \frac{m\sigma_v^2}{nL_v} + \frac{6\delta^2}{L_u m}\Big(1-\frac{m}{n}\Big) \right) \\ & + \eta^3(1+\chi^2)(1-\tau^{-1})\left( \frac{24\delta^2}{L_u} + \frac{4\sigma_u^2}{L_u} + \frac{4\sigma_v^2}{L_v} \right) . \end{aligned} $$
Taking full expectation, telescoping the series over $t = 0, \cdots, T-1$ and rearranging the resulting terms give the desired bound in Theorem 1.
Claim 4 (Bounding $T_{1,v}$). Let $T_{1,v}$ be defined as in (10). We have,
$$ \mathbb{E}_t[T_{1,v}] \le -\frac{\gamma_v\tau m}{2n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v^{(t)}_i)\big\|^2 + \frac{\gamma_v m}{n^2}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ \chi^2 L_u L_v \big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + L_v^2\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 \Big] \,. $$

Proof. Define $T_{1,v,i}$ to be the contribution of the $i$th term to $T_{1,v}$. For $i \notin S^{(t)}$, we have that $T_{1,v,i} = 0$, since $v^{(t+1)}_i = v^{(t)}_i$. On the other hand, for $i \in S^{(t)}$, we use the unbiasedness of the gradient estimator $G_{i,v}$ and the independence of $z^{(t)}_{i,k}$ from $u^{(t)}_{i,k}, v^{(t)}_{i,k}$ to get
$$ \begin{aligned} \mathbb{E}_t[T_{1,v,i}] &= -\gamma_v \sum_{k=0}^{\tau-1} \mathbb{E}_t\big\langle \nabla_v F_i(u^{(t)}, v^{(t)}_i),\, \nabla_v F_i(u^{(t)}_{i,k}, v^{(t)}_{i,k}) \big\rangle = -\gamma_v \sum_{k=0}^{\tau-1} \mathbb{E}_t\big\langle \nabla_v F_i(u^{(t)}, v^{(t)}_i),\, \nabla_v F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) \big\rangle \\ &= -\gamma_v\tau \big\|\nabla_v F_i(u^{(t)}, v^{(t)}_i)\big\|^2 - \gamma_v \sum_{k=0}^{\tau-1} \mathbb{E}_t\big\langle \nabla_v F_i(u^{(t)}, v^{(t)}_i),\, \nabla_v F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) - \nabla_v F_i(u^{(t)}, v^{(t)}_i) \big\rangle \\ &\le -\frac{\gamma_v\tau}{2}\big\|\nabla_v F_i(u^{(t)}, v^{(t)}_i)\big\|^2 + \frac{\gamma_v}{2}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\| \nabla_v F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) - \nabla_v F_i(u^{(t)}, v^{(t)}_i) \big\|^2 \,. \end{aligned} \qquad (12) $$
For the second term, we add and subtract $\nabla_v F_i(u^{(t)}, \tilde v^{(t)}_{i,k})$ and use smoothness to get
$$ \big\| \nabla_v F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) - \nabla_v F_i(u^{(t)}, v^{(t)}_i) \big\|^2 \le 2\chi^2 L_u L_v \big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + 2L_v^2\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 \,. \qquad (13) $$
Since the right-hand side of this bound is independent of $S^{(t)}$, we get,
$$ \mathbb{E}_t[T_{1,v}] = \frac{m}{n}\,\mathbb{E}_t\Bigg[ \frac{1}{m}\sum_{i\in S^{(t)}} T_{1,v,i} \Bigg] = \frac{m}{n^2}\sum_{i=1}^{n}\mathbb{E}_t[T_{1,v,i}] \,, $$
and plugging in (12) and (13) completes the proof.
Claim 5 (Bounding $T_{1,u}$). Consider $T_{1,u}$ defined in (10). We have the bound,
$$ \mathbb{E}_t[T_{1,u}] \le -\frac{\gamma_u\tau}{2}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 + \frac{\gamma_u}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ L_u^2\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 \Big] \,. $$

Proof. Due to the independence of $S^{(t)}$ from $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$, we have,
$$ \mathbb{E}_t\big[ u^{(t+1)} - u^{(t)} \big] = -\gamma_u\,\mathbb{E}_t\Bigg[ \frac{1}{m}\sum_{i\in S^{(t)}}\sum_{k=0}^{\tau-1}\nabla_u F_i(u^{(t)}_{i,k}, v^{(t)}_{i,k}) \Bigg] = -\gamma_u\,\mathbb{E}_t\Bigg[ \frac{1}{m}\sum_{i\in S^{(t)}}\sum_{k=0}^{\tau-1}\nabla_u F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) \Bigg] = -\frac{\gamma_u}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big[ \nabla_u F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) \big] \,, $$
where the last equality took an expectation over $S^{(t)}$, which is independent of $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$. Now, using the same sequence of arguments as Claim 4, we have,
$$ \begin{aligned} \mathbb{E}_t\big\langle \nabla_u F(u^{(t)}, V^{(t)}),\, u^{(t+1)} - u^{(t)} \big\rangle &= -\gamma_u\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big\langle \nabla_u F(u^{(t)}, V^{(t)}),\, \frac{1}{n}\sum_{i=1}^{n}\nabla_u F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) \Big\rangle \\ &\le -\frac{\gamma_u\tau}{2}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 + \frac{\gamma_u}{2}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big\| \frac{1}{n}\sum_{i=1}^{n}\nabla_u F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) - \nabla_u F(u^{(t)}, V^{(t)}) \Big\|^2 \\ &\overset{(*)}{\le} -\frac{\gamma_u\tau}{2}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 + \frac{\gamma_u}{2n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\| \nabla_u F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) - \nabla_u F_i(u^{(t)}, v^{(t)}_i) \big\|^2 \\ &\le -\frac{\gamma_u\tau}{2}\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 + \frac{\gamma_u}{n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ L_u^2\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + L_{uv}^2\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 \Big] \,, \end{aligned} $$
where the inequality $(*)$ follows from Jensen's inequality as
$$ \Big\| \frac{1}{n}\sum_{i=1}^{n}\nabla_u F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) - \nabla_u F(u^{(t)}, V^{(t)}) \Big\|^2 \le \frac{1}{n}\sum_{i=1}^{n}\big\| \nabla_u F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) - \nabla_u F_i(u^{(t)}, v^{(t)}_i) \big\|^2 \,. $$
Claim 6 (Bounding $T_{2,v}$). Consider $T_{2,v}$ as defined in (10). We have the bound,
$$ \mathbb{E}_t[T_{2,v}] \le \frac{3L_v(1+\chi^2)\gamma_v^2\tau^2 m}{2n^2}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v^{(t)}_i)\big\|^2 + \frac{L_v(1+\chi^2)\gamma_v^2\tau^2 m\sigma_v^2}{2n} + \frac{3L_v(1+\chi^2)\gamma_v^2\tau m}{2n^2}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ L_v^2\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 + \chi^2 L_u L_v\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 \Big] \,. $$

Proof. We start with
$$ \begin{aligned} \mathbb{E}_t\big\|\tilde v^{(t)}_{i,\tau} - v^{(t)}_i\big\|^2 &= \gamma_v^2\,\mathbb{E}_t\Bigg\| \sum_{k=0}^{\tau-1} G_{i,v}\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k}\big) \Bigg\|^2 \le \gamma_v^2\tau\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\| G_{i,v}\big(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k}\big) \big\|^2 \\ &\le \gamma_v^2\tau^2\sigma_v^2 + \gamma_v^2\tau\sum_{k=0}^{\tau-1}\mathbb{E}_t\big\| \nabla_v F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) \big\|^2 \\ &\le \gamma_v^2\tau^2\sigma_v^2 + 3\gamma_v^2\tau^2\big\|\nabla_v F_i(u^{(t)}, v^{(t)}_i)\big\|^2 + 3\gamma_v^2\tau\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ L_v^2\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 + \chi^2 L_u L_v\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 \Big] \,. \end{aligned} $$
Using (a) $v^{(t+1)}_i = \tilde v^{(t)}_{i,\tau}$ for $i \in S^{(t)}$, and, (b) $S^{(t)}$ is independent from $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$, we get,
$$ \mathbb{E}_t[T_{2,v}] = \frac{L_v(1+\chi^2)m}{2n}\,\mathbb{E}_t\Bigg[ \frac{1}{m}\sum_{i\in S^{(t)}}\big\|\tilde v^{(t)}_{i,\tau} - v^{(t)}_i\big\|^2 \Bigg] \le \frac{L_v(1+\chi^2)m}{2n^2}\sum_{i=1}^{n}\mathbb{E}_t\big\|\tilde v^{(t)}_{i,\tau} - v^{(t)}_i\big\|^2 \,. $$
Plugging in the bound on $\mathbb{E}_t\big\|\tilde v^{(t)}_{i,\tau} - v^{(t)}_i\big\|^2$ completes the proof.
Claim 7 (Bounding $T_{2,u}$). Consider $T_{2,u}$ as defined in (10). We have,
$$ \mathbb{E}_t[T_{2,u}] \le \frac{L_u(1+\chi^2)\gamma_u^2\tau^2}{2m}\Big( \sigma_u^2 + 12\delta^2\Big(1-\frac{m}{n}\Big) \Big) + 3L_u(1+\chi^2)\gamma_u^2\tau^2(1+\rho^2)\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 + \frac{3L_u(1+\chi^2)\gamma_u^2\tau}{2n}\sum_{i=1}^{n}\sum_{k=0}^{\tau-1}\mathbb{E}_t\Big[ L_u^2\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 \Big] \,. $$

Proof. We proceed with the first two inequalities as in the proof of Claim 6 to get
$$ \mathbb{E}_t\big\| u^{(t+1)} - u^{(t)} \big\|^2 \le \frac{\gamma_u^2\tau^2\sigma_u^2}{m} + \gamma_u^2\tau\sum_{k=0}^{\tau-1} \underbrace{\mathbb{E}_t\Bigg\| \frac{1}{m}\sum_{i\in S^{(t)}} \nabla_u F_i(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}) \Bigg\|^2}_{=:\, T_{3,k}} \,. $$
For $T_{3,k}$, (a) we add and subtract $\nabla_u F(u^{(t)}, V^{(t)})$ and $\nabla_u F_i(u^{(t)}, \tilde v^{(t)}_{i,k})$, (b) invoke the squared triangle inequality, and, (c) use smoothness to get
$$ T_{3,k} \le 6\,\mathbb{E}_t\Bigg\| \frac{1}{m}\sum_{i\in S^{(t)}} \nabla_u F_i(u^{(t)}, v^{(t)}_i) - \nabla_u F(u^{(t)}, V^{(t)}) \Bigg\|^2 + 6\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 + 3\,\mathbb{E}_t\Bigg[ \frac{1}{m}\sum_{i\in S^{(t)}}\Big( L_u^2\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 \Big) \Bigg] \,. $$
For the first term, we use the fact that $S^{(t)}$ is obtained by sampling without replacement to apply Lemma 17 together with the gradient diversity assumption to get
$$ \mathbb{E}_t\Bigg\| \frac{1}{m}\sum_{i\in S^{(t)}} \nabla_u F_i(u^{(t)}, v^{(t)}_i) - \nabla_u F(u^{(t)}, V^{(t)}) \Bigg\|^2 \le \frac{1}{m}\Big( \frac{n-m}{n-1} \Big)\frac{1}{n}\sum_{i=1}^{n}\big\| \nabla_u F_i(u^{(t)}, v^{(t)}_i) - \nabla_u F(u^{(t)}, V^{(t)}) \big\|^2 \le \frac{1}{m}\Big( \frac{n-m}{n-1} \Big)\Big( \delta^2 + \rho^2\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 \Big) \,. $$
Therefore,
$$ T_{3,k} \le \frac{12\delta^2}{m}\Big( 1 - \frac{m}{n} \Big) + 6(1+\rho^2)\big\|\nabla_u F(u^{(t)}, V^{(t)})\big\|^2 + \frac{3}{n}\sum_{i=1}^{n}\mathbb{E}_t\Big[ L_u^2\big\|\tilde u^{(t)}_{i,k} - u^{(t)}\big\|^2 + \chi^2 L_u L_v\big\|\tilde v^{(t)}_{i,k} - v^{(t)}_i\big\|^2 \Big] \,, $$
where we also used the independence between $S^{(t)}$ and $(\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k})$. Plugging this into the expression for $\mathbb{E}_t\|u^{(t+1)} - u^{(t)}\|^2$ completes the proof.
Lemma 8. Let $F_i$ satisfy Assumptions 1′-3′, and consider the iterates
$$ u_{k+1} = u_k - \gamma_u G_{i,u}(u_k, v_k, z_k) \,, \quad\text{and,}\quad v_{k+1} = v_k - \gamma_v G_{i,v}(u_k, v_k, z_k) \,, $$
for $k = 0, \cdots, \tau-1$, where $z_k \sim \mathcal{D}_i$. Suppose the learning rates satisfy $\gamma_u = c_u/(\tau L_u)$ and $\gamma_v = c_v/(\tau L_v)$ with $c_u, c_v \le 1/\sqrt{6\max\{1, \chi^{-2}\}}$. Further, define,
$$ A = \gamma_u L_u^2 + f\chi^2\gamma_v L_u L_v \,, \quad\text{and,}\quad B = f\gamma_v L_v^2 + \chi^2\gamma_u L_u L_v \,, $$
where $f \in (0, 1]$ is given. Then, we have the bound,
$$ \sum_{k=0}^{\tau-1}\mathbb{E}\Big[ A\|u_k - u_0\|^2 + B\|v_k - v_0\|^2 \Big] \le 4\tau^2(\tau-1)\big( \gamma_u^2\sigma_u^2 A + \gamma_v^2\sigma_v^2 B \big) + 12\tau^2(\tau-1)\big( \gamma_u^2 A\|\nabla_u F_i(u_0, v_0)\|^2 + \gamma_v^2 B\|\nabla_v F_i(u_0, v_0)\|^2 \big) \,. $$
Proof. If $\tau = 1$, there is nothing to prove, so we assume $\tau > 1$. Let $\Delta_k := A\|u_k - u_0\|^2 + B\|v_k - v_0\|^2$ and denote by $\mathcal{F}_k$ the sigma-algebra generated by $(u_k, v_k)$. Further, let $\mathbb{E}_k[\cdot] = \mathbb{E}[\cdot\,|\,\mathcal{F}_k]$. We use the inequality $2\alpha\beta \le \alpha^2/\delta^2 + \delta^2\beta^2$ for reals $\alpha, \beta, \delta$ to get,
$$ \begin{aligned} \mathbb{E}_k\|u_{k+1} - u_0\|^2 &\le \Big( 1 + \frac{1}{\tau-1} \Big)\|u_k - u_0\|^2 + \tau\gamma_u^2\,\mathbb{E}_k\|G_{i,u}(u_k, v_k, z_k)\|^2 \\ &\le \Big( 1 + \frac{1}{\tau-1} \Big)\|u_k - u_0\|^2 + \tau\gamma_u^2\sigma_u^2 + \tau\gamma_u^2\|\nabla_u F_i(u_k, v_k)\|^2 \\ &\le \Big( 1 + \frac{1}{\tau-1} \Big)\|u_k - u_0\|^2 + \tau\gamma_u^2\sigma_u^2 + 3\tau\gamma_u^2\|\nabla_u F_i(u_0, v_0)\|^2 + 3\tau\gamma_u^2 L_u^2\|u_k - u_0\|^2 + 3\tau\gamma_u^2 L_{uv}^2\|v_k - v_0\|^2 \,, \end{aligned} $$
where the last inequality followed from the squared triangle inequality (from adding and subtracting $\nabla_u F_i(u_0, v_k)$ and $\nabla_u F_i(u_0, v_0)$) followed by smoothness. Together with the analogous inequality for the $v$-update, we get,
$$ \mathbb{E}_k[\Delta_{k+1}] \le \Big( 1 + \frac{1}{\tau-1} \Big)\Delta_k + A'\|u_k - u_0\|^2 + B'\|v_k - v_0\|^2 + C \,, $$
where we have
$$ A' = 3\tau\big( \gamma_u^2 L_u^2 A + \gamma_v^2\chi^2 L_u L_v B \big) \,, \quad B' = 3\tau\big( \gamma_v^2 L_v^2 B + \gamma_u^2\chi^2 L_u L_v A \big) \,, \quad\text{and,} $$
$$ C = \tau\gamma_u^2\sigma_u^2 A + \tau\gamma_v^2\sigma_v^2 B + 3\tau\gamma_u^2 A\|\nabla_u F_i(u_0, v_0)\|^2 + 3\tau\gamma_v^2 B\|\nabla_v F_i(u_0, v_0)\|^2 \,. $$
Next, we apply Lemma 20 to get that $A' \le A/\tau$ and $B' \le B/\tau$ under the assumed conditions on the learning rates; this allows us to write the right-hand side completely in terms of $\Delta_k$ and unroll the recurrence. The intuition behind Lemma 20 is as follows. Ignoring the dependence on $\tau, L_u, L_v, \chi$ for a moment, if $\gamma_u$ and $\gamma_v$ are both $O(\eta)$, then $A', B'$ are both $O(\eta^3)$, while $A$ and $B$ are $O(\eta)$. Thus, making $\eta$ small enough should suffice to get $A' \le O(A)$ and $B' \le O(B)$. Concretely, Lemma 20 gives
$$ \mathbb{E}_k[\Delta_{k+1}] \le \Big( 1 + \frac{2}{\tau-1} \Big)\mathbb{E}[\Delta_k] + C \,, $$
and unrolling this recurrence gives for $k \le \tau - 1$
$$ \mathbb{E}[\Delta_k] \le \sum_{j=0}^{k-1}\Big( 1 + \frac{2}{\tau-1} \Big)^j C \le \frac{\tau-1}{2}\Big( 1 + \frac{2}{\tau-1} \Big)^k C \le \frac{\tau-1}{2}\Big( 1 + \frac{2}{\tau-1} \Big)^{\tau-1} C \le \frac{e^2}{2}(\tau-1)C \,, $$
where we used $(1 + 1/\alpha)^\alpha \le e$ for all $\alpha > 0$. Summing over $k$ and using the numerical bound $e^2 < 8$ completes the proof.
Remark 9. We only invoked the partial gradient diversity assumption (Assumption 3) at iterates $(u^{(t)}, V^{(t)})$; therefore, it suffices if the assumption only holds at iterates $(u^{(t)}, V^{(t)})$ generated by FedSim, rather than at all $(u, V)$.
Algorithm 5 FedAlt: Alternating updates of shared and personalized parameters
Input: Initial iterates $u^{(0)}, V^{(0)}$, number of communication rounds $T$, number of devices per round $m$, numbers of local updates $\tau_u, \tau_v$, local step sizes $\gamma_u, \gamma_v$.
1: for $t = 0, 1, \cdots, T-1$ do
2:   Sample $m$ devices from $[n]$ without replacement in $S^{(t)}$
3:   for each selected device $i \in S^{(t)}$ in parallel do
4:     Initialize $v^{(t)}_{i,0} = v^{(t)}_i$
5:     for $k = 0, \cdots, \tau_v - 1$ do ▷ Update personalized parameters
6:       Sample data $z^{(t)}_{i,k} \sim \mathcal{D}_i$
7:       $v^{(t)}_{i,k+1} = v^{(t)}_{i,k} - \gamma_v G_{i,v}(u^{(t)}, v^{(t)}_{i,k}, z^{(t)}_{i,k})$
8:     Update $v^{(t+1)}_i = v^{(t)}_{i,\tau_v}$
9:     Initialize $u^{(t)}_{i,0} = u^{(t)}$
10:    for $k = 0, \cdots, \tau_u - 1$ do ▷ Update shared parameters
11:      $u^{(t)}_{i,k+1} = u^{(t)}_{i,k} - \gamma_u G_{i,u}(u^{(t)}_{i,k}, v^{(t+1)}_i, z^{(t)}_{i,k})$
12:    Update $u^{(t+1)}_i = u^{(t)}_{i,\tau_u}$
13:  Update $u^{(t+1)} = \sum_{i\in S^{(t)}} \alpha_i u^{(t+1)}_i \big/ \sum_{i\in S^{(t)}} \alpha_i$ at the server with secure aggregation
14: return $u^{(T)}, v^{(T)}_1, \cdots, v^{(T)}_n$
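Analogously to the FedSim sketch above, the following minimal NumPy sketch (an illustration under the same assumptions, not the authors' code) runs one FedAlt round: the personal parameters are updated first with the shared parameters frozen, then the shared parameters are updated with the new personal parameters frozen.

```python
import numpy as np

def fedalt_round(u, V, grad_u, grad_v, sample, m, tau_u, tau_v, gamma_u, gamma_v, rng):
    """One round of Algorithm 5 with flat numpy parameter vectors (a sketch)."""
    n = len(V)
    selected = rng.choice(n, size=m, replace=False)
    u_locals = []
    for i in selected:
        vi = V[i].copy()
        for _ in range(tau_v):                        # v-step at the fixed shared u^(t)
            vi = vi - gamma_v * grad_v(i, u, vi, sample(i))
        V[i] = vi
        ui = u.copy()
        for _ in range(tau_u):                        # u-step at the new personal v_i^(t+1)
            ui = ui - gamma_u * grad_u(i, ui, vi, sample(i))
        u_locals.append(ui)
    return np.mean(u_locals, axis=0), V               # server averages the shared parameters
```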
A.3 CONVERGENCE ANALYSIS OF FEDALT
We give the full form of FedAlt in Algorithm 5 for the general case of unequal $\alpha_i$'s but focus on $\alpha_i = 1/n$ for the analysis. For convenience, we reiterate Theorem 2 below. Recall the definitions
$$ \Delta_u^{(t)} = \big\|\nabla_u F(u^{(t)}, V^{(t+1)})\big\|^2 \,, \quad\text{and,}\quad \Delta_v^{(t)} = \frac{1}{n}\sum_{i=1}^{n}\big\|\nabla_v F_i(u^{(t)}, v^{(t)}_i)\big\|^2 \,. $$

Theorem 2 (Convergence of FedAlt). Suppose Assumptions 1, 2 and 3 hold and the learning rates in FedAlt are chosen as $\gamma_u = \eta/(L_u\tau_u)$ and $\gamma_v = \eta/(L_v\tau_v)$, with
$$ \eta \le \min\left\{ \frac{1}{24(1+\rho^2)} \,,\ \frac{m}{128\chi^2(n-m)} \,,\ \sqrt{\frac{m}{\chi^2 n}} \right\} . $$
Then, ignoring absolute constants, we have
$$ \frac{1}{T}\sum_{t=0}^{T-1}\left( \frac{1}{L_u}\mathbb{E}\big[\Delta_u^{(t)}\big] + \frac{m}{nL_v}\mathbb{E}\big[\Delta_v^{(t)}\big] \right) \le \frac{\Delta F_0}{\eta T} + \eta\left( \frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{mL_u} + \frac{\sigma_v^2}{L_v}\cdot\frac{m + \chi^2(n-m)}{n} \right) + \eta^2\left( \frac{\sigma_u^2+\delta^2}{L_u}\big(1-\tau_u^{-1}\big) + \frac{\sigma_v^2 m}{L_v n}\big(1-\tau_v^{-1}\big) + \frac{\chi^2\sigma_v^2}{L_v} \right) . $$
Before proving the theorem, we have the corollary with optimized learning rates.
Corollary 10. Consider the setting of Theorem 2 and fix some $\varepsilon > 0$. Suppose we set $\gamma_u = \eta/(\tau L_u)$ and $\gamma_v = \eta/(\tau L_v)$ such that, ignoring absolute constants,
$$ \eta = \left( \frac{\sigma_v^2}{\varepsilon L_v}\Big( \frac{m}{n} + \chi^2\big(1-\tfrac{m}{n}\big) \Big) \right)^{-1} \wedge \left( \frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{m L_u \varepsilon} \right)^{-1} \wedge \left( \frac{\sigma_u^2 + \delta^2}{L_u\varepsilon}\big(1-\tau_u^{-1}\big) \right)^{-1/2} \wedge \left( \frac{\sigma_v^2 m}{L_v n\varepsilon}\big(1-\tau_v^{-1}\big) \right)^{-1/2} \wedge \frac{1}{1+\rho^2} \wedge \frac{m}{\chi^2(n-m)} \wedge \sqrt{\frac{m}{\chi^2 n}} \,. $$
Then, we have,
$$ \frac{1}{T}\sum_{t=0}^{T-1}\left( \frac{1}{L_u}\mathbb{E}\big\|\nabla_u F(u^{(t)}, \tilde V^{(t)})\big\|^2 + \frac{m}{L_v n^2}\sum_{i=1}^{n}\mathbb{E}\big\|\nabla_v F_i(u^{(t)}, v^{(t)}_i)\big\|^2 \right) \le \varepsilon $$
after $T$ communication rounds, where, ignoring absolute constants,
$$ T \le \frac{\Delta F_0}{\varepsilon^2}\left( \frac{\sigma_u^2 + \delta^2\big(1-\frac{m}{n}\big)}{mL_u} + \frac{\sigma_v^2}{L_v}\Big( \frac{m}{n} + \chi^2\big(1-\tfrac{m}{n}\big) \Big) \right) + \frac{\Delta F_0}{\varepsilon^{3/2}}\left( \frac{\sigma_u + \delta}{\sqrt{L_u}}\sqrt{1-\tau_u^{-1}} + \frac{\sigma_v}{\sqrt{L_v}}\sqrt{1-\tau_v^{-1}} \right) + \frac{\Delta F_0}{\varepsilon}\left( 1 + \rho^2 + \chi^2\Big( \frac{n}{m} - 1 \Big) + \sqrt{\frac{\chi^2 n}{m}} \right) . $$
Proof. We get the bound by balancing terms from the bound of Theorem 2. The choice of $\eta$ ensures that all the $O(\eta)$ and $O(\eta^2)$ terms are at most $O(\varepsilon)$. Finally, the smallest number of communication rounds to make the left-hand side of the bound of Theorem 2 smaller than $\varepsilon$ is $\Delta F_0/(\eta\varepsilon)$.
We are now ready to prove Theorem 2.
Proof of Theorem 2. The proof mainly applies the smoothness upper bound to write out a descent condition with suitably small noise terms. We start with some notation.
We introduce the notation $\tilde\Delta_u^{(t)}$ as the analogue of $\Delta_u^{(t)}$ with the virtual variable $\tilde V^{(t+1)}$:
$$ \tilde\Delta_u^{(t)} = \big\|\nabla_u F(u^{(t)}, \tilde V^{(t+1)})\big\|^2 \,. $$
Notation. Let $\mathcal{F}^{(t)}$ denote the $\sigma$-algebra generated by $(u^{(t)}, V^{(t)})$ and denote $\mathbb{E}_t[\cdot] = \mathbb{E}[\cdot\,|\,\mathcal{F}^{(t)}]$. For all devices, including those not selected in each round, we define virtual sequences $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$ as the SGD updates in Algorithm 5 for all devices regardless of whether they are selected. For the selected devices $i \in S^{(t)}$, we have $v^{(t)}_{i,k} = \tilde v^{(t)}_{i,k}$ and $u^{(t)}_{i,k} = \tilde u^{(t)}_{i,k}$. Note now that the random variables $\tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,k}$ are independent of the device selection $S^{(t)}$. Finally, we have that the updates for the selected devices $i \in S^{(t)}$ are given by
$$ v^{(t+1)}_i = v^{(t)}_i - \gamma_v \sum_{k=0}^{\tau_v-1} G_{i,v}\big( u^{(t)}, \tilde v^{(t)}_{i,k}, z^{(t)}_{i,k} \big) \,, $$
and the server update is given by
$$ u^{(t+1)} = u^{(t)} - \frac{\gamma_u}{m} \sum_{i\in S^{(t)}} \sum_{k=0}^{\tau_u-1} G_{i,u}\big( \tilde u^{(t)}_{i,k}, \tilde v^{(t)}_{i,\tau_v}, z^{(t)}_{i,k} \big) \,. $$
Proof Outline and the Challenge of Dependent Random Variables. We start with
$$ \begin{aligned} F(u^{(t+1)}, V^{(t+1)}) - F(u^{(t)}, V^{(t)}) =\; & F(u^{(t)}, V^{(t+1)}) - F(u^{(t)}, V^{(t)}) \\ & + F(u^{(t+1)}, V^{(t+1)}) - F(u^{(t)}, V^{(t+1)}) \,. \end{aligned} \qquad (14) $$
The first line corresponds to the effect of the $v$-step and the second line to the $u$-step. The former is easy to handle with standard techniques that rely on the smoothness of $F(u^{(t)}, \cdot)$. The latter is more challenging. In particular, the smoothness bound for the $u$-step gives us
$$ F(u^{(t+1)}, V^{(t+1)}) - F(u^{(t)}, V^{(t+1)}) \le \big\langle \nabla_u F(u^{(t)}, V^{(t+1)}),\, u^{(t+1)} - u^{(t)} \big\rangle + \frac{L_u}{2}\big\|u^{(t+1)} - u^{(t)}\big\|^2 \,. $$
The standard proofs of convergence of stochastic gradient methods rely on the fact that we can take an expectation w.r.t. the sampling $S^{(t)}$ of devices for the first order term. However, both $V^{(t+1)}$ and $u^{(t+1)}$ depend on the sampling $S^{(t)}$ of devices. Therefore, we cannot directly take an expectation with respect to the sampling of devices in $S^{(t)}$.

Virtual Full Participation to Circumvent Dependent Random Variables. The crux of the proof lies in replacing $V^{(t+1)}$ in the analysis of the $u$-step with the virtual iterate $\tilde V^{(t+1)}$ so as to move all the dependence of the $u$-step on $S^{(t)}$ to the $u^{(t+1)}$ term. This allows us to take an expectation; it remains to carefully bound the resulting error terms.
Finally, we will arrive at a bound of

1. What is the focus of the paper regarding partial personalization in federated learning?
2. What are the strengths and weaknesses of the proposed algorithms, particularly FedSim and FedAlt?
3. How does the reviewer assess the novelty and contribution of the paper compared to prior works, such as [1]?
4. What are the key challenges in deriving the convergence results for FedAlt in nonconvex cases?
5. How does the reviewer evaluate the significance and interest of the experimental results presented in the paper?
6. How could the authors improve the paper's presentation and emphasis on theoretical challenges and novelty?

Summary Of The Paper

Review
The paper discusses using partial personalization objective function defined in equation (3) to achieve personalized models in federated learning. While the idea of partial personalization and the objective of (3) have been widely studied in previous literatures, the main contributions of the paper are the convergence analysis of two proposed algorithms FedSim and FedAlt in nonconvex case. The authors also did extensive experiments to compare partial personalized models with fully personalized models by using various model structures and on different datasets.
Review
My main concern about the paper is the limited novelty. The idea of partial personalization has been proposed in previous literature. The objective function (3) has been detailly studied in [1]. Moreover, the convergence of FedSim for nonconvex case has also been studied in [1] under very similar assumptions (See Section 3.1 of [1], where the name of FedSim is LSGD-PFL). Thus, the only new result left is the analysis of FedAlt under nonconvex setting. In order to derive Theorem 2, the authors claim that they proposed a new technique called virtual full participation; however, this technique is actually not new and has been used widely in convergence analysis for FedAvg/Local-SGD. This way, the whole contribution of the paper relies on the analysis of FedAlt in nonconvex case based on well-known techniques, which seems quite weak contribution to me. I would recommend the authors to give more detailed explanation on what the key theoretical challenge in deriving Theorem 2 is.
The extensive experiments of using different model structures on different datastes is appreciated. And the observation personalization can hurt the test accuracy on some devices is interesting. But given the idea of partial personalization is proposed elsewhere, the gain from the experiments is limited.
The writing is clear. I can easily understand the paper. Thanks for that.
In a word, given the ideas used in the paper are now new, I think this paper is maybe better written in a pure theoretical one. The key would be to stress the theoretical challenge and novelty. Based on the current version, I believe the contribution is not novel enough or at least it is not stated clearly.
[1] Filip Hanzely, Boxin Zhao, and Mladen Kolar. Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques. arXiv Preprint, 2021. |
ICLR | Title
Bayesian Prediction of Future Street Scenes using Synthetic Likelihoods
Abstract
For autonomous agents to successfully operate in the real world, the ability to anticipate future scene states is a key competence. In real-world scenarios, future states become increasingly uncertain and multi-modal, particularly on long time horizons. Dropout based Bayesian inference provides a computationally tractable, theoretically well grounded approach to learn likely hypotheses/models to deal with uncertain futures and make predictions that correspond well to observations – are well calibrated. However, it turns out that such approaches fall short to capture complex real-world scenes, even falling behind in accuracy when compared to the plain deterministic approaches. This is because the used log-likelihood estimate discourages diversity. In this work, we propose a novel Bayesian formulation for anticipating future scene states which leverages synthetic likelihoods that encourage the learning of diverse models to accurately capture the multi-modal nature of future scene states. We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on Cityscapes dataset. Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting.
N/A
1 INTRODUCTION
The ability to anticipate future scene states which involves mapping one scene state to likely future states under uncertainty is key for autonomous agents to successfully operate in the real world e.g., to anticipate the movements of pedestrians and vehicles for autonomous vehicles. The future states of street scenes are inherently uncertain and the distribution of outcomes is often multi-modal. This is especially true for important classes like pedestrians. Recent works on anticipating street scenes (Luc et al., 2017; Jin et al., 2017; Seyed et al., 2018) do not systematically consider uncertainty.
Bayesian inference provides a theoretically well founded approach to capture both model and observation uncertainty but with considerable computational overhead. A recently proposed approach (Gal & Ghahramani, 2016b; Kendall & Gal, 2017) uses dropout to represent the posterior distribution of models and capture model uncertainty. This approach has enabled Bayesian inference with deep neural networks without additional computational overhead. Moreover, it allows the use of any existing deep neural network architecture with minor changes.
However, when the underlying data distribution is multimodal and the model set under consideration do not have explicit latent state/variables (as most popular deep deep neural network architectures), the approach of Gal & Ghahramani (2016b); Kendall & Gal (2017) is unable to recover the true model uncertainty (see Figure 1 and Osband (2016)). This is because this approach is known to conflate risk and uncertainty (Osband, 2016). This limits the accuracy of the models over a plain deterministic (non-Bayesian) approach. The main cause is the data log-likelihood maximization step
during optimization – for every data point the average likelihood assigned by all models is maximized. This forces every model to explain every data point well, pushing every model in the distribution to the mean. We address this problem through an objective leveraging synthetic likelihoods (Wood, 2010; Rosca et al., 2017) which relaxes the constraint on every model to explain every data point, thus encouraging diversity in the learned models to deal with multi-modality.
In this work: 1. We develop the first Bayesian approach to anticipate the multi-modal future of street scenes and demonstrate state-of-the-art accuracy on the diverse Cityscapes dataset without compromising on calibrated probabilities, 2. We propose a novel optimization scheme for dropout based Bayesian inference using synthetic likelihoods to encourage diversity and accurately capture model uncertainty, 3. Finally, we show that our approach is not limited to street scenes and generalizes across diverse tasks such as digit generation and precipitation forecasting.
2 RELATED WORK
Bayesian deep learning. Most popular deep learning models do not model uncertainty, only a mean model is learned. Bayesian methods (MacKay, 1992; Neal, 2012) on the other hand learn the posterior distribution of likely models. However, inference of the model posterior is computationally expensive. In (Gal & Ghahramani, 2016b) this problem is tackled using variational inference with an approximate Bernoulli distribution on the weights and the equivalence to dropout training is shown. This method is further extended to convolutional neural networks in (Gal & Ghahramani, 2016a). In (Kendall & Gal, 2017) this method is extended to tackle both model and observation uncertainty through heteroscedastic regression. The proposed method achieves state of the art results on segmentation estimation and depth regression tasks. This framework is used in Bhattacharyya et al. (2018a) to estimate future pedestrian trajectories. In contrast, Saatci & Wilson (2017) propose a (unconditional) Bayesian GAN framework for image generation using Hamiltonian Monte-Carlo based optimization with limited success. Moreover, conditional variants of GANs (Mirza & Osindero, 2014) are known to be especially prone to mode collapse. Therefore, we choose a dropout based Bayesian scheme and improve upon it through the use of synthetic likelihoods to tackle the issues with model uncertainty mentioned in the introduction.
Structured output prediction. Stochastic feedforward neural networks (SFNN) and conditional variational autoencoders (CVAE) have also shown success in modeling multimodal conditional distributions. SFNNs are difficult to optimize on large datasets (Tang & Salakhutdinov, 2013) due to the binary stochastic variables. Although there has been significant effort in improving training efficiency (Rezende et al., 2014; Gu et al., 2016), success has been partial. In contrast, CVAEs (Sohn et al., 2015) assume Gaussian stochastic variables, which are easier to optimize on large datasets using the re-parameterization trick. CVAEs have been successfully applied on a large variety of tasks, include conditional image generation (Bao et al., 2017), next frame synthesis (Xue et al., 2016), video generation (Babaeizadeh et al., 2018; Denton & Fergus, 2018), trajectory prediction (Lee et al., 2017) among others. The basic CVAE framework is improved upon in (Bhattacharyya et al., 2018b) through the use of a multiple-sample objective. However, in comparison to Bayesian methods, careful architecture selection is required and experimental evidence of uncertainty calibration is missing. Calibrated uncertainties are important for autonomous/assisted driving, as users need to be able to express trust in the predictions for effective decision making. Therefore, we also adopt a Bayesian approach over SFNN or CVAE approaches.
Anticipation future scene scenes. In (Luc et al., 2017) the first method for predicting future scene segmentations has been proposed. Their model is fully convolutional with prediction at multiple scales and is trained auto-regressively. Jin et al. (2017) improves upon this through the joint prediction of future scene segmentation and optical flow. Similar to Luc et al. (2017) a fully convolutional model is proposed, but the proposed model is based on the Resnet-101 (He et al., 2016) and has a single prediction scale. More recently, Luc et al. (2018) has extended the model of Luc et al. (2017) to the related task of future instance segmentation prediction. These methods achieve promising results and establish the competence of fully convolutional models. In (Seyed et al., 2018) a Convolutional LSTM based model is proposed, further improving short-term results over Jin et al. (2017). However, fully convolutional architectures have performed well at a variety of related tasks, including segmentation estimation (Yu & Koltun, 2016; Zhao et al., 2017), RGB frame prediction
(Mathieu et al., 2016; Babaeizadeh et al., 2018) among others. Therefore, we adopt a standard ResNet based fully-convolutional architecture, while providing a full Bayesian treatment.
3 BAYESIAN MODELS FOR PREDICTION UNDER UNCERTAINTY
We phrase our models in a Bayesian framework, to jointly capture model (epistemic) and observation (aleatoric) uncertainty (Kendall & Gal, 2017). We begin with model uncertainty.
3.1 MODEL UNCERTAINTY
Let x ∈ X be the input (past) and y ∈ Y be the corresponding outcomes. Consider f : x 7→ y, we capture model uncertainty by learning the distribution p(f |X,Y) of generative models f , likely to have generated our data {X,Y}. The complete predictive distribution of outcomes y is obtained by marginalizing over the posterior distribution,
p(y|x,X,Y) = ∫ p(y|x, f)p(f |X,Y)df . (1)
However, the integral in (1) is intractable. But, we can approximate it in two steps (Gal & Ghahramani, 2016b). First, we assume that our models can be described by a finite set of variables ω. Thus, we constrain the set of possible models to ones that can be described with ω. Now, (1) is equivalently,
p(y|x,X,Y) = ∫ p(y|x, ω)p(ω|X,Y)dω . (2)
Second, we assume an approximating variational distribution q(ω) of models which allows for efficient sampling. This results in the approximate distribution,
p(y|x,X,Y) ≈ p(y|x) = ∫ p(y|x, ω)q(ω)dω . (3)
For convolutional models, Gal & Ghahramani (2016a) proposed a Bernoulli variational distribution defined over each convolutional patch. The number of possible models is exponential in the number of patches. This number could be very large, making it difficult optimize over this very large set of models. In contrast, in our approach (4), the number possible models is exponential in the number of weight parameters, a much smaller number. In detail, we choose the set of convolutional kernels and the biases {(W1, b1), . . . , (WL, bL)} ∈ W of our model as the set of variables ω. Then, we define the following novel approximating Bernoulli variational distribution q(ω) independently over each element wi,jk′,k (correspondingly bk) of the kernels and the biases at spatial locations {i, j},
q(WK) =MK ZK zi,jk′,k = Bernoulli(pK), k ′ = 1, . . . , |K ′|, k = 1, . . . , |K| . (4)
Note, denotes the hadamard product, Mk are tuneable variational parameters, zi,jk′,k ∈ ZK are the independent Bernoulli variables, pK is a probability tensor equal to the size of the (bias) layer, |K| (|K ′|) is the number of kernels in the current (previous) layer. Here, pK is chosen manually. Moreover, in contrast to Gal & Ghahramani (2016a), the same (sampled) kernel is applied at each spatial location leading to the detection of the same features at varying spatial locations. Next, we describe how we capture observation uncertainty.
3.2 OBSERVATION UNCERTAINTY
Observation uncertainty can be captured by assuming an appropriate distribution of observation noise and predicting the sufficient statistics of the distribution (Kendall & Gal, 2017). Here, we assume a Gaussian distribution with diagonal covariance matrix at each pixel and predict the mean vector µi,j and co-variance matrix σi,j of the distribution. In detail, the predictive distribution of a generative model draw from ω̂ ∼ q(ω) at a pixel position {i, j} is,
pi,j(y|x, ω̂) = N ( (µi,j |x, ω̂), (σi,j |x, ω̂) ) . (5)
We can sample from the predictive distribution p(y|x) (3) by first sampling the weight matrices ω from (4) and then sampling from the Gaussian distribution in (5). We perform the last step by the linear transformation of a zero mean unit diagonal variance Gaussian, ensuring differentiability,
ŷi,j ∼ µi,j(x|ω̂) + z × σi,j(x|ω̂), where p(z) is N (0, I) and ω̂ ∼ q(ω) . (6)
where, ŷi,j is the sample drawn at a pixel position {i, j} through the liner transformation of z (a vector) with the predicted mean µi,j and variance σi,j . In case of street scenes, yi,j is a class-confidence vector and sample of final class probabilities is obtained by pushing ŷi,j through a softmax.
3.3 TRAINING
For a good variational approximation (3), our approximating variational distribution of generative models q(ω) should be close to the true posterior p(ω|X,Y). Therefore, we minimize the KL divergence between these two distributions. As shown in Gal & Ghahramani (2016b;a); Kendall & Gal (2017) the KL divergence is given by (over i.i.d data points),
KL(q(ω) || p(ω|X,Y)) ∝KL(q(ω) || p(ω))− ∫ q(ω) log p(Y|X, ω)dω
=KL(q(ω) || p(ω))− ∫ q(ω) ( ∫ log p(y|x, ω)d(x, y) ) dω.
=KL(q(ω) || p(ω))− ∫ ( ∫ q(ω) log p(y|x, ω)dω ) d(x, y).
(7)
The log-likelihood term at the right of (7) considers every model for every data point. This imposes the constraint that every data point must be explained well by every model. However, if the data distribution (x, y) is multi-modal, this would push every model to the mean of the multi-modal distribution (as in Figure 1 where only way for models to explain both modes is to converge to the mean). This discourages diversity in the learned modes. In case of multi-modal data, we would not be able to recover all likely models, thus hindering our ability to fully capture model uncertainty. The models would be forced to explain the data variation as observation noise (Osband, 2016), thus conflating model and observation uncertainty. We propose to mitigate this problem through the use of an approximate objective using synthetic likelihoods (Wood, 2010; Rosca et al., 2017) – obtained from a classifier. The classifier estimates the likelihood based on whether the models ω̂ ∼ q(ω) explain (generate) data samples likely under the true data distribution p(y|x). This removes the constraint on models to explain every data point – it only requires the explained (generated) data points to be likely under the data distribution. Thus, this allows models ω̂ ∼ q(ω) to be diverse and deal with multi-modality. Next, we reformulate the KL divergence estimate of (7) to a likelihood ratio form which allows us to use a classifier to estimate (synthetic) likelihoods, (also see Appendix),
=KL(q(ω) || p(ω))− ∫ ( ∫ q(ω) log p(y|x, ω)dω ) d(x, y).
=KL(q(ω) || p(ω))− ∫ (∫ q(ω) ( log
p(y|x, ω) p(y|x)
+ log p(y|x) ) dω ) d(x, y).
∝KL(q(ω) || p(ω))− ∫ ∫
q(ω) log p(y|x, ω) p(y|x) dω d(x, y).
(8)
In the second step of (8), we divide and multiply the probability assigned to a data sample by a model p(y|x, ω) by the true conditional probability p(y|x) to obtain a likelihood ratio. We can estimate the KL divergence by equivalently estimating this ratio rather than the true likelihood. In order to (synthetically) estimate this likelihood ratio, let us introduce the variable θ to denote, p(y|x, θ = 1) the probability assigned by our model ω to a data sample (x, y) and p(y|x, θ = 0) the true probability of the sample. Therefore, the ratio in the last term of (8) is,
=KL(q(ω) || p(ω))− ∫ ∫
q(ω) log p(y|x, θ = 1) p(y|x, θ = 0) dω d(x, y).
=KL(q(ω) || p(ω))− ∫ ∫
q(ω) log p(θ = 1|x, y) p(θ = 0|x, y) dω d(x, y). (Using Bayes theorem)
=KL(q(ω) || p(ω))− ∫ ∫ q(ω) log p(θ = 1|x, y)
1− p(θ = 1|x, y) dω d(x, y).
(9)
In the last step of (9) we use the fact that the events θ = 1 and θ = 0 are mutually exclusive. We can approximate the ratio p(θ=1|x,y)1−p(θ=1|x,y) by jointly learning a discriminator D(x, ŷ) that can distinguish between samples of the true data distribution and samples (x, ŷ) generated by the model ω, which provides a synthetic estimate of the likelihood, and equivalently integrating directly over (x, ŷ),
≈KL(q(ω) || p(ω))− ∫ ∫ q(ω) log ( D(x, ŷ) 1−D(x, ŷ) ) dω d(x, ŷ). (10)
Note that the synthetic likelihood ( D(x,ŷ) 1−D(x,ŷ) ) is independent of any specific pair (x, y) of the true data distribution (unlike the log-likelihood term in (7)), its value depends only upon whether the generated data point (x, ŷ) by the model ω is likely under the true data distribution p(y|x). Therefore, the models ω have to only generate samples (x, ŷ) likely under the true data distribution. The models need not explain every data point equally well. Therefore, we do not push the models ω to the mean, thus allowing them to be diverse and allowing us to better capture uncertainty.
Empirically, we observe that a hybrid log-likelihood term using both the log-likelihood terms of (10) and (7) with regularization parameters α and β (with α ≥ β) stabilizes the training process,
α ∫ ∫ q(ω) log ( D(x, ŷ) 1−D(x, ŷ) ) dω d(x, ŷ) + β ∫ ∫ q(ω) log p(y|x, ω)dω d(x, y). (11)
Note that, although we do not explicitly require the posterior model distribution to explain all data points, due to the exponential number of models afforded by dropout and the joint optimization (min-max game) of the discriminator, empirically we see very diverse models explaining most data points. Moreover, empirically we also see that predicted probabilities remain calibrated. Next, we describe the architecture details of our generative models ω and the discriminator D(x, ŷ).
3.4 MODEL ARCHITECTURE FOR STREET SCENE PREDICTION
The architecture of our ResNet based generative models in our model distribution q(ω) is shown in Figure 2. The generative model takes as input a sequence of past segmentation class-confidences sp, the past and future vehicle odometry op, of (x = {sp, op, of}) and produces the class-confidences at the next time-step as output. The additional conditioning on vehicle odometry is because the sequences are recorded in frame of reference of a moving vehicle and therefore the future observed sequence is dependent upon the vehicle trajectory. We use recursion to efficiently predict a sequence of future scene segmentations y = {sf}. The discriminator takes as input sf and classifies whether it was produced by our model or is from the true data distribution.
In detail, generative model architecture consists of a fully convolutional encoder-decoder pair. This architecture builds upon prior work of Luc et al. (2017); Jin et al. (2017), however with key differences. In Luc et al. (2017), each of the two levels of the model architecture consists of only five convolutional layers. In contrast, our model consists of one level with five convolutaional blocks. The encoder contains three residual blocks with max-pooling in between and the decoder consists of a residual and a convolua-
tional block with up-sampling in between. We double the size of the blocks following max-pooling in order to preserve resolution. This leads to a much deeper model with fifteen convolutional layers, with constant spatial convolutional kernel sizes. This deep model with pooling creates a wide receptive field and helps better capture spatio-temporal dependencies. The residual connections help in the optimization of such a deep model. Computational resources allowing, it is possible to add more levels to our model. In Jin et al. (2017) a model is considered which uses a Res101-FCN as an encoder. Although this model has significantly more layers, it also introduces a large amount of pooling. This leads to loss of resolution and spatial information, hence degrading performance. Our discriminator model consists of six convolutional layers with max-pooling layers in-between, followed by two fully connected layers. Finally, in Appendix E we provide layer-wise details and discuss the reduction of number of models in q(ω) through the use of Weight Dropout (4) for our architecture of generators.
4 EXPERIMENTS
Next, we evaluate our approach on MNIST digit generation and street scene anticipation on Cityscapes. We further evaluate our model on 2D data (Figure 1) and precipitation forecasting in the Appendix.
4.1 MNIST DIGIT GENERATION
Here, we aim to generate the full MNIST digit given only the lower left quarter of the digit. This task serves as an ideal starting point as in many cases there are multiple likely completions given the lower left quarter digit, e.g. 5 and 3. Therefore, the learned model distribution q(ω) should contain likely models corresponding to these completions. We use a fully connected generator with 6000-4000-2000 hidden units with 50% dropout probability. The discriminator has 1000-1000 hidden units with leaky ReLU non-linearities. We set β = 10−4 for the first 4 epochs and then reduce it to 0, to provide stability during the initial epochs. We compare our synthetic likelihood based approach (Bayes-SL) with, 1. A non-Bayesian mean model, 2. A standard Bayesian approach (Bayes-S), 3. A Conditional Variational Autoencoder (CVAE) (architecture as in Sohn et al. (2015)). As evaluation metric we consider (oracle) Top-k% accuracy (Lee et al., 2017). We use a standard Alex-Net based classifier to measure if the best prediction corresponds to the ground-truth class – identifies the correct mode – in Table 3 (right) over 10 splits of the MNIST test-set. We sample 10 models from our learned distribution and consider the best model. We see that our Bayes-SL performs best, even outperforming the CVAE model. In the qualitative examples in Table 3 (left), we see that generations from models ω̂ ∼ q(ω) sampled from our learned model distribution corresponds to clearly defined digits (also in comparision to Figure 3 in Sohn et al. (2015)). In contrast, we see that the Bayes-S model produces blurry digits. All sampled models have been pushed to the mean and shows little advantage over a mean model.
4.2 CITYSCAPES STREET SCENE ANTICIPATION
Next, we evaluate our apporach on the Cityscapes dataset – anticipating scenes more than 0.5 seconds into the future. The street scenes already display considerable multi-modality at this time-horizon.
Evaluation metrics and baselines. We use PSPNet Zhao et al. (2017) to segment the full training sequences as only the 20th frame has groundtruth annotations. We always use the annotated 20th frame of the validation sequences for evaluation using the standard mean Intersection-over-Union (mIoU) and the per-pixel (negative) conditional log-likelihood (CLL) metrics. We consider the following baselines for comparison to our Resnet based (architecture in Figure 2) Bayesian (BayesWD-SL) model with weight dropout and trained using synthetic likelihoods: 1. Copying the last seen input; 2. A non-Bayesian (ResG-Mean) version; 3. A Bayesian version with standard patch dropout (Bayes-S); 4. A Bayesian version with our weight dropout (Bayes-WD). Note that, combination of ResG-Mean with an adversarial loss did not lead to improved results (similar observations made in Luc et al. (2017)). We use grid search to set the dropout rate (in (4)) to 0.15 for the Bayes-S and 0.20 for Bayes-WD(-SL) models. We set α, β = 1 for our Bayes-WD-SL model. We train all models using Adam (Kingma & Ba, 2015) for 50 epochs with batch size 8. We use one sample to train the Bayesian methods as in Gal & Ghahramani (2016a) and use 100 samples during evaluation.
Comparison to state of the art. We begin by comparing our Bayesian models to state-of-the-art methods Luc et al. (2017); Seyed et al. (2018) in Table 1. We use the mIoU metric and for a fair comparison consider the mean (of all samples) prediction of our Bayesian models. We alwyas compare to the groundtruth segmentations of the validation set. However, as all three methods use a slightly different semantic segmentation algorithm (Table 2) to generate training and input test data, we include the mIoU achieved by the Last Input of all three methods (see Appendix C for results
Table 1: Comparing mean predictions to the state-of-the-art.
Timestep
Method +0.06sec +0.18sec +0.54sec
Last Input (Luc et al. (2017)) x 49.4 36.9 Luc et al. (2017) (ft) x 59.4 47.8 Last Input (Seyed et al. (2018)) 62.6 51.0 x Seyed et al. (2018) 71.3 60.0 x Last Input (Ours) 67.1 52.1 38.3 Bayes-S (mean) 71.2 64.8 45.7 Bayes-WD (mean) 73.7 63.5 44.0 Bayes-WD-SL (mean) 74.1 64.8 45.9 Bayes-WD-SL (ft, mean) x 65.1 51.2 Bayes-WD-SL (top 5%) 75.3 65.2 49.5 Bayes-WD-SL (ft, top 5%) x 66.7 52.5
Table 2: Comparison of segmentation estimation methods on Cityscapes validation set.
Method mIoU
Dilation10 (Luc et al., 2017) 68.8 PSPNet (Seyed et al., 2018) 75.7 PSPNet (Ours) 76.9
using Dialation 10). Similar to Luc et al. (2017) we fine-tune (ft) to predict at 3 frame intervals for better performance at +0.54sec. Our Bayes-WD-SL model outperforms baselines and improves on prior work by 2.8 mIoU at +0.06sec and 4.8 mIoU/3.4 mIoU at +0.18sec/+0.54sec respectively. Our Bayes-WD-SL model also obtains higher relative gains in comparison to Luc et al. (2017) with respect the Last Input Baseline. These results validate our choice of model architecture and show that our novel approach clearly outperforms the state-of-the-art. The performance advantage of Bayes-WD-SL over Bayes-S shows that the ability to better model uncertainty does not come at the cost of lower mean performance. However, at larger time-steps as the future becomes increasingly uncertain, mean predictions (mean of all likely futures) drift further from the ground-truth. Therefore, next we evaluate the models on their (more important) ability to capture the uncertainty of the future.
Evaluation of predicted uncertainty. Next, we evaluate whether our Bayesian models are able to accurately capture uncertainity and deal with multi-modal futures, upto t + 10 frames (0.6 seconds) in Table 3. We consider the mean of (oracle) best 5% of predictions (Lee et al. (2017)) of our Bayesian models to evaluate whether the learned model distribution q(ω) contains likely models corresponding to the groundtruth. We see that the best predictions considerably improve over the mean predictions – showing that our Bayesian models learns to capture uncertainity and deal with multi-modal futures. Quantitatively, we see that the Bayes-S model performs worst, demonstrating again that standard dropout (Kendall & Gal, 2017) struggles to recover the true model uncertainity. The use of weight dropout improves the performance to the level of the ResG-Mean model. Finally, we see that our Bayes-WD-SL model performs best. In fact, it is the only Bayesian model whose (best) performance exceeds that of the ResG-Mean model (also outperforming state-of-the-art), demonstrating the effectiveness of synthetic likelihoods during training. In Figure 5 we show examples comparing the best prediction of our Bayes-WD-SL model and ResG-Mean at t + 9. The last row highlights the differences between the predictions – cyan shows areas where our Bayes-WD-SL is correct and ResG-Mean is wrong, red shows the opposite. We see that our Bayes-WD-SL performs better at classes like cars and pedestrians which are harder to predict (also in comparison to Table 5 in Luc et al. (2017)). In Figure 6, we show samples from randomly sampled models ω̂ ∼ q(ω), which shows correspondence to the range of possible movements of bicyclists/pedestrians. Next, we further evaluate the models with the CLL metric in Table 3. We consider the mean predictive distributions (3) up to t + 10 frames. We see that the Bayesian models outperform the ResG-Mean model significantly. In particular, we see that our Bayes-WD-SL model performs the best, demonstrating that the learned model and observation uncertainty corresponds to the variation in the data.
Comparison to a CVAE baseline. As there exists no CVAE (Sohn et al., 2015) based model for future segmentation prediction, we construct a baseline as close as possible to our Bayesian models
based on existing CVAE based models for related tasks (Babaeizadeh et al., 2018; Xue et al., 2016). Existing CVAE based models (Babaeizadeh et al., 2018; Xue et al., 2016) contain a few layers with Gaussian input noise. Therefore, for a fair comparison we first conduct a study in Table 4 to find the layers which are most effective at capturing data variation. We consider Gaussian input noise applied in the first, middle or last convolutional blocks. The noise is input dependent during training, sampled from a recognition network (see Appendix). We observe that noise in the last layers can better capture data variation. This is because the last layers capture semantically higher level scene features. Overall, our Bayesian approach (Bayes-WD-SL) performs the best. This shows that the CVAE model is not able to effectively leverage Gaussian noise to match the data variation.
Uncertainty calibration. We further evaluate predicted uncertainties by measuring their calibration – the correspondence between the predicted probability of a class and the frequency of its occurrence in the data. As in Kendall & Gal (2017), we discretize the output probabilities of the mean predicted distribution into bins and measure the frequency of correct predictions for each bin. We report the results at t + 10 frames in Figure 4. We observe that all Bayesian approaches outperform the ResG-Mean and CVAE versions. This again demonstrates the effectiveness of the Bayesian approaches in capturing uncertainty.
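A minimal sketch of the binning procedure described above is given below; the number of bins and the toy data are assumptions for illustration rather than the exact protocol of Kendall & Gal (2017).

```python
import numpy as np

def reliability_bins(probs, correct, n_bins=10):
    """Per bin: mean predicted probability and empirical frequency of correct
    predictions, i.e. the two quantities compared in a calibration plot."""
    probs = np.asarray(probs, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(probs, edges[1:-1]), 0, n_bins - 1)
    stats = []
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            stats.append((probs[mask].mean(), correct[mask].mean()))
    return stats

# Toy usage: per-pixel confidence of the predicted class and whether it was correct.
rng = np.random.default_rng(0)
conf = rng.uniform(size=10000)
hit = (rng.uniform(size=10000) < conf).astype(float)  # perfectly calibrated toy data
print(reliability_bins(conf, hit, n_bins=5))
```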
5 CONCLUSION
We propose a novel approach for predicting real-world semantic segmentations into the future that casts a convolutional deep learning approach into a Bayesian formulation. One of the key contributions is a novel optimization scheme that uses synthetic likelihoods to encourage diversity and deal with multi-modal futures. Our proposed method shows state of the art performance in challenging street scenes. More importantly, we show that the probabilistic output of our deep learning architecture captures uncertainty and multi-modality inherent to this task. Furthermore, we show that the developed methodology goes beyond just street scene anticipation and creates new opportunities to enhance high performance deep learning architectures with principled formulations of Bayesian inference.
APPENDIX A. DETAILED DERIVATIONS.
KL divergence estimate. Here, we provide a detailed derivation of (8). Starting from (7), we have,
KL(q(ω) || p(ω|X,Y)) ∝ KL(q(ω) || p(ω)) − ∫ q(ω) log p(Y|X, ω) dω
= KL(q(ω) || p(ω)) − ∫ q(ω) ( ∫ log p(y|x, ω) d(x, y) ) dω   (over i.i.d. (x, y) ∈ (X,Y))
= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) log p(y|x, ω) dω ) d(x, y).   (S1)

Multiplying and dividing by p(y|x), the true probability of occurrence,

= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) ( log [p(y|x, ω) / p(y|x)] + log p(y|x) ) dω ) d(x, y).   (S2)

Using ∫ q(ω) dω = 1,

= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) log [p(y|x, ω) / p(y|x)] dω + log p(y|x) ) d(x, y)
= KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(y|x, ω) / p(y|x)] dω d(x, y) − ∫ log p(y|x) d(x, y).   (S3)

As ∫ log p(y|x) d(x, y) is independent of ω, the variables we are optimizing over, we have

∝ KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(y|x, ω) / p(y|x)] dω d(x, y).   (S4)
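As a sanity check of the likelihood-ratio view used above and in (9)-(10), the short script below verifies numerically that, for two known one-dimensional densities and equal class priors, the log-odds of the Bayes-optimal discriminator equal the true log density ratio. The Gaussian densities are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Two known densities standing in for p(y|x, omega) (model) and p(y|x) (data).
p_model = norm(loc=1.0, scale=1.0)
p_data = norm(loc=0.0, scale=1.5)
y = np.linspace(-3.0, 3.0, 7)

# Bayes-optimal discriminator for equal class priors:
# D(y) = p_model(y) / (p_model(y) + p_data(y)), hence log D/(1-D) = log p_model/p_data.
D = p_model.pdf(y) / (p_model.pdf(y) + p_data.pdf(y))
log_odds = np.log(D / (1.0 - D))
log_ratio = p_model.logpdf(y) - p_data.logpdf(y)
print(np.allclose(log_odds, log_ratio))  # True: the discriminator recovers the ratio
```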
APPENDIX B. RESULTS ON SIMPLE MULTI-MODAL 2D DATA.
We show results on the simple multi-modal 2D data used in the motivating example in the introduction. The data consists of two parts: for x ∈ [−10, 0] we have y = 0, and for x ∈ [0, 10] we have y ∈ {−0.3, 0.3}. The set of models under consideration is a neural network with two hidden layers of 256 and 128 neurons and 50% dropout. We show 10 randomly sampled models ω̂ ∼ q(ω) learned by the Bayes-S approach in Figure 7 and by our Bayes-SL approach in Figure 8 (with α = 1, β = 0). We assume constant observation uncertainty (= 1). We clearly see that our Bayes-SL learns models which cover both modes, while all the models learned by Bayes-S fit to the mean, clearly showing that our approach can better capture model uncertainty.
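A minimal sketch of this toy experiment is given below, using the hidden sizes and dropout rate stated above; the optimizer, learning rate, number of steps, and the plain squared-error objective (i.e. the Bayes-S baseline rather than the synthetic-likelihood training) are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy data: y = 0 for x in [-10, 0], y in {-0.3, 0.3} for x in [0, 10].
x = torch.rand(2000, 1) * 20.0 - 10.0
modes = torch.where(torch.rand(2000, 1) < 0.5,
                    torch.full((2000, 1), -0.3), torch.full((2000, 1), 0.3))
y = torch.where(x < 0, torch.zeros_like(x), modes)

net = nn.Sequential(
    nn.Linear(1, 256), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(256, 128), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)  # optimizer settings are assumptions

for _ in range(200):  # shortened training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)  # plain Bayes-S style objective
    loss.backward()
    opt.step()

# MC-dropout sampling: keep dropout active at test time, one forward pass per model.
net.train()
x_test = torch.linspace(-10.0, 10.0, 5).unsqueeze(1)
with torch.no_grad():
    samples = torch.stack([net(x_test) for _ in range(10)])  # 10 models from q(omega)
print(samples.squeeze(-1))
```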
APPENDIX C. ADDITIONAL DETAILS AND EVALUATION ON STREET SCENES.
First, we provide additional training details of our Bayes-WD-SL in Table 5.
Second, we provide additional evaluation on street scenes. In Section 4.2 (Table 1) we use a PSPNet to generate training segmentations for our Bayes-WD-SL model to ensure a fair comparison with the state-of-the-art (Seyed et al., 2018). However, the method of Luc et al. (2017) uses a weaker Dilation 10 approach to generate training segmentations. Note that in Table 1 our Bayes-WD-SL model already obtains higher gains than Luc et al. (2017) with respect to the Last Input baseline, e.g. at +0.54sec, 47.8 - 36.9 = 10.9 mIoU, translating to a 29.5% gain over the Last Input baseline for Luc et al. (2017), versus 51.2 - 38.3 = 12.9 mIoU, translating to a 33.6% gain over the Last Input baseline for our Bayes-WD-SL model. But for fairness, here we additionally include results in Table 6 using the same Dilation 10 approach to generate training segmentations.
We observe that our Bayes-WD-SL model beats the model of Luc et al. (2017) in both short-term (+0.18 sec) and long-term predictions (+0.54 sec). Furthermore, we see that the mean of the Top 5% of the predictions of Bayes-WD-SL leads to much improved results over mean predictions. This again confirms the ability of our Bayes-WD-SL model to capture uncertainty and deal with multi-modal futures.
APPENDIX D. RESULTS ON HKO PRECIPITATION FORECASTING DATA.
The HKO radar echo dataset consists of weather radar intensity images. We use the train/test split used in Xingjian et al. (2015); Bhattacharyya et al. (2018b). Each sequence consists of 20 frames. We use 5 frames as input and 15 for prediction. Each frame is recorded at an interval of 6 minutes; therefore, the sequences display considerable uncertainty. We use the same network architecture as used for street scene segmentation with Bayes-WD-SL (Figure 2, with α = 5, β = 1), but with half the convolutional filters at each level. We compare to the following baselines: 1. A deterministic model (ResG-Mean), 2. A Bayesian model with weight dropout. We report the (oracle) Top-10% scores (best 1 of 10) over the following metrics (Xingjian et al., 2015; Bhattacharyya et al., 2018b): 1. Rainfall-MSE: Rainfall mean squared error, 2. CSI: Critical success index, 3. FAR: False alarm rate, 4. POD: Probability of detection, and 5. Correlation, in Table 7.
Note that Xingjian et al. (2015); Bhattacharyya et al. (2018b) report only scores over the mean of all samples. Our ResG-Mean model outperforms these state-of-the-art methods, showing the versatility of our model architecture. Our Bayes-WD-SL model can outperform the strong ResG-Mean baseline, again showing that it learns to capture uncertainty (see Figure 10). In comparison, the Bayes-WD baseline struggles to outperform the ResG-Mean baseline.
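For reference, a small sketch of how the contingency-table metrics (CSI, FAR, POD) can be computed from thresholded intensity maps is given below; the 0.5 threshold and the toy data are assumptions rather than the benchmark's exact settings.

```python
import numpy as np

def csi_far_pod(pred, truth, threshold=0.5):
    """Contingency-table metrics after thresholding predicted and observed radar
    intensity into rain / no-rain."""
    p = np.asarray(pred) >= threshold
    t = np.asarray(truth) >= threshold
    hits = int(np.sum(p & t))
    misses = int(np.sum(~p & t))
    false_alarms = int(np.sum(p & ~t))
    csi = hits / max(hits + misses + false_alarms, 1)  # critical success index
    far = false_alarms / max(hits + false_alarms, 1)   # false alarm rate
    pod = hits / max(hits + misses, 1)                 # probability of detection
    return csi, far, pod

rng = np.random.default_rng(0)
truth = rng.uniform(size=(64, 64))
pred = np.clip(truth + rng.normal(scale=0.1, size=truth.shape), 0.0, 1.0)
print(csi_far_pod(pred, truth, threshold=0.5))
```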
We further compare the calibration of our Bayes-WD-SL model to the ResG-Mean model in Figure 9. We plot the predicted intensity against the true mean observed intensity. The difference to the ResG-Mean model is stark in the high intensity region. The ResG-Mean model deviates strongly from the diagonal in this region – it overestimates the radar intensity. In comparison, we see that our Bayes-WD-SL approach stays closer to the diagonal. These results again show that our synthetic likelihood based approach leads to more accurate predictions while not compromising on calibration.
APPENDIX E. ADDITIONAL ARCHITECTURE DETAILS.
Here, we provide layer-wise details of our generative and discriminative models in Table 8 and Table 9. We provide layer-wise details of the recognition network of the CVAE baseline used in Table 4 (in the main paper) in Table 10. Finally, in Table 11 we show the difference in the number of possible models using our weight based variational distribution (4) (weight dropout) versus the patch based variational distribution (patch dropout) proposed in Gal & Ghahramani (2016a). The number of patches is calculated using the formula,
Input Resolution × # Output Convolutional Filters,
because we use a convolutional stride of 1 and padding to ensure the same output resolution, and each patch is dropped out independently for each convolutional filter (in Gal & Ghahramani (2016a)). The number of weight parameters is given by the formula,
Filter size × # Input Convolutional Filters × # Output Convolutional Filters + # Bias.
Table 11 shows that our weight dropout scheme results in a significantly lower number of parameters compared to patch dropout (Gal & Ghahramani, 2016a).
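A small helper for reproducing the kind of counts reported in Table 11 from the two formulas above is sketched below; the example resolution and channel numbers are illustrative assumptions, not the exact rows of the table.

```python
def patch_dropout_variables(input_resolution, out_channels):
    # One Bernoulli variable per spatial patch and output filter (Gal & Ghahramani, 2016a).
    h, w = input_resolution
    return h * w * out_channels

def weight_dropout_variables(filter_size, in_channels, out_channels):
    # One Bernoulli variable per kernel weight plus one per bias (our scheme, Eq. (4)).
    kh, kw = filter_size
    return kh * kw * in_channels * out_channels + out_channels

# Illustrative layer: 128x256 input resolution, 3x3 kernels, 64 -> 128 channels.
print(patch_dropout_variables((128, 256), 128))   # 4,194,304 patch variables
print(weight_dropout_variables((3, 3), 64, 128))  # 73,856 weight variables
```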
Details of our discriminator model. We show the layer wise details in Table 9.
Details of the recognition model used in the CVAE baseline. We show the layer wise details in Table 10.
Layer     Type             Size   Activation   Input      Output
In1       Input            -      -            x, y       Conv1,1
Conv1,1   Conv2D           128    ReLU         In1        Conv1,2
Conv1,2   Conv2D           128    ReLU         Conv1,1    MaxPool1
MaxPool1  Max Pooling      2x2    -            Conv1,2    Conv2,1
Conv2,1   Conv2D           256    ReLU         MaxPool1   Conv2,2
Conv2,2   Conv2D           256    ReLU         Conv2,1    MaxPool2
MaxPool2  Max Pooling      2x2    -            Conv2,2    Conv3,1
Conv3,1   Conv2D           512    ReLU         MaxPool2   MaxPool3
MaxPool3  Max Pooling      2x2    -            Conv3,1    Conv4,1
Conv4,1   Conv2D           512    ReLU         MaxPool3   MaxPool4
MaxPool4  Max Pooling      2x2    -            Conv4,1    Flatten
Flatten   Flatten          -      -            MaxPool4   Dense1
Dense1    Fully Connected  1024   ReLU         Flatten    Dense2
Dense2    Fully Connected  1024   ReLU         Dense1     Out
Out       Fully Connected  -      -            Dense2     -
Table 9: Details of our discriminator model. The final output Out provides the synthetic likelihoods D(x, ŷ)/(1 − D(x, ŷ)).
Layer     Type             Size   Activation   Input      Output
In1       Input            -      -            x, y       Conv1,1
Conv1,1   Conv2D           128    ReLU         In1        Conv1,2
Conv1,2   Conv2D           128    ReLU         Conv1,1    MaxPool1
MaxPool1  Max Pooling      2x2    -            Conv1,2    Conv2,1
Conv2,1   Conv2D           128    ReLU         MaxPool1   Conv2,2
Conv2,2   Conv2D           128    ReLU         Conv2,1    MaxPool2
MaxPool2  Max Pooling      2x2    -            Conv2,2    Conv3,1
Conv3,1   Conv2D           128    ReLU         MaxPool2   Conv4,1
Conv4,1   Conv2D           128    ReLU         Conv3,1    UpSamp1
UpSamp1   Up Sampling      2x2    -            Conv4,1    Conv5,1
Conv5,1   Conv2D           128    ReLU         UpSamp1    UpSamp2
UpSamp2   Up Sampling      2x2    -            Conv5,1    Conv6,1-6,3
Conv6,1   Conv2D           32     -            UpSamp2    z1
Conv6,2   Conv2D           32     -            UpSamp2    z2
Conv6,3   Conv2D           32     -            UpSamp2    z3
Table 10: Details of the recognition model used in the CVAE baseline. The final outputs are the Gaussian noise tensors z1, z2, z3. | 1. What are the key contributions of the paper, particularly regarding Bayesian neural networks and GANs?
2. How does the proposed approach reduce complexity compared to previous methods?
3. What are the limitations of the experimental validation provided in the paper?
4. Are there any uncertainties or confusions regarding the presentation of the proposed method and its components?
5. How does the reviewer assess the overall quality and impact of the paper? | Review | Review
The work proposes a Bayesian neural network model that is a hybrid between autoencoders and GANs, although it is not presented like that. Specifically, the paper starts from a Bayesian Neural Network model, as presented in Gal and Ghahramani, 2016 and makes two modifications.
First, it proposes to define one Bernoulli variational distribution per weight kernel, instead of per patch (in the original work there was one Bernoulli distribution per patch kernel). As the paper claims, this reduces the complexity to be exponential in the number of weights, instead of the number of patches, which leads to a much smaller number of possible models. Also, because of this modification the same variational distributions are shared between locations, being closer to the convolutional nature of the model.
The second modification is the introduction of synthetic likelihoods. Specifically, in the original network the variational distributions are designed such that the KL-divergence between the true posterior p(ω|X, y) and the approximate posterior q(ω) is minimized. This leads to the optimizer encouraging the final model to be close to the mean, thus resulting in less diversity. By re-formulating the KL-divergence, the final objective can be written such that it depends on the likelihood ratio between generated/"fake" samples and "true" data samples. This ratio can then be approximated by a GAN-like discriminator. As the optimizer now is forced to care about the ratio instead of individual samples, the model is more diverse.
Both modifications present some interesting ideas. Specifically, the number of variational parameters is reduced, so the final models could scale much better. Also, using synthetic likelihoods in a Bayesian context is novel, to the best of my knowledge, and does seem to be somewhat empirically justified.
The negative points of the paper are the following.
- The precise novelty of the first modification is not clearly explained. Indeed, the number of possible models with the proposed approach is reduced. However, to what degree is it reduced? With some rough calculations, for an input image of resolution 224x224, with a kernel size of 3x3 and stride 1, there should be about 90x90 patches. That is roughly a complexity of O(N^2) ~ 8K (N is the number of patches). Considering the proposed variational distributions with 512 output channels, this amounts to 3x3x512 ~ 4.5K. So, is the advantage mostly when the spatial resolution of the image is very high? What about intermediate layers, where the resolution is typically smaller?
- Although seemingly ok, the experimental validation has some unclarities.
+ First, it is not clear whether it is fair in the MNIST experiment to report results only from the best sampled model, especially considering that the difference from the CVAE baseline is only 0.5%. The standard deviation should also be reported.
+ In Table 2 it is not clear what is compared against what. There are three different variants of the proposed model. The WD-SL variant performs exactly on par with the Bayes-Standard one (although for some reason the boldface font is used only for the proposed method). The improvement appears to come from the synthetic likelihoods. Then, there is another "fine-tuned" variant for which only a single time step is reported, namely +0.54 sec. Why not report numbers for all three future time steps? Then, the fine-tuned version (WD-SL-ft) is clearly better than the best baselines of Luc et al., however, the segmentation networks are also quite different (about 7% difference in mIoU), so it is not clear if the improvement really comes from the synthetic likelihoods or from the better segmentation network. In short, the only configuration that appears to be convincing as-is is the +0.06 sec one. I would ask the authors to fill in the blank X spots and repeat fair experiments with the baseline.
- Generally, although the paper is written reasonably well, there are several unclear points.
+ Z_K in eq. (4) is not defined, although I guess it's the matrix of the z^{i, j}_{k, k'}
+ In eq (6) is the z x σ a matrix or a scalar operation? Is z a matrix or a scalar?
+ The whole Section 3.4 is confusing and it feels as if it is there to fill up space. There is a rather intricate architecture, but it is not clear where it is used. In the first experiment a simple fully connected network is used. In the second experiment a ResNet is used. So, where is the Section 3.4 model used?
+ In the first experiment a fully connected network is used, although the first novelty is about convolutions. I suppose the convolutions are not used here? If not, is that a fair experiment to outline the contributions of the method?
+ It is not clear why considering the mean of the best 5% of predictions helps with evaluating the predicted uncertainty. I understand that this follows the cited work, but an explanation is still needed.
All in all, there are some interesting ideas; however, clarifications are required before acceptance can be considered. |
ICLR | Title
Bayesian Prediction of Future Street Scenes using Synthetic Likelihoods
Abstract
For autonomous agents to successfully operate in the real world, the ability to anticipate future scene states is a key competence. In real-world scenarios, future states become increasingly uncertain and multi-modal, particularly on long time horizons. Dropout based Bayesian inference provides a computationally tractable, theoretically well-grounded approach to learn likely hypotheses/models to deal with uncertain futures and make predictions that correspond well to observations, that is, are well calibrated. However, it turns out that such approaches fall short of capturing complex real-world scenes, even falling behind plain deterministic approaches in accuracy. This is because the used log-likelihood estimate discourages diversity. In this work, we propose a novel Bayesian formulation for anticipating future scene states which leverages synthetic likelihoods that encourage the learning of diverse models to accurately capture the multi-modal nature of future scene states. We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on the Cityscapes dataset. Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting.
1 INTRODUCTION
The ability to anticipate future scene states, which involves mapping one scene state to likely future states under uncertainty, is key for autonomous agents to successfully operate in the real world, e.g., to anticipate the movements of pedestrians and vehicles for autonomous vehicles. The future states of street scenes are inherently uncertain and the distribution of outcomes is often multi-modal. This is especially true for important classes like pedestrians. Recent works on anticipating street scenes (Luc et al., 2017; Jin et al., 2017; Seyed et al., 2018) do not systematically consider uncertainty.
Bayesian inference provides a theoretically well founded approach to capture both model and observation uncertainty but with considerable computational overhead. A recently proposed approach (Gal & Ghahramani, 2016b; Kendall & Gal, 2017) uses dropout to represent the posterior distribution of models and capture model uncertainty. This approach has enabled Bayesian inference with deep neural networks without additional computational overhead. Moreover, it allows the use of any existing deep neural network architecture with minor changes.
However, when the underlying data distribution is multimodal and the model set under consideration does not have explicit latent states/variables (as is the case for most popular deep neural network architectures), the approach of Gal & Ghahramani (2016b); Kendall & Gal (2017) is unable to recover the true model uncertainty (see Figure 1 and Osband (2016)). This is because this approach is known to conflate risk and uncertainty (Osband, 2016). This limits the accuracy of the models over a plain deterministic (non-Bayesian) approach. The main cause is the data log-likelihood maximization step
during optimization – for every data point the average likelihood assigned by all models is maximized. This forces every model to explain every data point well, pushing every model in the distribution to the mean. We address this problem through an objective leveraging synthetic likelihoods (Wood, 2010; Rosca et al., 2017) which relaxes the constraint on every model to explain every data point, thus encouraging diversity in the learned models to deal with multi-modality.
In this work: 1. We develop the first Bayesian approach to anticipate the multi-modal future of street scenes and demonstrate state-of-the-art accuracy on the diverse Cityscapes dataset without compromising on calibrated probabilities, 2. We propose a novel optimization scheme for dropout based Bayesian inference using synthetic likelihoods to encourage diversity and accurately capture model uncertainty, 3. Finally, we show that our approach is not limited to street scenes and generalizes across diverse tasks such as digit generation and precipitation forecasting.
2 RELATED WORK
Bayesian deep learning. Most popular deep learning models do not model uncertainty, only a mean model is learned. Bayesian methods (MacKay, 1992; Neal, 2012) on the other hand learn the posterior distribution of likely models. However, inference of the model posterior is computationally expensive. In (Gal & Ghahramani, 2016b) this problem is tackled using variational inference with an approximate Bernoulli distribution on the weights and the equivalence to dropout training is shown. This method is further extended to convolutional neural networks in (Gal & Ghahramani, 2016a). In (Kendall & Gal, 2017) this method is extended to tackle both model and observation uncertainty through heteroscedastic regression. The proposed method achieves state of the art results on segmentation estimation and depth regression tasks. This framework is used in Bhattacharyya et al. (2018a) to estimate future pedestrian trajectories. In contrast, Saatci & Wilson (2017) propose a (unconditional) Bayesian GAN framework for image generation using Hamiltonian Monte-Carlo based optimization with limited success. Moreover, conditional variants of GANs (Mirza & Osindero, 2014) are known to be especially prone to mode collapse. Therefore, we choose a dropout based Bayesian scheme and improve upon it through the use of synthetic likelihoods to tackle the issues with model uncertainty mentioned in the introduction.
Structured output prediction. Stochastic feedforward neural networks (SFNN) and conditional variational autoencoders (CVAE) have also shown success in modeling multimodal conditional distributions. SFNNs are difficult to optimize on large datasets (Tang & Salakhutdinov, 2013) due to the binary stochastic variables. Although there has been significant effort in improving training efficiency (Rezende et al., 2014; Gu et al., 2016), success has been partial. In contrast, CVAEs (Sohn et al., 2015) assume Gaussian stochastic variables, which are easier to optimize on large datasets using the re-parameterization trick. CVAEs have been successfully applied to a large variety of tasks, including conditional image generation (Bao et al., 2017), next frame synthesis (Xue et al., 2016), video generation (Babaeizadeh et al., 2018; Denton & Fergus, 2018), and trajectory prediction (Lee et al., 2017), among others. The basic CVAE framework is improved upon in (Bhattacharyya et al., 2018b) through the use of a multiple-sample objective. However, in comparison to Bayesian methods, careful architecture selection is required and experimental evidence of uncertainty calibration is missing. Calibrated uncertainties are important for autonomous/assisted driving, as users need to be able to express trust in the predictions for effective decision making. Therefore, we also adopt a Bayesian approach over SFNN or CVAE approaches.
Anticipating future scene states. In (Luc et al., 2017) the first method for predicting future scene segmentations was proposed. Their model is fully convolutional with prediction at multiple scales and is trained auto-regressively. Jin et al. (2017) improves upon this through the joint prediction of future scene segmentation and optical flow. Similar to Luc et al. (2017) a fully convolutional model is proposed, but the proposed model is based on the Resnet-101 (He et al., 2016) and has a single prediction scale. More recently, Luc et al. (2018) extended the model of Luc et al. (2017) to the related task of future instance segmentation prediction. These methods achieve promising results and establish the competence of fully convolutional models. In (Seyed et al., 2018) a Convolutional LSTM based model is proposed, further improving short-term results over Jin et al. (2017). However, fully convolutional architectures have performed well at a variety of related tasks, including segmentation estimation (Yu & Koltun, 2016; Zhao et al., 2017), RGB frame prediction
(Mathieu et al., 2016; Babaeizadeh et al., 2018) among others. Therefore, we adopt a standard ResNet based fully-convolutional architecture, while providing a full Bayesian treatment.
3 BAYESIAN MODELS FOR PREDICTION UNDER UNCERTAINTY
We phrase our models in a Bayesian framework, to jointly capture model (epistemic) and observation (aleatoric) uncertainty (Kendall & Gal, 2017). We begin with model uncertainty.
3.1 MODEL UNCERTAINTY
Let x ∈ X be the input (past) and y ∈ Y be the corresponding outcomes. Considering functions f : x ↦ y, we capture model uncertainty by learning the distribution p(f |X,Y) of generative models f likely to have generated our data {X,Y}. The complete predictive distribution of outcomes y is obtained by marginalizing over the posterior distribution,
p(y|x,X,Y) = ∫ p(y|x, f)p(f |X,Y)df . (1)
However, the integral in (1) is intractable. But, we can approximate it in two steps (Gal & Ghahramani, 2016b). First, we assume that our models can be described by a finite set of variables ω. Thus, we constrain the set of possible models to ones that can be described with ω. Now, (1) is equivalently,
p(y|x,X,Y) = ∫ p(y|x, ω)p(ω|X,Y)dω . (2)
Second, we assume an approximating variational distribution q(ω) of models which allows for efficient sampling. This results in the approximate distribution,
p(y|x,X,Y) ≈ p(y|x) = ∫ p(y|x, ω)q(ω)dω . (3)
For convolutional models, Gal & Ghahramani (2016a) proposed a Bernoulli variational distribution defined over each convolutional patch. The number of possible models is exponential in the number of patches. This number could be very large, making it difficult to optimize over this very large set of models. In contrast, in our approach (4), the number of possible models is exponential in the number of weight parameters, a much smaller number. In detail, we choose the set of convolutional kernels and the biases {(W1, b1), . . . , (WL, bL)} ∈ W of our model as the set of variables ω. Then, we define the following novel approximating Bernoulli variational distribution q(ω) independently over each element w^{i,j}_{k′,k} (correspondingly b_k) of the kernels and the biases at spatial locations {i, j},
q(W_K) = M_K ⊙ Z_K,   z^{i,j}_{k′,k} ∼ Bernoulli(p_K),   k′ = 1, . . . , |K′|,   k = 1, . . . , |K|.   (4)

Note, ⊙ denotes the Hadamard product, M_K are tuneable variational parameters, z^{i,j}_{k′,k} ∈ Z_K are the independent Bernoulli variables, p_K is a probability tensor equal to the size of the (bias) layer, and |K| (|K′|) is the number of kernels in the current (previous) layer. Here, p_K is chosen manually. Moreover, in contrast to Gal & Ghahramani (2016a), the same (sampled) kernel is applied at each spatial location, leading to the detection of the same features at varying spatial locations. Next, we describe how we capture observation uncertainty.
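A minimal PyTorch sketch of this weight-dropout scheme for a single convolutional layer is given below: one Bernoulli variable is sampled per kernel weight and per bias, and the same masked kernel is applied at every spatial location. The class name, the initialization, and the omission of any rescaling of the kept weights are assumptions for illustration, not the exact implementation used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropConv2d(nn.Module):
    """One Bernoulli mask element per kernel weight and per bias; the same masked
    kernel is applied at every spatial location of the input."""
    def __init__(self, in_ch, out_ch, kernel_size=3, p_keep=0.8):
        super().__init__()
        self.weight = nn.Parameter(0.05 * torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.p_keep = p_keep  # the dropout rate is 1 - p_keep

    def forward(self, x):
        w_mask = torch.bernoulli(torch.full_like(self.weight, self.p_keep))
        b_mask = torch.bernoulli(torch.full_like(self.bias, self.p_keep))
        return F.conv2d(x, self.weight * w_mask, self.bias * b_mask, padding=1)

# Each forward pass corresponds to one model drawn from q(omega).
layer = WeightDropConv2d(3, 8)
x = torch.randn(1, 3, 32, 32)
y1, y2 = layer(x), layer(x)  # two different sampled models
print(y1.shape, torch.allclose(y1, y2))  # same shape, (almost surely) different outputs
```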
3.2 OBSERVATION UNCERTAINTY
Observation uncertainty can be captured by assuming an appropriate distribution of observation noise and predicting the sufficient statistics of the distribution (Kendall & Gal, 2017). Here, we assume a Gaussian distribution with a diagonal covariance matrix at each pixel and predict the mean vector µ^{i,j} and covariance matrix σ^{i,j} of the distribution. In detail, the predictive distribution of a generative model ω̂ ∼ q(ω) at a pixel position {i, j} is,

p^{i,j}(y|x, ω̂) = N( (µ^{i,j} | x, ω̂), (σ^{i,j} | x, ω̂) ).   (5)
We can sample from the predictive distribution p(y|x) (3) by first sampling the weight matrices ω from (4) and then sampling from the Gaussian distribution in (5). We perform the last step by the linear transformation of a zero mean unit diagonal variance Gaussian, ensuring differentiability,
ŷ^{i,j} ∼ µ^{i,j}(x|ω̂) + z × σ^{i,j}(x|ω̂), where p(z) is N(0, I) and ω̂ ∼ q(ω).   (6)

Here, ŷ^{i,j} is the sample drawn at pixel position {i, j} through the linear transformation of z (a vector) with the predicted mean µ^{i,j} and variance σ^{i,j}. In the case of street scenes, y^{i,j} is a class-confidence vector, and a sample of the final class probabilities is obtained by pushing ŷ^{i,j} through a softmax.
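The sampling step of (6) for one drawn model can be sketched as below: the predicted per-pixel mean and variance are combined with standard normal noise and pushed through a softmax. The softplus used to keep the standard deviation positive and the tensor shapes are assumptions for illustration.

```python
import torch

# mu, sigma: per-pixel mean and standard deviation of the class-confidence vector
# predicted by one sampled model (shape [classes, H, W]); placeholders shown here.
C, H, W = 20, 4, 4
mu = torch.randn(C, H, W)
sigma = torch.nn.functional.softplus(torch.randn(C, H, W))  # keep the std positive

z = torch.randn_like(mu)                   # z ~ N(0, I), reparameterization noise
y_hat = mu + z * sigma                     # Eq. (6): one sample of class confidences
class_probs = torch.softmax(y_hat, dim=0)  # per-pixel class probabilities
print(class_probs.sum(dim=0))              # sums to 1 at every pixel
```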
3.3 TRAINING
For a good variational approximation (3), our approximating variational distribution of generative models q(ω) should be close to the true posterior p(ω|X,Y). Therefore, we minimize the KL divergence between these two distributions. As shown in Gal & Ghahramani (2016b;a); Kendall & Gal (2017) the KL divergence is given by (over i.i.d data points),
KL(q(ω) || p(ω|X,Y)) ∝ KL(q(ω) || p(ω)) − ∫ q(ω) log p(Y|X, ω) dω
= KL(q(ω) || p(ω)) − ∫ q(ω) ( ∫ log p(y|x, ω) d(x, y) ) dω
= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) log p(y|x, ω) dω ) d(x, y).   (7)
The log-likelihood term at the right of (7) considers every model for every data point. This imposes the constraint that every data point must be explained well by every model. However, if the data distribution (x, y) is multi-modal, this would push every model to the mean of the multi-modal distribution (as in Figure 1, where the only way for models to explain both modes is to converge to the mean). This discourages diversity in the learned modes. In case of multi-modal data, we would not be able to recover all likely models, thus hindering our ability to fully capture model uncertainty. The models would be forced to explain the data variation as observation noise (Osband, 2016), thus conflating model and observation uncertainty. We propose to mitigate this problem through the use of an approximate objective using synthetic likelihoods (Wood, 2010; Rosca et al., 2017) – obtained from a classifier. The classifier estimates the likelihood based on whether the models ω̂ ∼ q(ω) explain (generate) data samples likely under the true data distribution p(y|x). This removes the constraint on models to explain every data point – it only requires the explained (generated) data points to be likely under the data distribution. Thus, this allows models ω̂ ∼ q(ω) to be diverse and deal with multi-modality. Next, we reformulate the KL divergence estimate of (7) to a likelihood ratio form which allows us to use a classifier to estimate (synthetic) likelihoods (also see Appendix),
= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) log p(y|x, ω) dω ) d(x, y)
= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) ( log [p(y|x, ω) / p(y|x)] + log p(y|x) ) dω ) d(x, y)
∝ KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(y|x, ω) / p(y|x)] dω d(x, y).   (8)
In the second step of (8), we divide and multiply the probability assigned to a data sample by a model p(y|x, ω) by the true conditional probability p(y|x) to obtain a likelihood ratio. We can estimate the KL divergence by equivalently estimating this ratio rather than the true likelihood. In order to (synthetically) estimate this likelihood ratio, let us introduce the variable θ to denote, p(y|x, θ = 1) the probability assigned by our model ω to a data sample (x, y) and p(y|x, θ = 0) the true probability of the sample. Therefore, the ratio in the last term of (8) is,
= KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(y|x, θ = 1) / p(y|x, θ = 0)] dω d(x, y)
= KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(θ = 1|x, y) / p(θ = 0|x, y)] dω d(x, y)   (using Bayes' theorem)
= KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(θ = 1|x, y) / (1 − p(θ = 1|x, y))] dω d(x, y).   (9)
In the last step of (9) we use the fact that the events θ = 1 and θ = 0 are mutually exclusive. We can approximate the ratio p(θ = 1|x, y) / (1 − p(θ = 1|x, y)) by jointly learning a discriminator D(x, ŷ) that can distinguish between samples of the true data distribution and samples (x, ŷ) generated by the model ω, which provides a synthetic estimate of the likelihood, and equivalently integrating directly over (x, ŷ),
≈ KL(q(ω) || p(ω)) − ∫∫ q(ω) log [D(x, ŷ) / (1 − D(x, ŷ))] dω d(x, ŷ).   (10)
Note that the synthetic likelihood D(x, ŷ) / (1 − D(x, ŷ)) is independent of any specific pair (x, y) of the true data distribution (unlike the log-likelihood term in (7)); its value depends only upon whether the data point (x, ŷ) generated by the model ω is likely under the true data distribution p(y|x). Therefore, the models ω have to only generate samples (x, ŷ) likely under the true data distribution. The models need not explain every data point equally well. Therefore, we do not push the models ω to the mean, thus allowing them to be diverse and allowing us to better capture uncertainty.
Empirically, we observe that a hybrid log-likelihood term using both the log-likelihood terms of (10) and (7) with regularization parameters α and β (with α ≥ β) stabilizes the training process,
α ∫∫ q(ω) log [D(x, ŷ) / (1 − D(x, ŷ))] dω d(x, ŷ) + β ∫∫ q(ω) log p(y|x, ω) dω d(x, y).   (11)
Note that, although we do not explicitly require the posterior model distribution to explain all data points, due to the exponential number of models afforded by dropout and the joint optimization (min-max game) of the discriminator, empirically we see very diverse models explaining most data points. Moreover, empirically we also see that predicted probabilities remain calibrated. Next, we describe the architecture details of our generative models ω and the discriminator D(x, ŷ).
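A sketch of how the hybrid objective (11) can be implemented with a jointly trained discriminator is given below, assuming D is the sigmoid of the discriminator logit so that the logit itself equals log D/(1 − D); the sign conventions, the real/fake labelling, and the placeholder tensors are assumptions rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, log_lik, alpha=1.0, beta=1.0):
    """Hybrid objective of Eq. (11) for one sampled model. d_fake_logits are
    discriminator logits for generated pairs (x, y_hat); with D = sigmoid(logit),
    the logit itself equals log D/(1-D). log_lik is the data log-likelihood term."""
    synthetic = d_fake_logits.mean()
    return -(alpha * synthetic + beta * log_lik.mean())

def discriminator_loss(d_real_logits, d_fake_logits):
    """Standard binary cross-entropy for the jointly trained discriminator."""
    real = F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits))
    fake = F.binary_cross_entropy_with_logits(d_fake_logits, torch.zeros_like(d_fake_logits))
    return real + fake

# Toy usage with placeholder tensors standing in for network outputs.
d_real, d_fake = torch.randn(8, 1), torch.randn(8, 1)
log_lik = torch.randn(8)
print(generator_loss(d_fake, log_lik), discriminator_loss(d_real, d_fake))
```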
3.4 MODEL ARCHITECTURE FOR STREET SCENE PREDICTION
The architecture of our ResNet based generative models in our model distribution q(ω) is shown in Figure 2. The generative model takes as input a sequence of past segmentation class-confidences s_p and the past and future vehicle odometry o_p, o_f (x = {s_p, o_p, o_f}), and produces the class-confidences at the next time-step as output. The additional conditioning on vehicle odometry is because the sequences are recorded in the frame of reference of a moving vehicle, and therefore the observed future sequence depends upon the vehicle trajectory. We use recursion to efficiently predict a sequence of future scene segmentations y = {s_f}. The discriminator takes s_f as input and classifies whether it was produced by our model or is from the true data distribution.
In detail, the generative model architecture consists of a fully convolutional encoder-decoder pair. This architecture builds upon the prior work of Luc et al. (2017); Jin et al. (2017), however with key differences. In Luc et al. (2017), each of the two levels of the model architecture consists of only five convolutional layers. In contrast, our model consists of one level with five convolutional blocks. The encoder contains three residual blocks with max-pooling in between, and the decoder consists of a residual and a convolutional block with up-sampling in between. We double the size of the blocks following max-pooling in order to preserve resolution. This leads to a much deeper model with fifteen convolutional layers and constant spatial convolutional kernel sizes. This deep model with pooling creates a wide receptive field and helps better capture spatio-temporal dependencies. The residual connections help in the optimization of such a deep model. Computational resources allowing, it is possible to add more levels to our model. In Jin et al. (2017) a model is considered which uses a Res101-FCN as an encoder. Although this model has significantly more layers, it also introduces a large amount of pooling. This leads to loss of resolution and spatial information, hence degrading performance. Our discriminator model consists of six convolutional layers with max-pooling layers in-between, followed by two fully connected layers. Finally, in Appendix E we provide layer-wise details and discuss the reduction of the number of models in q(ω) through the use of Weight Dropout (4) for our architecture of generators.
4 EXPERIMENTS
Next, we evaluate our approach on MNIST digit generation and street scene anticipation on Cityscapes. We further evaluate our model on 2D data (Figure 1) and precipitation forecasting in the Appendix.
4.1 MNIST DIGIT GENERATION
Here, we aim to generate the full MNIST digit given only the lower left quarter of the digit. This task serves as an ideal starting point, as in many cases there are multiple likely completions given the lower left quarter digit, e.g. 5 and 3. Therefore, the learned model distribution q(ω) should contain likely models corresponding to these completions. We use a fully connected generator with 6000-4000-2000 hidden units with 50% dropout probability. The discriminator has 1000-1000 hidden units with leaky ReLU non-linearities. We set β = 10^-4 for the first 4 epochs and then reduce it to 0, to provide stability during the initial epochs. We compare our synthetic likelihood based approach (Bayes-SL) with, 1. A non-Bayesian mean model, 2. A standard Bayesian approach (Bayes-S), 3. A Conditional Variational Autoencoder (CVAE) (architecture as in Sohn et al. (2015)). As evaluation metric we consider the (oracle) Top-k% accuracy (Lee et al., 2017). We use a standard AlexNet based classifier to measure whether the best prediction corresponds to the ground-truth class – identifies the correct mode – in Table 3 (right) over 10 splits of the MNIST test set. We sample 10 models from our learned distribution and consider the best model. We see that our Bayes-SL performs best, even outperforming the CVAE model. In the qualitative examples in Table 3 (left), we see that generations from models ω̂ ∼ q(ω) sampled from our learned model distribution correspond to clearly defined digits (also in comparison to Figure 3 in Sohn et al. (2015)). In contrast, we see that the Bayes-S model produces blurry digits. All sampled models have been pushed to the mean and show little advantage over a mean model.
4.2 CITYSCAPES STREET SCENE ANTICIPATION
Next, we evaluate our approach on the Cityscapes dataset – anticipating scenes more than 0.5 seconds into the future. The street scenes already display considerable multi-modality at this time horizon.
Evaluation metrics and baselines. We use PSPNet (Zhao et al., 2017) to segment the full training sequences, as only the 20th frame has ground-truth annotations. We always use the annotated 20th frame of the validation sequences for evaluation, using the standard mean Intersection-over-Union (mIoU) and the per-pixel (negative) conditional log-likelihood (CLL) metrics. We consider the following baselines for comparison to our ResNet based (architecture in Figure 2) Bayesian (Bayes-WD-SL) model with weight dropout and trained using synthetic likelihoods: 1. Copying the last seen input; 2. A non-Bayesian (ResG-Mean) version; 3. A Bayesian version with standard patch dropout (Bayes-S); 4. A Bayesian version with our weight dropout (Bayes-WD). Note that the combination of ResG-Mean with an adversarial loss did not lead to improved results (similar observations were made in Luc et al. (2017)). We use grid search to set the dropout rate (in (4)) to 0.15 for the Bayes-S and 0.20 for the Bayes-WD(-SL) models. We set α, β = 1 for our Bayes-WD-SL model. We train all models using Adam (Kingma & Ba, 2015) for 50 epochs with batch size 8. We use one sample to train the Bayesian methods, as in Gal & Ghahramani (2016a), and use 100 samples during evaluation.
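A minimal sketch of this Monte-Carlo evaluation is given below; the tiny stand-in network (which uses ordinary channel dropout rather than the weight dropout of (4)) and the reduced sample count are placeholders that keep the example runnable.

```python
import torch

def mc_mean_prediction(model, x, n_samples=100):
    """Keep dropout active, draw n_samples models from q(omega), and average their
    per-pixel class probabilities (the "mean" prediction reported in the tables)."""
    model.train()  # keeps the dropout masks stochastic at test time
    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            probs.append(torch.softmax(model(x), dim=1))
    return torch.stack(probs).mean(dim=0)

# Usage with any segmentation network producing [batch, classes, H, W] logits;
# the tiny stand-in below uses ordinary channel dropout only to keep the sketch runnable.
net = torch.nn.Sequential(torch.nn.Conv2d(3, 20, 3, padding=1), torch.nn.Dropout2d(0.2))
x = torch.randn(1, 3, 64, 64)
mean_probs = mc_mean_prediction(net, x, n_samples=10)
print(mean_probs.shape)  # torch.Size([1, 20, 64, 64])
```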
1. What is the main contribution of the paper regarding Bayesian inference approach?
2. How does the proposed method improve upon the state of the art in capturing multi-modality?
3. What are the strengths of the paper in terms of clarity and descriptions?
4. Do you have any questions or concerns regarding the training process, input features, and figure explanations?
5. Are there any typos or grammatical errors in the paper that need correction? | Review | Review
The submission considers a disadvantage of a standard dropout-based Bayesian inference approach, namely the pessimization of model uncertainty by means of maximizing the average likelihood for every data sample. The formulation by Gal & Ghahramani is improved upon two-fold: via simplified modeling of the approximating variational distribution (on kernel/bias instead of on patch level), and by using a discriminator (i.e. classifier) for providing a "synthetic" likelihood estimate. The latter relaxes the assumptions such that not every data sample needs to be explained equally well by the models.
Results are demonstrated on a variety of tasks, most prominently street scene forecasting, but also digit completion and precipitation forecasting. The proposed method improves upon the state of the art, while more strongly capturing multi-modality than previous methods.
To the best of my knowledge, this is the first work w.r.t. future prediction with a principled treatment of uncertainty. I find the contributions significant, well described, and the intuition behind them is conveyed convincingly. The experiments in Section 4 (and appendix) yield convincing results on a range of problems.
Clarity of the submission is overall good; Sections 3.1-3.3 treat the contributions in sufficient detail. Descriptions of both generator and discriminator for street scenes (Section 3.4) are sufficiently clear, although I would like to see a more detailed description of the training process (how many iterations for each, learning rate, etc.?) for better reproducibility.
In Section 3.4, it is not completely clear to me why the future vehicle odometry is provided as an input, in addition to past odometry and past segmentation confidences. I assume this would not be present in a real-world scenario? I also have to admit that I fail to understand Figure 4; at least I cannot see any truly significant differences, unless I heavily zoom in on screen.
Small notes:
- Is the 'y' on the right side of Equation (5) a typo? (should this be 'x'?)
- The second to last sentence at the bottom of page 6 ("Always the comparison...") suffers from weird grammar |
ICLR | Title
Bayesian Prediction of Future Street Scenes using Synthetic Likelihoods
Abstract
For autonomous agents to successfully operate in the real world, the ability to anticipate future scene states is a key competence. In real-world scenarios, future states become increasingly uncertain and multi-modal, particularly on long time horizons. Dropout based Bayesian inference provides a computationally tractable, theoretically well grounded approach to learn likely hypotheses/models to deal with uncertain futures and make predictions that correspond well to observations – are well calibrated. However, it turns out that such approaches fall short to capture complex real-world scenes, even falling behind in accuracy when compared to the plain deterministic approaches. This is because the used log-likelihood estimate discourages diversity. In this work, we propose a novel Bayesian formulation for anticipating future scene states which leverages synthetic likelihoods that encourage the learning of diverse models to accurately capture the multi-modal nature of future scene states. We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on Cityscapes dataset. Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting.
1 INTRODUCTION
The ability to anticipate future scene states which involves mapping one scene state to likely future states under uncertainty is key for autonomous agents to successfully operate in the real world e.g., to anticipate the movements of pedestrians and vehicles for autonomous vehicles. The future states of street scenes are inherently uncertain and the distribution of outcomes is often multi-modal. This is especially true for important classes like pedestrians. Recent works on anticipating street scenes (Luc et al., 2017; Jin et al., 2017; Seyed et al., 2018) do not systematically consider uncertainty.
Bayesian inference provides a theoretically well founded approach to capture both model and observation uncertainty but with considerable computational overhead. A recently proposed approach (Gal & Ghahramani, 2016b; Kendall & Gal, 2017) uses dropout to represent the posterior distribution of models and capture model uncertainty. This approach has enabled Bayesian inference with deep neural networks without additional computational overhead. Moreover, it allows the use of any existing deep neural network architecture with minor changes.
However, when the underlying data distribution is multimodal and the model set under consideration does not have explicit latent states/variables (as is the case for most popular deep neural network architectures), the approach of Gal & Ghahramani (2016b); Kendall & Gal (2017) is unable to recover the true model uncertainty (see Figure 1 and Osband (2016)). This is because this approach is known to conflate risk and uncertainty (Osband, 2016). This limits the accuracy of the models over a plain deterministic (non-Bayesian) approach. The main cause is the data log-likelihood maximization step
during optimization – for every data point the average likelihood assigned by all models is maximized. This forces every model to explain every data point well, pushing every model in the distribution to the mean. We address this problem through an objective leveraging synthetic likelihoods (Wood, 2010; Rosca et al., 2017) which relaxes the constraint on every model to explain every data point, thus encouraging diversity in the learned models to deal with multi-modality.
In this work: 1. We develop the first Bayesian approach to anticipate the multi-modal future of street scenes and demonstrate state-of-the-art accuracy on the diverse Cityscapes dataset without compromising on calibrated probabilities, 2. We propose a novel optimization scheme for dropout based Bayesian inference using synthetic likelihoods to encourage diversity and accurately capture model uncertainty, 3. Finally, we show that our approach is not limited to street scenes and generalizes across diverse tasks such as digit generation and precipitation forecasting.
2 RELATED WORK
Bayesian deep learning. Most popular deep learning models do not model uncertainty, only a mean model is learned. Bayesian methods (MacKay, 1992; Neal, 2012) on the other hand learn the posterior distribution of likely models. However, inference of the model posterior is computationally expensive. In (Gal & Ghahramani, 2016b) this problem is tackled using variational inference with an approximate Bernoulli distribution on the weights and the equivalence to dropout training is shown. This method is further extended to convolutional neural networks in (Gal & Ghahramani, 2016a). In (Kendall & Gal, 2017) this method is extended to tackle both model and observation uncertainty through heteroscedastic regression. The proposed method achieves state of the art results on segmentation estimation and depth regression tasks. This framework is used in Bhattacharyya et al. (2018a) to estimate future pedestrian trajectories. In contrast, Saatci & Wilson (2017) propose a (unconditional) Bayesian GAN framework for image generation using Hamiltonian Monte-Carlo based optimization with limited success. Moreover, conditional variants of GANs (Mirza & Osindero, 2014) are known to be especially prone to mode collapse. Therefore, we choose a dropout based Bayesian scheme and improve upon it through the use of synthetic likelihoods to tackle the issues with model uncertainty mentioned in the introduction.
Structured output prediction. Stochastic feedforward neural networks (SFNN) and conditional variational autoencoders (CVAE) have also shown success in modeling multimodal conditional distributions. SFNNs are difficult to optimize on large datasets (Tang & Salakhutdinov, 2013) due to the binary stochastic variables. Although there has been significant effort in improving training efficiency (Rezende et al., 2014; Gu et al., 2016), success has been partial. In contrast, CVAEs (Sohn et al., 2015) assume Gaussian stochastic variables, which are easier to optimize on large datasets using the re-parameterization trick. CVAEs have been successfully applied to a large variety of tasks, including conditional image generation (Bao et al., 2017), next frame synthesis (Xue et al., 2016), video generation (Babaeizadeh et al., 2018; Denton & Fergus, 2018), and trajectory prediction (Lee et al., 2017), among others. The basic CVAE framework is improved upon in (Bhattacharyya et al., 2018b) through the use of a multiple-sample objective. However, in comparison to Bayesian methods, careful architecture selection is required and experimental evidence of uncertainty calibration is missing. Calibrated uncertainties are important for autonomous/assisted driving, as users need to be able to express trust in the predictions for effective decision making. Therefore, we also adopt a Bayesian approach over SFNN or CVAE approaches.
Anticipating future scene states. In (Luc et al., 2017) the first method for predicting future scene segmentations was proposed. Their model is fully convolutional with prediction at multiple scales and is trained auto-regressively. Jin et al. (2017) improves upon this through the joint prediction of future scene segmentation and optical flow. Similar to Luc et al. (2017) a fully convolutional model is proposed, but the proposed model is based on the Resnet-101 (He et al., 2016) and has a single prediction scale. More recently, Luc et al. (2018) has extended the model of Luc et al. (2017) to the related task of future instance segmentation prediction. These methods achieve promising results and establish the competence of fully convolutional models. In (Seyed et al., 2018) a Convolutional LSTM based model is proposed, further improving short-term results over Jin et al. (2017). However, fully convolutional architectures have performed well at a variety of related tasks, including segmentation estimation (Yu & Koltun, 2016; Zhao et al., 2017), RGB frame prediction
(Mathieu et al., 2016; Babaeizadeh et al., 2018) among others. Therefore, we adopt a standard ResNet based fully-convolutional architecture, while providing a full Bayesian treatment.
3 BAYESIAN MODELS FOR PREDICTION UNDER UNCERTAINTY
We phrase our models in a Bayesian framework, to jointly capture model (epistemic) and observation (aleatoric) uncertainty (Kendall & Gal, 2017). We begin with model uncertainty.
3.1 MODEL UNCERTAINTY
Let x ∈ X be the input (past) and y ∈ Y be the corresponding outcomes. Consider f : x ↦ y, we capture model uncertainty by learning the distribution p(f |X,Y) of generative models f , likely to have generated our data {X,Y}. The complete predictive distribution of outcomes y is obtained by marginalizing over the posterior distribution,
p(y|x,X,Y) = ∫ p(y|x, f)p(f |X,Y)df . (1)
However, the integral in (1) is intractable. But, we can approximate it in two steps (Gal & Ghahramani, 2016b). First, we assume that our models can be described by a finite set of variables ω. Thus, we constrain the set of possible models to ones that can be described with ω. Now, (1) is equivalently,
p(y|x,X,Y) = ∫ p(y|x, ω)p(ω|X,Y)dω . (2)
Second, we assume an approximating variational distribution q(ω) of models which allows for efficient sampling. This results in the approximate distribution,
p(y|x,X,Y) ≈ p(y|x) = ∫ p(y|x, ω)q(ω)dω . (3)
For convolutional models, Gal & Ghahramani (2016a) proposed a Bernoulli variational distribution defined over each convolutional patch. The number of possible models is exponential in the number of patches. This number could be very large, making it difficult to optimize over this very large set of models. In contrast, in our approach (4), the number of possible models is exponential in the number of weight parameters, a much smaller number. In detail, we choose the set of convolutional kernels and the biases {(W_1, b_1), . . . , (W_L, b_L)} ∈ W of our model as the set of variables ω. Then, we define the following novel approximating Bernoulli variational distribution q(ω) independently over each element w^{i,j}_{k′,k} (correspondingly b_k) of the kernels and the biases at spatial locations {i, j},
q(W_K) = M_K ⊙ Z_K, z^{i,j}_{k′,k} = Bernoulli(p_K), k′ = 1, . . . , |K′|, k = 1, . . . , |K|. (4)
Note, ⊙ denotes the Hadamard product, M_K are tunable variational parameters, z^{i,j}_{k′,k} ∈ Z_K are the independent Bernoulli variables, p_K is a probability tensor equal to the size of the (bias) layer, |K| (|K′|) is the number of kernels in the current (previous) layer. Here, p_K is chosen manually. Moreover, in contrast to Gal & Ghahramani (2016a), the same (sampled) kernel is applied at each spatial location, leading to the detection of the same features at varying spatial locations. Next, we describe how we capture observation uncertainty.
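To make the weight-dropout construction in (4) concrete, the following PyTorch sketch samples one Bernoulli mask per kernel/bias element and applies the same masked kernel at every spatial location; the class name, the fixed keep probability and the initialization are illustrative choices of ours, not taken from the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropoutConv2d(nn.Module):
    """Conv layer whose kernel/bias elements are dropped via Bernoulli masks,
    so the same sampled kernel is shared across all spatial locations."""
    def __init__(self, in_ch, out_ch, kernel_size, p_keep=0.8, padding=1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.p_keep = p_keep        # probability of keeping a weight (1 - dropout rate)
        self.padding = padding

    def forward(self, x):
        # Sample one Bernoulli mask per weight/bias element (one draw from q(omega)).
        w_mask = torch.bernoulli(torch.full_like(self.weight, self.p_keep))
        b_mask = torch.bernoulli(torch.full_like(self.bias, self.p_keep))
        return F.conv2d(x, self.weight * w_mask, self.bias * b_mask, padding=self.padding)
```

Keeping the masks active at test time then yields samples from the approximate model posterior.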
3.2 OBSERVATION UNCERTAINTY
Observation uncertainty can be captured by assuming an appropriate distribution of observation noise and predicting the sufficient statistics of the distribution (Kendall & Gal, 2017). Here, we assume a Gaussian distribution with diagonal covariance matrix at each pixel and predict the mean vector µ_{i,j} and co-variance matrix σ_{i,j} of the distribution. In detail, the predictive distribution of a generative model drawn from ω̂ ∼ q(ω) at a pixel position {i, j} is,
p_{i,j}(y|x, ω̂) = N((µ_{i,j}|x, ω̂), (σ_{i,j}|x, ω̂)). (5)
We can sample from the predictive distribution p(y|x) (3) by first sampling the weight matrices ω from (4) and then sampling from the Gaussian distribution in (5). We perform the last step by the linear transformation of a zero mean unit diagonal variance Gaussian, ensuring differentiability,
ŷ_{i,j} ∼ µ_{i,j}(x|ω̂) + z × σ_{i,j}(x|ω̂), where p(z) is N(0, I) and ω̂ ∼ q(ω). (6)
where ŷ_{i,j} is the sample drawn at a pixel position {i, j} through the linear transformation of z (a vector) with the predicted mean µ_{i,j} and variance σ_{i,j}. In the case of street scenes, y_{i,j} is a class-confidence vector, and a sample of final class probabilities is obtained by pushing ŷ_{i,j} through a softmax.
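For illustration, a minimal sketch of (5)–(6), assuming the network head outputs per-pixel mean and log-variance maps; all variable names are ours.

```python
import torch

def sample_prediction(mu, log_var):
    """Draw one sample from the per-pixel Gaussian N(mu, sigma^2) via the
    reparameterization in (6); for street scenes the sample is pushed
    through a softmax over the class dimension."""
    sigma = torch.exp(0.5 * log_var)      # predicted per-pixel standard deviation
    z = torch.randn_like(mu)              # z ~ N(0, I)
    y_hat = mu + z * sigma                # differentiable sample
    return torch.softmax(y_hat, dim=1)    # per-pixel class probabilities

# mu, log_var: (batch, num_classes, H, W) tensors produced by one sampled model.
```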
3.3 TRAINING
For a good variational approximation (3), our approximating variational distribution of generative models q(ω) should be close to the true posterior p(ω|X,Y). Therefore, we minimize the KL divergence between these two distributions. As shown in Gal & Ghahramani (2016b;a); Kendall & Gal (2017) the KL divergence is given by (over i.i.d data points),
KL(q(ω) || p(ω|X,Y)) ∝ KL(q(ω) || p(ω)) − ∫ q(ω) log p(Y|X, ω) dω
= KL(q(ω) || p(ω)) − ∫ q(ω) ( ∫ log p(y|x, ω) d(x, y) ) dω
= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) log p(y|x, ω) dω ) d(x, y). (7)
The log-likelihood term at the right of (7) considers every model for every data point. This imposes the constraint that every data point must be explained well by every model. However, if the data distribution (x, y) is multi-modal, this would push every model to the mean of the multi-modal distribution (as in Figure 1, where the only way for models to explain both modes is to converge to the mean). This discourages diversity in the learned models. In the case of multi-modal data, we would not be able to recover all likely models, thus hindering our ability to fully capture model uncertainty. The models would be forced to explain the data variation as observation noise (Osband, 2016), thus conflating model and observation uncertainty. We propose to mitigate this problem through the use of an approximate objective using synthetic likelihoods (Wood, 2010; Rosca et al., 2017) – obtained from a classifier. The classifier estimates the likelihood based on whether the models ω̂ ∼ q(ω) explain (generate) data samples likely under the true data distribution p(y|x). This removes the constraint on models to explain every data point – it only requires the explained (generated) data points to be likely under the data distribution. Thus, this allows models ω̂ ∼ q(ω) to be diverse and deal with multi-modality. Next, we reformulate the KL divergence estimate of (7) into a likelihood-ratio form which allows us to use a classifier to estimate (synthetic) likelihoods (see also the Appendix),
= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) log p(y|x, ω) dω ) d(x, y)
= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) ( log [p(y|x, ω) / p(y|x)] + log p(y|x) ) dω ) d(x, y)
∝ KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(y|x, ω) / p(y|x)] dω d(x, y). (8)
In the second step of (8), we divide and multiply the probability p(y|x, ω) assigned to a data sample by a model by the true conditional probability p(y|x) to obtain a likelihood ratio. We can estimate the KL divergence by equivalently estimating this ratio rather than the true likelihood. In order to (synthetically) estimate this likelihood ratio, let us introduce the variable θ and denote by p(y|x, θ = 1) the probability assigned by our model ω to a data sample (x, y), and by p(y|x, θ = 0) the true probability of the sample. Therefore, the ratio in the last term of (8) is,
= KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(y|x, θ = 1) / p(y|x, θ = 0)] dω d(x, y)
= KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(θ = 1|x, y) / p(θ = 0|x, y)] dω d(x, y) (using Bayes theorem)
= KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(θ = 1|x, y) / (1 − p(θ = 1|x, y))] dω d(x, y). (9)
In the last step of (9) we use the fact that the events θ = 1 and θ = 0 are mutually exclusive. We can approximate the ratio p(θ = 1|x, y) / (1 − p(θ = 1|x, y)) by jointly learning a discriminator D(x, ŷ) that can distinguish between samples of the true data distribution and samples (x, ŷ) generated by the model ω, which provides a synthetic estimate of the likelihood, and equivalently integrating directly over (x, ŷ),
≈ KL(q(ω) || p(ω)) − ∫∫ q(ω) log [D(x, ŷ) / (1 − D(x, ŷ))] dω d(x, ŷ). (10)
Note that the synthetic likelihood D(x, ŷ)/(1 − D(x, ŷ)) is independent of any specific pair (x, y) of the true data distribution (unlike the log-likelihood term in (7)); its value depends only upon whether the data point (x, ŷ) generated by the model ω is likely under the true data distribution p(y|x). Therefore, the models ω only have to generate samples (x, ŷ) likely under the true data distribution. The models need not explain every data point equally well. Therefore, we do not push the models ω to the mean, thus allowing them to be diverse and allowing us to better capture uncertainty.
Empirically, we observe that a hybrid log-likelihood term using both the log-likelihood terms of (10) and (7) with regularization parameters α and β (with α ≥ β) stabilizes the training process,
α ∫∫ q(ω) log [D(x, ŷ) / (1 − D(x, ŷ))] dω d(x, ŷ) + β ∫∫ q(ω) log p(y|x, ω) dω d(x, y). (11)
Note that, although we do not explicitly require the posterior model distribution to explain all data points, due to the exponential number of models afforded by dropout and the joint optimization (min-max game) of the discriminator, empirically we see very diverse models explaining most data points. Moreover, empirically we also see that predicted probabilities remain calibrated. Next, we describe the architecture details of our generative models ω and the discriminator D(x, ŷ).
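A condensed sketch of one generator update under the hybrid objective (11) is given below; the discriminator update, the KL/regularization term and the architecture are omitted, the data log-likelihood is replaced by a simple Gaussian stand-in, and all names are placeholders rather than the actual training code.

```python
import torch
import torch.nn.functional as F

def generator_step(gen, disc, x, y, opt, alpha=1.0, beta=1.0, eps=1e-6):
    """One update of a dropout (Bayesian) generator with the hybrid loss (11):
    a synthetic-likelihood term log(D / (1 - D)) plus a data log-likelihood term."""
    y_hat = gen(x)                                  # draw from one model omega ~ q(omega)
    d_fake = disc(x, y_hat).clamp(eps, 1 - eps)     # discriminator output, assumed in (0, 1)
    synth_ll = torch.log(d_fake / (1 - d_fake)).mean()
    data_ll = -F.mse_loss(y_hat, y)                 # stand-in Gaussian log-likelihood
    loss = -(alpha * synth_ll + beta * data_ll)     # maximize both terms
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```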
3.4 MODEL ARCHITECTURE FOR STREET SCENE PREDICTION
The architecture of our ResNet based generative models in our model distribution q(ω) is shown in Figure 2. The generative model takes as input a sequence of past segmentation class-confidences s_p and the past and future vehicle odometry o_p, o_f (x = {s_p, o_p, o_f}) and produces the class-confidences at the next time-step as output. The additional conditioning on vehicle odometry is because the sequences are recorded in the frame of reference of a moving vehicle and therefore the future observed sequence is dependent upon the vehicle trajectory. We use recursion to efficiently predict a sequence of future scene segmentations y = {s_f}. The discriminator takes as input s_f and classifies whether it was produced by our model or is from the true data distribution.
In detail, the generative model architecture consists of a fully convolutional encoder-decoder pair. This architecture builds upon the prior work of Luc et al. (2017); Jin et al. (2017), however with key differences. In Luc et al. (2017), each of the two levels of the model architecture consists of only five convolutional layers. In contrast, our model consists of one level with five convolutional blocks. The encoder contains three residual blocks with max-pooling in between and the decoder consists of a residual and a convolutional
block with up-sampling in between. We double the size of the blocks following max-pooling in order to preserve resolution. This leads to a much deeper model with fifteen convolutional layers, with constant spatial convolutional kernel sizes. This deep model with pooling creates a wide receptive field and helps better capture spatio-temporal dependencies. The residual connections help in the optimization of such a deep model. Computational resources allowing, it is possible to add more levels to our model. In Jin et al. (2017) a model is considered which uses a Res101-FCN as an encoder. Although this model has significantly more layers, it also introduces a large amount of pooling. This leads to loss of resolution and spatial information, hence degrading performance. Our discriminator model consists of six convolutional layers with max-pooling layers in-between, followed by two fully connected layers. Finally, in Appendix E we provide layer-wise details and discuss the reduction of the number of models in q(ω) through the use of Weight Dropout (4) for our architecture of generators.
4 EXPERIMENTS
Next, we evaluate our approach on MNIST digit generation and street scene anticipation on Cityscapes. We further evaluate our model on 2D data (Figure 1) and precipitation forecasting in the Appendix.
4.1 MNIST DIGIT GENERATION
Here, we aim to generate the full MNIST digit given only the lower left quarter of the digit. This task serves as an ideal starting point as in many cases there are multiple likely completions given the lower left quarter digit, e.g. 5 and 3. Therefore, the learned model distribution q(ω) should contain likely models corresponding to these completions. We use a fully connected generator with 6000-4000-2000 hidden units with 50% dropout probability. The discriminator has 1000-1000 hidden units with leaky ReLU non-linearities. We set β = 10^−4 for the first 4 epochs and then reduce it to 0, to provide stability during the initial epochs. We compare our synthetic likelihood based approach (Bayes-SL) with: 1. A non-Bayesian mean model, 2. A standard Bayesian approach (Bayes-S), 3. A Conditional Variational Autoencoder (CVAE) (architecture as in Sohn et al. (2015)). As evaluation metric we consider (oracle) Top-k% accuracy (Lee et al., 2017). We use a standard Alex-Net based classifier to measure if the best prediction corresponds to the ground-truth class – identifies the correct mode – in Table 3 (right) over 10 splits of the MNIST test-set. We sample 10 models from our learned distribution and consider the best model. We see that our Bayes-SL performs best, even outperforming the CVAE model. In the qualitative examples in Table 3 (left), we see that generations from models ω̂ ∼ q(ω) sampled from our learned model distribution correspond to clearly defined digits (also in comparison to Figure 3 in Sohn et al. (2015)). In contrast, we see that the Bayes-S model produces blurry digits. All sampled models have been pushed to the mean and show little advantage over a mean model.
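A sketch of the best-of-N (oracle) evaluation described above: draw several models from q(ω) by keeping dropout active, complete the digit with each, and count a hit if any completion is assigned the ground-truth class by a fixed classifier. The generator and classifier callables are placeholders, not the exact networks used in the paper.

```python
import torch

@torch.no_grad()
def oracle_best_of_n(gen, classifier, x_quarter, y, n_samples=10):
    """Returns 1 if at least one of n sampled completions is classified as y
    (evaluated for a single example)."""
    hits = 0
    for _ in range(n_samples):
        completion = gen(x_quarter)                    # one draw omega ~ q(omega), dropout on
        pred = classifier(completion).argmax(dim=-1)   # predicted digit class
        hits += int((pred == y).any().item())
    return int(hits > 0)
```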
4.2 CITYSCAPES STREET SCENE ANTICIPATION
Next, we evaluate our approach on the Cityscapes dataset – anticipating scenes more than 0.5 seconds into the future. The street scenes already display considerable multi-modality at this time-horizon.
Evaluation metrics and baselines. We use PSPNet (Zhao et al., 2017) to segment the full training sequences as only the 20th frame has groundtruth annotations. We always use the annotated 20th frame of the validation sequences for evaluation using the standard mean Intersection-over-Union (mIoU) and the per-pixel (negative) conditional log-likelihood (CLL) metrics. We consider the following baselines for comparison to our ResNet based (architecture in Figure 2) Bayesian (Bayes-WD-SL) model with weight dropout and trained using synthetic likelihoods: 1. Copying the last seen input; 2. A non-Bayesian (ResG-Mean) version; 3. A Bayesian version with standard patch dropout (Bayes-S); 4. A Bayesian version with our weight dropout (Bayes-WD). Note that the combination of ResG-Mean with an adversarial loss did not lead to improved results (similar observations made in Luc et al. (2017)). We use grid search to set the dropout rate (in (4)) to 0.15 for the Bayes-S and 0.20 for Bayes-WD(-SL) models. We set α, β = 1 for our Bayes-WD-SL model. We train all models using Adam (Kingma & Ba, 2015) for 50 epochs with batch size 8. We use one sample to train the Bayesian methods as in Gal & Ghahramani (2016a) and use 100 samples during evaluation.
Comparison to state of the art. We begin by comparing our Bayesian models to state-of-the-art methods Luc et al. (2017); Seyed et al. (2018) in Table 1. We use the mIoU metric and for a fair comparison consider the mean (of all samples) prediction of our Bayesian models. We always compare to the groundtruth segmentations of the validation set. However, as all three methods use a slightly different semantic segmentation algorithm (Table 2) to generate training and input test data, we include the mIoU achieved by the Last Input of all three methods (see Appendix C for results
Table 1: Comparing mean predictions to the state-of-the-art.
Method | +0.06sec | +0.18sec | +0.54sec
Last Input (Luc et al. (2017)) | x | 49.4 | 36.9
Luc et al. (2017) (ft) | x | 59.4 | 47.8
Last Input (Seyed et al. (2018)) | 62.6 | 51.0 | x
Seyed et al. (2018) | 71.3 | 60.0 | x
Last Input (Ours) | 67.1 | 52.1 | 38.3
Bayes-S (mean) | 71.2 | 64.8 | 45.7
Bayes-WD (mean) | 73.7 | 63.5 | 44.0
Bayes-WD-SL (mean) | 74.1 | 64.8 | 45.9
Bayes-WD-SL (ft, mean) | x | 65.1 | 51.2
Bayes-WD-SL (top 5%) | 75.3 | 65.2 | 49.5
Bayes-WD-SL (ft, top 5%) | x | 66.7 | 52.5
Table 2: Comparison of segmentation estimation methods on Cityscapes validation set.
Method | mIoU
Dilation10 (Luc et al., 2017) | 68.8
PSPNet (Seyed et al., 2018) | 75.7
PSPNet (Ours) | 76.9
using Dilation 10). Similar to Luc et al. (2017) we fine-tune (ft) to predict at 3 frame intervals for better performance at +0.54sec. Our Bayes-WD-SL model outperforms baselines and improves on prior work by 2.8 mIoU at +0.06sec and 4.8 mIoU/3.4 mIoU at +0.18sec/+0.54sec respectively. Our Bayes-WD-SL model also obtains higher relative gains in comparison to Luc et al. (2017) with respect to the Last Input Baseline. These results validate our choice of model architecture and show that our novel approach clearly outperforms the state-of-the-art. The performance advantage of Bayes-WD-SL over Bayes-S shows that the ability to better model uncertainty does not come at the cost of lower mean performance. However, at larger time-steps as the future becomes increasingly uncertain, mean predictions (mean of all likely futures) drift further from the ground-truth. Therefore, next we evaluate the models on their (more important) ability to capture the uncertainty of the future.
Evaluation of predicted uncertainty. Next, we evaluate whether our Bayesian models are able to accurately capture uncertainty and deal with multi-modal futures, up to t + 10 frames (0.6 seconds) in Table 3. We consider the mean of the (oracle) best 5% of predictions (Lee et al. (2017)) of our Bayesian models to evaluate whether the learned model distribution q(ω) contains likely models corresponding to the groundtruth. We see that the best predictions considerably improve over the mean predictions – showing that our Bayesian models learn to capture uncertainty and deal with multi-modal futures. Quantitatively, we see that the Bayes-S model performs worst, demonstrating again that standard dropout (Kendall & Gal, 2017) struggles to recover the true model uncertainty. The use of weight dropout improves the performance to the level of the ResG-Mean model. Finally, we see that our Bayes-WD-SL model performs best. In fact, it is the only Bayesian model whose (best) performance exceeds that of the ResG-Mean model (also outperforming state-of-the-art), demonstrating the effectiveness of synthetic likelihoods during training. In Figure 5 we show examples comparing the best prediction of our Bayes-WD-SL model and ResG-Mean at t + 9. The last row highlights the differences between the predictions – cyan shows areas where our Bayes-WD-SL is correct and ResG-Mean is wrong, red shows the opposite. We see that our Bayes-WD-SL performs better at classes like cars and pedestrians which are harder to predict (also in comparison to Table 5 in Luc et al. (2017)). In Figure 6, we show samples from randomly sampled models ω̂ ∼ q(ω), which show correspondence to the range of possible movements of bicyclists/pedestrians. Next, we further evaluate the models with the CLL metric in Table 3. We consider the mean predictive distributions (3) up to t + 10 frames. We see that the Bayesian models outperform the ResG-Mean model significantly. In particular, we see that our Bayes-WD-SL model performs the best, demonstrating that the learned model and observation uncertainty corresponds to the variation in the data.
Comparison to a CVAE baseline. As there exists no CVAE (Sohn et al., 2015) based model for future segmentation prediction, we construct a baseline as close as possible to our Bayesian models
based on existing CVAE based models for related tasks (Babaeizadeh et al., 2018; Xue et al., 2016). Existing CVAE based models (Babaeizadeh et al., 2018; Xue et al., 2016) contain a few layers with Gaussian input noise. Therefore, for a fair comparison we first conduct a study in Table 4 to find the layers which are most effective at capturing data variation. We consider Gaussian input noise applied in the first, middle or last convolutional blocks. The noise is input dependent during training, sampled from a recognition network (see Appendix). We observe that noise in the last layers can better capture data variation. This is because the last layers capture semantically higher level scene features. Overall, our Bayesian approach (Bayes-WD-SL) performs the best. This shows that the CVAE model is not able to effectively leverage Gaussian noise to match the data variation.
Uncertainty calibration. We further evaluate predicted uncertainties by measuring their calibration – the correspondence between the predicted probability of a class and the frequency of its occurrence in the data. As in Kendall & Gal (2017), we discretize the output probabilities of the mean predicted distribution into bins and measure the frequency of correct predictions for each bin. We report the results at t + 10 frames in Figure 4. We observe that all Bayesian approaches outperform the ResG-Mean and CVAE versions. This again demonstrates the effectiveness of the Bayesian approaches in capturing uncertainty.
5 CONCLUSION
We propose a novel approach for predicting real-world semantic segmentations into the future that casts a convolutional deep learning approach into a Bayesian formulation. One of the key contributions is a novel optimization scheme that uses synthetic likelihoods to encourage diversity and deal with multi-modal futures. Our proposed method shows state of the art performance in challenging street scenes. More importantly, we show that the probabilistic output of our deep learning architecture captures uncertainty and multi-modality inherent to this task. Furthermore, we show that the developed methodology goes beyond just street scene anticipation and creates new opportunities to enhance high performance deep learning architectures with principled formulations of Bayesian inference.
APPENDIX A. DETAILED DERIVATIONS.
KL divergence estimate. Here, we provide a detailed derivation of (8). Starting from (7), we have,
KL(q(ω|X,Y) || p(ω|X,Y)) ∝ KL(q(ω) || p(ω)) − ∫ q(ω) log p(Y|X, ω) dω
= KL(q(ω) || p(ω)) − ∫ q(ω) ( ∫ log p(y|x, ω) d(x, y) ) dω (over i.i.d (x, y) ∈ (X,Y))
= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) log p(y|x, ω) dω ) d(x, y). (S1)
Multiplying and dividing by p(y|x), the true probability of occurrence,
= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) ( log [p(y|x, ω) / p(y|x)] + log p(y|x) ) dω ) d(x, y). (S2)
Using ∫ q(ω) dω = 1,
= KL(q(ω) || p(ω)) − ∫ ( ∫ q(ω) log [p(y|x, ω) / p(y|x)] dω + log p(y|x) ) d(x, y)
= KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(y|x, ω) / p(y|x)] dω d(x, y) − ∫ log p(y|x) d(x, y). (S3)
As ∫ log p(y|x) d(x, y) is independent of ω, the variables we are optimizing over, we have
∝ KL(q(ω) || p(ω)) − ∫∫ q(ω) log [p(y|x, ω) / p(y|x)] dω d(x, y). (S4)
APPENDIX B. RESULTS ON SIMPLE MULTI-MODAL 2D DATA.
We show results on simple multi-modal 2D data as in the motivating example in the introduction. The data consists of two parts: for x ∈ [−10, 0] we have y = 0, and for x ∈ [0, 10] we have y ∈ {−0.3, 0.3}. The set of models under consideration is a two hidden layer neural network with 256-128 neurons with 50% dropout. We show 10 randomly sampled models from ω̂ ∼ q(ω) learned by the Bayes-S approach in Figure 7 and our Bayes-SL approach in Figure 8 (with α = 1, β = 0). We assume constant observation uncertainty (=1). We clearly see that our Bayes-SL learns models which cover both modes, while all the models learned by Bayes-S fit to the mean, clearly showing that our approach can better capture model uncertainty.
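A minimal recipe for this bimodal toy data and for drawing test-time dropout samples is sketched below; the hidden sizes and dropout rate follow the text, everything else (sample counts, seeds) is an illustrative assumption.

```python
import numpy as np
import torch
import torch.nn as nn

def make_toy_data(n=2000, rng=np.random.default_rng(0)):
    """x in [-10, 0] -> y = 0; x in [0, 10] -> y is -0.3 or 0.3 (two modes)."""
    x = rng.uniform(-10, 10, size=(n, 1)).astype(np.float32)
    y = np.where(x < 0, 0.0, rng.choice([-0.3, 0.3], size=(n, 1))).astype(np.float32)
    return torch.from_numpy(x), torch.from_numpy(y)

net = nn.Sequential(
    nn.Linear(1, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, 1),
)

# After training, keep dropout active to draw models omega ~ q(omega):
net.train()
x_test, _ = make_toy_data(200)
with torch.no_grad():
    samples = torch.stack([net(x_test) for _ in range(10)])  # 10 sampled models
```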
APPENDIX C. ADDITIONAL DETAILS AND EVALUATION ON STREET SCENES.
First, we provide additional training details of our Bayes-WD-SL in Table 5.
Second, we provide additional evaluation on street scenes. In Section 4.2 (Table 1) we use a PSPNet to generate training segmentations for our Bayes-WD-SL model to ensure fair comparison with the state-of-the-art (Seyed et al., 2018). However, the method of Luc et al. (2017) uses a weaker Dilation 10 approach to generate training segmentations. Note that our Bayes-WD-SL model already obtains higher gains in comparison to Luc et al. (2017) with respect to the Last Input Baseline, e.g. at +0.54sec, 47.8 - 36.9 = 10.9 mIoU translating to a 29.5% gain over the Last Input Baseline of Luc et al. (2017) versus 51.2 - 38.3 = 12.9 mIoU translating to a 33.6% gain over the Last Input Baseline of our Bayes-WD-SL model in Table 1. But for fairness, here we additionally include results in Table 6 using the same Dilation 10 approach to generate training segmentations.
We observe that our Bayes-WD-SL model beats the model of Luc et al. (2017) in both short-term (+0.18 sec) and long-term predictions (+0.54 sec). Furthermore, we see that the mean of the Top 5% of the predictions of Bayes-WD-SL leads to much improved results over mean predictions. This again confirms the ability of our Bayes-WD-SL model to capture uncertainty and deal with multi-modal futures.
APPENDIX D. RESULTS ON HKO PRECIPITATION FORECASTING DATA.
The HKO radar echo dataset consists of weather radar intensity images. We use the train/test split used in Xingjian et al. (2015); Bhattacharyya et al. (2018b). Each sequence consists of 20 frames. We use 5 frames as input and 15 for prediction. Each frame is recorded at an interval of 6 minutes. Therefore, the sequences display considerable uncertainty. We use the same network architecture as used for street scene segmentation Bayes-WD-SL (Figure 2 and with α = 5, β = 1), but with half the convolutional filters at each level. We compare to the following baselines: 1. A deterministic model (ResG-Mean), 2. A Bayesian model with weight dropout. We report the (oracle) Top-10% scores (best 1 of 10), over the following metrics (Xingjian et al., 2015; Bhattacharyya et al., 2018b): 1. Rainfall-MSE: Rainfall mean squared error, 2. CSI: Critical success index, 3. FAR: False alarm rate, 4. POD: Probability of detection, and 5. Correlation, in Table 7.
Note that Xingjian et al. (2015); Bhattacharyya et al. (2018b) report only scores over the mean of all samples. Our ResG-Mean model outperforms these state of the art methods, showing the versatility of our model architecture. Our Bayes-WD-SL can outperform the strong ResG-Mean baseline, again showing that it learns to capture uncertainty (see Figure 10). In comparison, the Bayes-WD baseline struggles to outperform the ResG-Mean baseline.
We further compare the calibration of our Bayes-WD-SL model to the ResG-Mean model in Figure 9. We plot the predicted intensity against the true mean observed intensity. The difference to the ResG-Mean model
is stark in the high intensity region. The ResG-Mean model deviates strongly from the diagonal in this region – it overestimates the radar intensity. In comparison, we see that our Bayes-WD-SL approach stays closer to the diagonal. These results again show that our synthetic likelihood based approach leads to more accurate predictions while not compromising on calibration.
APPENDIX E. ADDITIONAL ARCHITECTURE DETAILS.
Here, we provide layer-wise details of our generative and discriminative models in Table 8 and Table 9. We provide layer-wise details of the recognition network of the CVAE baseline used in Table 4 (in the main paper) in Table 10. Finally, in Table 11 we show the difference in the number of possible
models using our weight-based variational distribution (4) (weight dropout) versus the patch-based variational distribution (patch dropout) proposed in Gal & Ghahramani (2016a). The number of patches is calculated using the formula,
Input Resolution × # Output Convolutional Filters,
because we use convolutional stride 1, padding to ensure same output resolution and each patch is dropped out (in Gal & Ghahramani (2016a)) independently for each convolutional filter. The number of weight parameters is given by the formula,
Filter size × # Input Convolutional Filters × # Output Convolutional Filters + # Bias.
Table 11 shows that our weight dropout scheme results in a significantly lower number of parameters compared to patch dropout (Gal & Ghahramani, 2016a).
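The two counting formulas above, written out for a single convolutional layer; the plugged-in sizes are only an example, not a configuration from the paper.

```python
def patch_dropout_vars(h, w, out_filters):
    # one Bernoulli variable per patch and per output filter (Gal & Ghahramani, 2016a)
    return h * w * out_filters

def weight_dropout_vars(k, in_filters, out_filters):
    # one Bernoulli variable per kernel weight plus one per bias (our scheme, Eq. 4)
    return k * k * in_filters * out_filters + out_filters

# Example: a 3x3 convolution with 128 -> 128 filters on a 64x64 feature map.
print(patch_dropout_vars(64, 64, 128))    # 524288 patch variables
print(weight_dropout_vars(3, 128, 128))   # 147584 weight/bias variables
```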
Details of our discriminator model. We show the layer-wise details in Table 9.
Details of the recognition model used in the CVAE baseline. We show the layer-wise details in Table 10.
Layer Type Size Activation Input Output
In1 Input x, y Conv1,1
Conv1,1 Conv2D 128 ReLU In1 Conv1,2
Conv1,2 Conv2D 128 ReLU Conv1,1 MaxPool1
MaxPool1 Max Pooling 2×2 Conv1,2 Conv2,1
Conv2,1 Conv2D 256 ReLU MaxPool1 Conv2,2
Conv2,2 Conv2D 256 ReLU Conv2,1 MaxPool2
MaxPool2 Max Pooling 2×2 Conv2,2 Conv3,1
Conv3,1 Conv2D 512 ReLU MaxPool2 MaxPool3
MaxPool3 Max Pooling 2×2 Conv3,1 Conv4,1
Conv4,1 Conv2D 512 ReLU MaxPool3 MaxPool4
MaxPool4 Max Pooling 2×2 Conv4,1 Flatten
Flatten MaxPool4 Dense1
Dense1 Fully Connected 1024 ReLU Flatten Dense2
Dense2 Fully Connected 1024 ReLU Dense1 Out
Out Fully Connected - Dense2
Table 9: Details of our discriminator model. The final output Out provides the synthetic likelihoods D(x, ŷ)/(1 − D(x, ŷ)).
Layer Type Size Activation Input Output
In1 Input x, y Conv1,1
Conv1,1 Conv2D 128 ReLU In1 Conv1,2
Conv1,2 Conv2D 128 ReLU Conv1,1 MaxPool1
MaxPool1 Max Pooling 2×2 Conv1,2 Conv2,1
Conv2,1 Conv2D 128 ReLU MaxPool1 Conv2,2
Conv2,2 Conv2D 128 ReLU Conv2,1 MaxPool2
MaxPool2 Max Pooling 2×2 Conv2,2 Conv3,1
Conv3,1 Conv2D 128 ReLU MaxPool2 Conv4,1
Conv4,1 Conv2D 128 ReLU Conv3,1 UpSamp1
UpSamp1 Up Sampling 2×2 Conv4,1 Conv5,1
Conv5,1 Conv2D 128 ReLU UpSamp1 UpSamp2
UpSamp2 Up Sampling 2×2 Conv3,2 Conv4,1
Conv6,1 Conv2D 32 UpSamp2 z1
Conv6,2 Conv2D 32 UpSamp2 z2
Conv6,3 Conv2D 32 UpSamp2 z3
Table 10: Details of the recognition model used in the CVAE baseline. The final outputs are the Gaussian Noise tensors z1, z2, z3.
2. What are the proposed modifications to the traditional approach?
3. How do the modifications improve the results in practice?
4. Can you provide examples of other state-of-the-art methods that the authors claim their method outperforms?
5. Are there any minor comments or suggestions you have regarding the presentation or writing style of the paper? | Review | Review
The paper presents an application of Bayesian neural networks to predicting future street scenes. The inference is done using a variational approximation to the posterior. Moreover, the authors propose using a synthetic (approximate) likelihood, and the optimization step in the variational approximation is based on a regularization. The authors claim that these modifications yield better results in practice (more stable, capture the multi-modal nature). The numerical parts of the paper support the authors' claims: their method outperforms some other state-of-the-art methods. The presentation is not too hard to follow. I think this is a nice applied piece, although I have never worked on this applied side.
Minor comment:
In the second sentence, in Section 3.1, page 3,
$f: x \mapsto y$ NOT $f: x \rightarrow y$.
We use the "\rightarrow" for spaces X,Y not for variables. |
ICLR | Title
Oracle-oriented Robustness: Robust Image Model Evaluation with Pretrained Models as Surrogate Oracle
Abstract
Machine learning has demonstrated remarkable performances over finite datasets, yet whether the scores over the fixed benchmarks can sufficiently indicate the model’s performances in the real world is still in discussion. In reality, an ideal robust model will probably behave similarly to the oracle (e.g., the human users), thus a good evaluation protocol is probably to evaluate the models’ behaviors in comparison to the oracle. In this paper, we introduce a new robustness measurement that directly measures the image classification model’s performance compared with a surrogate oracle. Besides, we design a simple method that can accomplish the evaluation beyond the scope of the benchmarks. Our method extends the image datasets with new samples that are sufficiently perturbed to be distinct from the ones in the original sets, but are still bounded within the same causal structure the original test image represents, constrained by a surrogate oracle model pretrained with a large amount of samples. As a result, our new method will offer us a new way to evaluate the models’ robustness performances, free of limitations of fixed benchmarks or constrained perturbations, although scoped by the power of the oracle. In addition to the evaluation results, we also leverage our generated data to understand the behaviors of the model and our new evaluation strategies.
1 INTRODUCTION
Machine learning has achieved remarkable performance over various benchmarks. For example, the recent successes of multiple pretrained models (Bommasani et al., 2021; Radford et al., 2021), with the power gained through billions of parameters and samples from the entire internet, have demonstrated human-parallel performance in understanding natural languages (Brown et al., 2020) or even arguably human-surpassing performance in understanding the connections between languages and images (Radford et al., 2021). Even within the scope of fixed benchmarks, machine learning has shown strong numerical evidence that the prediction accuracy on specific tasks can reach leaderboard positions as high as a human's (Krizhevsky et al., 2012; He et al., 2015; Nangia & Bowman, 2019), suggesting multiple application scenarios of these methods.
However, these methods deployed in the real world often underdeliver on the promises made through the benchmark datasets (Edwards, 2019; D’Amour et al., 2020), usually due to the fact that these benchmark datasets, typically i.i.d, cannot sufficiently represent the diversity of the samples a model will encounter after being deployed in practice.
Fortunately, multiple lines of study have aimed to embrace this challenge, and most of these works are proposing to further diversify the datasets used at the evaluation time. We notice these works mostly fall into two main categories: (1) the works that study the performances over testing datasets generated by predefined perturbation over the original i.i.d datasets, such as adversarial robustness (Szegedy et al., 2013; Goodfellow et al., 2015) or robustness against certain noises (Geirhos et al., 2019; Hendrycks & Dietterich, 2019; Wang et al., 2020b); and (2) the works that study the performances over testing datasets that are collected anew with a procedure/distribution different from the one for training sets, such as domain adaptation (Ben-David et al., 2007; 2010) and domain generalization (Muandet et al., 2013).
Both of these lines, while pushing the study of robustness evaluation further, mostly have their own advantages and limitations as a tradeoff on how to guarantee the underlying causal structure of evaluation samples will be the same as the training samples: perturbation based evaluations usually maintain the causal structure by predefining the perturbations to be within a set of operations that will not alter the image semantics when applied, such as ℓ-norm ball constraints (Carlini et al., 2019), or texture (Geirhos et al., 2019), frequency-based (Wang et al., 2020b) perturbations; on the other hand, new-dataset based evaluations can maintain the causal structure by soliciting the efforts of human annotators to construct datasets with the same semantics, but significantly different styles (Hendrycks et al., 2021b; Hendrycks & Dietterich, 2019; Wang et al., 2019; Gulrajani & Lopez-Paz, 2020; Koh et al., 2021; Ye et al., 2021). More details of these lines, their advantages and limitations, and how our proposed evaluation protocol contrasts with them will be discussed in the next section.
In this paper, we investigate how to diversify the robustness evaluation datasets to make the evaluation results credible and representative. As shown in Figure 1, we aim to integrate the advantages of the above two directions by introducing a new protocol to generate evaluation datasets that can automatically perturb the samples to be sufficiently different from existing test samples, while maintaining the underlying unknown causal structure with respect to an oracle (we use a CLIP model in this paper). Based on the new evaluation protocol, we introduce a new robustness measurement that directly measures the robustness compared with the oracle. With our proposed evaluation protocol and metric, we give a study of current robust machine learning techniques to identify the robustness gap between existing models and the oracle. This is particularly important if the goal of a research direction is to produce models that function reliably to have performance comparable to the oracle.
Therefore, our contributions in this paper are three-fold:
• We introduce a new robustness measurement that directly measures the robustness gap between models and the oracle.
• We introduce a new evaluation protocol to generate evaluation datasets that can automatically perturb the samples to be sufficiently different from existing test samples, while maintaining the underlying unknown causal structure.
• We leverage our evaluation metric and protocol to offer a study of current robustness research to identify the robustness gap between existing models and the oracle. Our findings further bring us understandings and conjectures of the behaviors of the deep learning models.
2 BACKGROUND
2.1 CURRENT ROBUSTNESS EVALUATION PROTOCOLS
The evaluation of machine learning models in non-i.i.d scenario have been studied for more than a decade, and one of the pioneers is probably domain adaptation (Ben-David et al., 2010). In
domain adaptation, the community trains the model over data from one distribution and test the model with samples from a different distribution; in domain generalization (Muandet et al., 2013), the community trains the model over data from several related distributions and test the model with samples from yet another distribution. To be more specific, a popular benchmark dataset used in domain generalization study is the PACS dataset (Li et al., 2017), which consists the images from seven labels and four different domains (photo, art, cartoon, and sketch), and the community studies the empirical performance of models when trained over three of the domains and tested over the remaining one. To facilitate the development of cross-domain robust image classification, the community has introduced several benchmarks, such as PACS (Li et al., 2017), ImageNet-A (Hendrycks et al., 2021b), ImageNet-C (Hendrycks & Dietterich, 2019), ImageNet-Sketch (Wang et al., 2019), and collective benchmarks integrating multiple datasets such as DomainBed (Gulrajani & Lopez-Paz, 2020), WILDS (Koh et al., 2021), and OOD Bench (Ye et al., 2021).
While these datasets clearly maintain the underlying causal structure of the images, a potential issue is that these evaluation datasets are fixed once collected. Thus, if the community relies on these fixed benchmarks repeatedly to rank methods, eventually the selected best method may not be a true reflection of the world, but a model that can fit certain datasets exceptionally well. This phenomenon has been discussed by several textbooks (Duda et al., 1973; Friedman et al., 2001). While recent efforts in evaluating collections of datasets (Gulrajani & Lopez-Paz, 2020; Koh et al., 2021; Ye et al., 2021) might alleviate the above potential hazards of “model selection with test set”, a dynamic process of generating evaluation datasets will certainly further mitigate this issue.
On the other hand, one can also test the robustness of models by dynamically perturbing the existing datasets. For example, one can test the model’s robustness against rotation (Marcos et al., 2016), texture (Geirhos et al., 2019), frequency-perturbed datasets (Wang et al., 2020b), or adversarial attacks (e.g., ℓp-norm constraint perturbations) (Szegedy et al., 2013). While these tests do not require additionally collected samples, these tests typically limit the perturbations to be relatively well-defined (e.g., a texture-perturbed cat image still depicts a cat because the shape of the cat is preserved during the perturbation).
While this perturbation test strategy leads to datasets dynamically generated along the evaluation, it is usually limited by the variations of the perturbations allowed. For example, one may not be able to apply significant distortions to the images, in case the depicted object is deformed and the underlying causal structure of the images is distorted. More generally speaking, most of the current perturbation-based test protocols are scoped by the tradeoff that a minor perturbation might not introduce enough variations to the existing datasets, while a significant perturbation will potentially destroy the underlying causal structures.
2.2 ASSUMED DESIDERATA OF ROBUSTNESS EVALUATION PROTOCOL
As a reflection of the previous discussion, we attempt to offer a summary list of three desired properties of the datasets serving as the benchmarks for robustness evaluation:
• Stableness in Causal Structure: the most important property of the evaluation datasets is that the samples must represent the same underlying causal structure as the one in the training samples.
• Diversity in Generated Samples: for any other non-causal factors of the data, the test samples should cover as many scenarios of the images as possible, such as textures, styles, etc.
• A Dynamic Generation Process: to mitigate the selection bias toward techniques that focus too attentively on the specifics of particular datasets, ideally the evaluation protocol should consist of a dynamic set of samples, preferably generated with the tested model in consideration.
Key Contribution: To the best of our knowledge, there are no other evaluation protocols of model robustness that can meet the above three properties simultaneously. Thus, we aim to introduce a method for evaluating a model's robustness that fulfills the three desiderata above at the same time.
2.3 NECESSITY OF NEW ROBUSTNESS MEASUREMENT IN DYNAMIC EVALUATION PROTOCOL
In previous experiments, we always have two evaluation settings: the “standard” test set, and the perturbed test set. When comparing the robustness of two models, prior art would rank the models by their accuracy on the perturbed test set (Geirhos et al., 2019; Hendrycks et al., 2021a;
Orhan, 2019; Xie et al., 2020; Zhang, 2019) or other quantities distinct from accuracy, e.g., inception score (Salimans et al., 2016), effective robustness (Taori et al., 2020) and relative robustness (Taori et al., 2020). These metrics are good starting points for experiments since they are precisely defined and easy to apply to evaluate robustness interventions. In the dynamic evaluation protocols, however, these quantities alone cannot provide a comprehensive measure of robustness, as two models are tested on two different “dynamical” test sets. When one model outperforms the other, we cannot distinguish whether one model is actually better than the other, or if the test set happened to be easier.
The core issue in the preceding example is that we cannot find a consistent robustness measurement across two different test sets. In reality, an ideal robust model will probably behave similarly to the oracle (e.g., the human users). Thus, instead of indirectly comparing models’ robustness with each other, a measurement that directly measures models’ robustness compared with the oracle is desired.
3 METHOD - COUNTERFACTUAL GENERATION WITH SURROGATE ORACLE
3.1 METHOD OVERVIEW
We use (x,y) to denote an image sample and its corresponding label, use θ(x) to denote the model we aim to evaluate, which takes an input of the image and predicts the label.
Algorithm 1 Counterfactual Image Generation with Surrogate Oracle
Input: (X,Y), θ, g, h, total number of iterations B Output: generated dataset (X̂,Y) for each (x,y) in (X,Y) do
generate x̂0 = g(x,b0) if h(x̂0) = y then
set x̂ = x̂0 for iteration bt < B do
generate x̂t = g(x̂t−1,bt) if h(x̂t) = y then
set x̂ = x̂t else
set x̂ = x̂t−1 exit FOR loop
end if end for
else set x̂ = x end if use (x̂,y) to construct (X̂,Y)
end for
We use g(x,b) to denote an image generation system, which takes an input of the starting image x to generate another image x̂ within the computation budget b. The generation is performed as an optimization process to maximize a scoring function α(x̂, z) that evaluates the alignment between the generated image and the generation goal z guiding the perturbation process. The higher the score is, the better the alignment is. Thus, the image generation process is formalized as
x̂ = argmax_{x̂ = g(x,b), b < B} α(g(x,b), z),
where B denotes the allowed computation budget for one sample. This budget constrains the generated image to stay close to the starting image, so that it does not converge to a trivial solution that maximizes the scoring function.
In addition, we choose the model classification loss l(θ(x̂),y) as z. Therefore, the scoring function essentially maximizes the loss of a given image in the direction of a different class.
Finally, to maintain the unknown causal structure of the images, we leverage the power of the pretrained giant models to scope the generation process: the generated images must be considered within the same class by the pretrained model, denoted as h(x̂), which takes in the input of the image and makes a prediction.
Connecting all the components above, the generation process will aim to optimize the following:
x̂ = argmax_{x̂ = g(x,b), b < B, z = l(θ(x̂), y)} α(g(x,b), z), subject to h(x̂) = y.
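A compact sketch of this constrained search (mirroring Algorithm 1) is given below; the generator g, the oracle h and the budget are passed in as callables/arguments, so no particular generation or oracle API is assumed.

```python
def generate_counterfactual(x, y, g, h, budget):
    """Iteratively perturb x with g (which internally maximizes the scoring
    function alpha); keep the last perturbation that the surrogate oracle h
    still assigns to class y, otherwise fall back to the original image."""
    x_hat = g(x)                      # first perturbation step
    if h(x_hat) != y:
        return x                      # oracle rejects immediately: keep the original
    for _ in range(budget - 1):
        candidate = g(x_hat)          # spend one more unit of budget
        if h(candidate) != y:         # causal structure no longer preserved: stop
            break
        x_hat = candidate
    return x_hat
```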
Our method is generic and agnostic to the choices of the three major components, namely θ, g, and h. For example, the g component can vary from something as simple as basic transformations adding noises or rotating images to a sophisticated method to transfer the style of the images; on the other hand, the h component can vary from an approach with high reliability and low efficiency such as
actually outsourcing the annotation process to human labelers, to the other polarity of simply assuming a large-scale pretrained model can plausibly function as a human.
In the next part, we will introduce our concrete choices of g and h leading to the later empirical results, which build upon the recent advances of vision research.
3.2 ENGINEERING SPECIFICATION
We use VQGAN (Esser et al., 2021) as the image generation system g(x,b), and g(x,b) is guided by the evaluated model θ(x) through the scoring function α(x̂, z), where z = l(θ(x̂),y) is the model's classification loss on the current perturbed images.
The generation is an iterative process guided by the scoring function: at each iteration, the system adds more style-wise transformations to the result of the previous iteration. Therefore, the total number of iterations allowed is denoted as the budget B (see Section 4.5 and Appendix H for details of finding the best perturbation). In practice, the value of the budget B is set based on resource concerns.
To guarantee the causal structure of images, we use a CLIP (Radford et al., 2021) model to serve as h, and design the text fragment input of CLIP to be “an image of {class}”. We directly optimize in the VQGAN encoder (latent) space, guided by our scoring function. We show the algorithm in Algorithm 1.
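As an illustration of the oracle check h, the following sketch uses the public CLIP package with the prompt template from the text; the model variant, label set and preprocessing are choices made here, not necessarily those of our implementation.

```python
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
class_names = ["dog", "cat", "bird"]          # placeholder label set
text = clip.tokenize([f"an image of {c}" for c in class_names]).to(device)

@torch.no_grad()
def oracle_accepts(pil_image, label_idx):
    """h(x_hat) = y: CLIP must still assign the perturbed image to its class."""
    image = preprocess(pil_image).unsqueeze(0).to(device)
    logits_per_image, _ = model(image, text)
    return int(logits_per_image.argmax(dim=-1).item()) == label_idx
```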
3.2.1 SPARSE SUBMODEL OF VQGAN FOR EFFICIENT PERTURBATION
While our method will function properly as described above, we notice that the generation process still has a potential limitation: the bound-free perturbation of VQGAN will sometimes perturb the semantics of the images, generating results that will be rejected by the oracle later and thus wasting computational effort.
To counter this challenge, we use a sparse variable selection method to analyze the embedding dimensions of VQGAN to identify a subset of dimensions that are mainly responsible for the non-semantic variations.
In particular, with a dataset (X,Y) of n samples, we first use VQGAN to generate a style-transferred dataset (X′,Y). During the generation process, we preserve the latent representations of input samples after the VQGAN encoder in the original dataset. We also preserve the final latent representations before the VQGAN decoder that are quantized after the iterations in the style-transferred dataset. Then, we create a new dataset (E,L) of 2n samples, for each sample (e, l) ∈ (E,L), e is the latent representation for the sample (from either the original dataset or the style-transferred one), and l is labelled as 0 if the sample is from the original dataset and 1 if the style-transferred dataset.
Then, we train an ℓ1-regularized logistic regression model to classify the samples of (E,L). With w denoting the weights of the model, we solve the following problem
argmin_w ∑_{(e,l)∈(E,L)} l(e w, l) + λ∥w∥_1,
and the sparse pattern (zeros or not) of w will inform us about which dimensions are for the style.
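One way to fit this sparse selector is with scikit-learn, as sketched below; the regularization strength is a placeholder to be tuned, not a value from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# E: (2n, d) latent codes; L: (2n,) labels, 0 = original image, 1 = style-transferred.
def style_dimensions(E, L, reg_strength=0.1):
    """Fit an l1-penalized logistic regression and return the latent dimensions
    with non-zero weight, i.e. those mainly responsible for style variation."""
    clf = LogisticRegression(penalty="l1", solver="saga", C=1.0 / reg_strength, max_iter=5000)
    clf.fit(E, L)
    return np.flatnonzero(clf.coef_.ravel())
```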
3.3 MEASURING ROBUSTNESS
Oracle-oriented Robustness (OOR). By design, the causal structures of counterfactual images will be maintained by the oracle. Thus, if a model has a smaller accuracy drop on the counterfactual images, it means that the model makes predictions more similar to the oracle's than a different model does. To precisely define OOR, we introduce counterfactual accuracy (CA), the accuracy on the counterfactual images that our generative model successfully produces. As the standard accuracy (SA) may influence CA to some extent, to disentangle CA from SA, we normalize CA with SA as OOR:
OOR = (CA / SA) × 100%
In settings where the oracle consists of human annotators, OOR measures the robustness difference between the evaluated model and human perception. In our experimental setting, OOR measures the robustness difference between models trained on fixed datasets (the evaluated model) and a model trained on unfiltered, highly varied, and highly noisy data (the oracle CLIP model).
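For concreteness, the following is a small sketch of how SA, CA, and OOR can be computed from prediction arrays; `oracle_accepted` marks the counterfactual images that the oracle validated, and all names are illustrative rather than taken from our released code.

```python
import numpy as np

def oracle_oriented_robustness(preds_clean, preds_cf, labels, oracle_accepted):
    """OOR = CA / SA * 100, where CA is computed only on oracle-validated counterfactuals."""
    sa = np.mean(preds_clean == labels)                 # standard accuracy
    mask = oracle_accepted.astype(bool)                 # validated counterfactual images
    ca = np.mean(preds_cf[mask] == labels[mask])        # counterfactual accuracy
    return 100.0 * ca / sa
```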
3.4 THE NECESSITY OF THE SURROGATE ORACLE
At last, we devote a short paragraph to remind readers that, despite the alluring idea of designing systems that forgo any use of the underlying causal structure or an oracle, it has been proved or argued multiple times that such knowledge cannot be created from data alone, in the context of either machine learning (Locatello et al., 2019; Mahajan et al., 2019; Wang et al., 2021) or causality (Bareinboim et al., 2020; Xia et al., 2021), (Pearl, 2009, Sec. 1.4).
4 EXPERIMENTS - EVALUATION AND UNDERSTANDING OF MODELS
4.1 EXPERIMENT SETUP
We consider four different scenarios, ranging from the basic benchmark MNIST (LeCun et al., 1998), through CIFAR10 (Krizhevsky et al., 2009) and 9-class ImageNet (Santurkar et al., 2019), to full-fledged 1000-class ImageNet (Deng et al., 2009). For ImageNet, we resize all images to 224×224 px. We also center and re-scale the color values with µRGB = [0.485, 0.456, 0.406] and σ = [0.229, 0.224, 0.225]. The total number of iterations allowed (computation budget B) in our evaluation protocol is set to 10. We conduct the experiments on an NVIDIA GeForce RTX 3090 GPU.
For each experiment, we report the following set of results:
• Standard Accuracy (SA): reported for references.
• Validation Rate (VR): the percentage of perturbed images that the oracle validates as maintaining the causal structure.
• Oracle-oriented Robustness (OOR): the robustness of the model compared with the oracle.
4.2 ROBUSTNESS EVALUATION FOR STANDARD VISION MODELS
We consider a large range of models (Appendix J) and evaluate pre-trained variants of a LeNet architecture (LeCun et al., 1998) for the MNIST experiment and a ResNet architecture (He et al., 2016a) for the remaining experiments. For the ImageNet experiment, we also consider pretrained transformer variants ViT (Dosovitskiy et al., 2020), Swin (Liu et al., 2021), Twins (Chu et al., 2021), Visformer (Chen et al., 2021), and DeiT (Touvron et al., 2021) from the timm library (Wightman, 2019). We evaluate the most recent ConvNeXt (Liu et al., 2022) as well. All models are trained on the ILSVRC2012 subset of ImageNet comprised of 1.2 million training images and a total of 1000 classes (Deng et al., 2009; Russakovsky et al., 2015).
We report our results in Table 1. As expected, these models can barely maintain their performance when tested on data from different distributions, as shown by many previous works (e.g., Geirhos et al., 2019; Hendrycks & Dietterich, 2019; Wang et al., 2020b).
Interestingly, on ImageNet, although both the transformer-variant models and the vanilla CNN-architecture model, i.e., ResNet, attain similar clean-image accuracy, the transformer variants substantially outperform ResNet50 in terms of OOR under our dynamic evaluation protocol. We conjecture that this performance gap originates partly from the differences in training setups; more specifically, it may result from the fact that transformer variants by default use strong data augmentation strategies while ResNet50 uses none of them. These augmentation strategies (e.g., Mixup (Zhang et al., 2017), Cutmix (Yun et al., 2019), Random Erasing (Zhong et al., 2020), etc.) already introduce out-of-distribution (OOD) samples during training and are therefore potentially helpful for securing model robustness towards data shifts. When equipped with similar data augmentation strategies, the CNN-architecture model ConvNeXt achieves comparable performance in terms of OOR. This hypothesis has also been verified in recent works (Bai et al., 2021; Wang et al., 2022). We offer more discussion of robustness-enhancing methods in Section 4.3.
Besides comparing performance between different standard models, OOR allows us to compare models directly with the oracle. Across all of our experiments, OOR reveals a significant gap between the models and the oracle, which is trained on unfiltered and highly varied data, which seemingly suggests that training with a more diverse dataset would help with robustness. This overarching trend has also been identified in (Taori et al., 2020). However, quantifying when and why training with more data helps is still an interesting open question.
We also notice that the VR tends to differ across datasets. We conjecture this is due to how the oracle model understands the images and labels; more discussion is offered in Section 5.
4.3 ROBUSTNESS EVALUATION FOR ROBUST VISION MODELS
Recently, some techniques have been introduced to cope with corruptions or style shifts. For example, by adapting the batch normalization statistics with a limited number of samples (Schneider et al., 2020), the performance on stylized images (or corrupted images) can be significantly increased. Additionally, some more sophisticated techniques, e.g., AugMix (Hendrycks et al., 2019), have also been widely employed by the community.
To investigate whether those OOD-robust models can still maintain their performance under our dynamic evaluation protocol, we evaluate pretrained ResNet50 models combined with leading methods from the ImageNet-C leaderboard, namely Stylized ImageNet training (SIN; (Geirhos et al., 2019)), adversarial noise training (ANT; (Rusak et al.)), a combination of ANT and SIN (ANT+SIN; (Rusak et al.)), optimized data augmentation using Augmix (AugMix; (Hendrycks et al., 2019)), DeepAugment (DeepAug; (Hendrycks et al., 2021a)), and a combination of Augmix and DeepAugment (DeepAug+AM; (Hendrycks et al., 2021a)).
The results are displayed in Table 2. Surprisingly, we find that some common-corruption robust models, i.e., SIN, ANT, and ANT+SIN, fail to maintain their advantage under our dynamic evaluation protocol. Take the SIN method as an example: its OOR is 42.92, which is even lower than that of a vanilla ResNet50. As these methods are well fitted to the ImageNet-C benchmark, such results expose the weakness of relying on fixed benchmarks to rank methods: the selected best method may not be a true reflection of the real world, but simply a model that fits certain datasets well, which in turn supports the necessity of our dynamic evaluation protocol.
DeepAug, Augmix, and DeepAug+AM perform better than the SIN and ANT methods in terms of OOR, as they dynamically perturb the datasets, which alleviates the hazard of “model selection with the test set” to some extent. However, their performance is limited by the range of perturbations allowed, resulting in only a marginal improvement over ResNet50 under our evaluation protocol.
In addition, we visualize the counterfactual images generated according to the evaluated style-shift robust models in Figure 2. More results are shown in Appendix L. Specifically, we make the following observations:
Preservation of local texture details. A number of recent empirical findings point to an important role of object texture for CNNs, where texture is more important than global object shape for CNN models to learn (Gatys et al., 2015; Ballester & Araujo, 2016; Gatys et al., 2017; Brendel & Bethge, 2019; Geirhos et al., 2019; Wang et al., 2020b). We notice that our generated counterfactual images may preserve misleading local texture details; the evaluation task then becomes much harder, since textures are no longer predictive but instead a nuisance factor (as desired). For the counterfactual image generated for the DeepAug method (Figure 2f), we produce a skin texture similar to chicken skin, and the fish head becomes more and more chicken-like. The ResNet trained with DeepAug is misled by this corruption.
Generalization to shape perturbations. Moreover, since our attack intensity can be dynamically altered based on the model’s gradient while still maintaining the causal structures, the perturbations we produce are not limited to object texture but can even include a certain degree of shape perturbation. As it is acknowledged that networks with a higher shape bias are inherently more robust to many different image distortions and reach higher performance on classification tasks, we observe that the counterfactual images generated for the SIN (Figure 2b and Figure 2i) and ANT+SIN (Figure 2d and Figure 2k) methods are shape-perturbed and successfully attack the models.
Recognition of model properties. With the combination of different methods, the generated counterfactual images become more comprehensive. For example, the counterfactual image generated for DeepAug+AM (Figure 2g) preserves the chicken-like head characteristic of DeepAug and the skin patterns characteristic of Augmix. As our evaluation method does not memorize the models it evaluates, this result shows that our method can recognize model properties and automatically generate hard counterfactual images tailored to the evaluation.
Overall, these visualizations reveal that our dynamic evaluation protocol adjusts its attack strategy based on the properties of each model, and automatically generates diversified counterfactual images that complement static benchmarks such as ImageNet-C in exposing models’ weaknesses.
4.4 UNDERSTANDING THE PROPERTIES OF OUR EVALUATION SYSTEM
We continue to investigate several properties of the models in the following sections. To save space, we mainly present the results of the CIFAR10 experiment here and leave the details to the appendix:
• In Appendix B, we explore the transferability of the generated images. The reasonable transferability suggests that our method of generating images can be used in a broader scope: we can also leverage the method to generate a static set of images and establish a benchmark dataset to help the development of robustness methods.
• In Appendix C, we test whether initiating the perturbation process with an adversarial example will further degrade the OOR. We find that initiating with FGSM adversarial examples (Goodfellow et al., 2015) barely affects the OOR.
• In Appendix D, we compare the vanilla model to a model trained by PGD (Madry et al., 2017). We find that the adversarially trained model and the vanilla-trained model process the data differently. However, their robustness weak spots are exposed to a similar degree by our test system.
• In Appendix E, we explore the possibility of improving the evaluated robustness by augmenting the training set with images generated by our evaluation system. However, due to the required computational load, we only use a static set of generated images to train the model, and the results suggest that a static set of augmentation images cannot sufficiently robustify the model against our evaluation system.
• We also notice that the generated images tend to shift the color of the original images, so we test the robustness of grayscale models in Appendix F; the results suggest that removing the color channel does not improve robustness.
4.5 EXPERIMENTS REGARDING METHOD CONFIGURATION
Generator Configuration. We conduct an ablation study on the choice of generator to confirm the performance rankings in Table 1 and Table 2. We consider several image generator architectures, namely variational autoencoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014) such as Efficient-VDVAE (Hazami et al., 2022), diffusion models (Sohl-Dickstein et al., 2015) such as Improved DDPM (Nichol & Dhariwal, 2021) and ADM (Dhariwal & Nichol, 2021), and GANs such as StyleGAN-XL (Sauer et al., 2022). As shown in Table 3, the validation rate of the oracle stays stable across all the image generators. We find that the conclusions are consistent under different generator choices, which validates the correctness of our conclusions in Section 4.2 and Section 4.3.
Sparse VQGAN. In the sparse VQGAN experiments, we find that only 0.69% of the dimensions are highly correlated with style. Therefore, we mask the remaining 99.31% of dimensions to create a sparse submodel of VQGAN for efficient perturbation. The running time is reduced by 12.7% on 9-class ImageNet and 28.5% on ImageNet, respectively. Details can be found in Appendix G.
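A minimal sketch of how the selected dimensions could be used during perturbation is shown below; `style_dims` is the index set found by the sparse selection, and the update rule is a simplified stand-in for our actual latent step.

```python
import torch

def masked_latent_step(z, grad, style_dims, step_size=0.1):
    """Update only the style-related latent dimensions (~0.69%); all others stay fixed."""
    mask = torch.zeros_like(z)
    mask.view(-1)[style_dims] = 1.0        # 1 on style dimensions, 0 elsewhere
    return z + step_size * grad.sign() * mask
```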
Step size. We experiment with the perturbation step size to find the best perturbation under the computation budget B. We find that step sizes that are too small or too large lead to weak perturbations, while stronger image perturbations are generated when the step size stays in a mild range, i.e., 0.1 to 0.2. Details of our experiments on step size can be found in Appendix H.
5 DISCUSSION AND CONCLUSION
Potential limitation. We notice that the CLIP model has been influenced by the imbalanced sample distribution across the internet. We provide the per-category details on 9-class ImageNet for a vanilla ResNet-18 in Appendix I. We observe that the oracle model can tolerate much more significant perturbations of samples labelled as Dog (VR 0.95) or Cat (VR 0.94) than of samples labelled as Primate (VR 0.48). The OOR values for Primate images are much higher than for other categories, creating an illusion that the evaluated models are robust against perturbed Primate images. However, such an illusion is caused by the limitation of the pretrained model: the oracle can only handle slightly perturbed samples.
The usage of the oracle. Is it cheating to use an oracle? The answer may depend on one’s perspective, but we hope to remind some readers that, in general, it is impossible to maintain the underlying causal structure during perturbation without prior knowledge (Locatello et al., 2019; Mahajan et al., 2019; Wang et al., 2021; Bareinboim et al., 2020; Xia et al., 2021), (Pearl, 2009, Sec. 1.4).
Conclusion. To conclude, in this paper we first summarized the common practices of model evaluation for robust vision machine learning. We then discussed three desiderata for a robustness evaluation protocol. Further, we offered a simple method that fulfills all three desiderata at the same time, serving the purpose of evaluating vision models’ robustness beyond the scope of generic i.i.d benchmarks, without requiring prior knowledge of the underlying causal structure depicted by the images, although relying on a plausible oracle.
ETHICS STATEMENT
The primary goal of this paper is to introduce a new evaluation protocol for vision machine learning research that can generate sufficiently perturbed samples from the original samples while maintaining the causal structures by assuming an oracle. Thus, we can introduce significant variations of the existing data while being free from additional human effort. With our approach, we hope to renew the benchmarks for current robustness evaluation, offer understanding of the behaviors of deep vision models, and potentially facilitate the development of more truly robust models. Increasing the robustness of vision models can enhance their reliability and safety, which contributes to trustworthy artificial intelligence and to a wide range of application scenarios (e.g., manufacturing automation, surveillance systems, etc.). Manufacturing automation can improve production efficiency, but may also trigger social issues related to job losses and industrial restructuring. Advanced surveillance systems are conducive to improving social security, but may also raise public concerns about violations of personal privacy.
We encourage further work to understand the limitations of machine vision models in OOD settings. More robust models carry the potential risk of automation bias, i.e., an undue trust in vision models. However, even if models are robust against corruptions in finite OOD datasets, they might still quickly fail on the massive generic perturbations existing in the real-world data space, i.e., the perturbations offered by our approach. Understanding under what conditions model decisions can be deemed reliable or not is still an open research question that deserves further attention.
REPRODUCIBILITY STATEMENT
Please refer to Appendix J for the references of all models we evaluated and links to the corresponding source code.
A NOTES ON THE EXPERIMENTAL SETUP
A.1 NOTES ON MODELS
Note that we only re-evaluate existing model checkpoints, and hence do not perform any hyperparameter tuning for the evaluated models. Since our method works with a small amount of GPU resources, all model evaluations are done on a single NVIDIA GeForce RTX 3090 GPU.
A.2 HYPERPARAMETER TUNING
Our method is generally parameter-free except for the computation budget and the perturbation step size. In our experiments, the computation budget is the maximum iteration number of the sparse VQGAN. We set the predefined value to 10, as it guarantees a sufficient degree of perturbation at acceptable time cost. We provide the experiment on step size configuration in Section 4.5.
B TRANSFERABILITY OF GENERATED IMAGES
We first study whether our generated images are model-specific, since the generation of the images involves the gradient of the original model. We train several architectures, namely EfficientNet (Tan & Le, 2019), MobileNet (Howard et al., 2017), SimpleDLA (Yu et al., 2018), VGG19 (Simonyan & Zisserman, 2014), PreActResNet (He et al., 2016b), GoogLeNet (Szegedy et al., 2015), and DenseNet121 (Huang et al., 2017), and test these models with the generated images. We also train another ResNet following the same procedure to check transferability across different runs of one architecture.
Table 4: Performances of transferability.
Model          SA     OOR
ResNet         95.38  54.17
EfficientNet   91.37  68.48
MobileNet      91.63  68.72
SimpleDLA      92.25  66.16
VGG            93.54  70.57
PreActResNet   94.06  67.25
ResNet         94.67  66.23
GoogLeNet      95.06  66.68
DenseNet       95.26  66.43
Table 4 shows a reasonable transferability of the generated images, as the OOR values are all lower than the SA, although we also observe an improvement in OOR when testing on the new models. These results suggest that our method of generating images can be used in a broader scope: we can also leverage the method to generate a static set of images and establish a benchmark dataset to help the development of robustness methods.
In addition, our results might help mitigate a debate on whether more accurate architectures are naturally more robust: on one hand, there are results showing that more accurate architectures indeed lead to better empirical performance on certain (usually fixed) robustness benchmarks (Rozsa et al., 2016; Hendrycks & Dietterich, 2019); on the other hand, some counterpoints suggest the higher robustness numbers arise only because these models capture more non-robust features that also happen to exist in the fixed benchmarks (Tsipras et al., 2018; Wang et al., 2020b; Taori et al., 2020). Table 4 shows some examples supporting the latter argument: in particular, we notice that VGG, while ranked in the middle of the accuracy ladder, interestingly stands out when tested with generated images. These results further support our argument that a dynamic robustness test scenario can help reveal more properties of the model.
C INITIATING WITH ADVERSARIAL ATTACKED IMAGES
Since our method uses the gradient of the evaluated model, which is reminiscent of gradient-based attack methods in the adversarial robustness literature, we test whether initiating the perturbation process with an adversarial example will further degrade the accuracy.
We first generate the images with the FGSM attack (Goodfellow et al., 2015). Table 5 shows that initiating with FGSM adversarial examples barely affects the OOR, probably because the large style-wise perturbation erases the imperceptible perturbations that the adversarial examples introduce.
D ADVERSARIALLY ROBUST MODELS
With evidence suggesting that adversarially robust models are more perceptually aligned with humans (Engstrom et al., 2019; Zhang & Zhu, 2019; Wang et al., 2020b), we compare the vanilla model to a model trained by PGD (Madry et al., 2017) (ℓ∞ norm smaller than 0.03).
Table 6: Performances comparison with vanilla model and PGD trained model.
Data   Model  SA     OOR
Van.   Van.   95.38  57.79
Van.   PGD    85.70  95.96
PGD    Van.   95.38  81.73
PGD    PGD    85.70  66.18
As shown in Table 6, the adversarially trained model and the vanilla-trained model indeed process the data differently: the transferability of the generated images between these two regimes barely holds. In particular, the PGD model almost maintains its performance when tested with the images generated according to the vanilla model.
However, despite the differences, the PGD model’s robustness weak spots are exposed by our test system to a similar degree as the vanilla model’s: the OOR of the vanilla model and the PGD model are only 57.79 and 66.18, respectively. We believe this result further supports our belief that the robustness test needs to be a dynamic process that generates images conditioned on the model under test, and thus further validates the importance of our contribution.
E AUGMENTATION THROUGH STATIC ADVERSARIAL TRAINING
Intuitively, inspired by the success of adversarial training (Madry et al., 2017) in defending models against adversarial attacks, a natural method to improve the empirical performances under our new test protocol is to augment the training data with counterfactual training images generated by the same process. We aim to validate the effectiveness of this method here.
However, the computational load of the generation process makes the standard adversarial training strategy impractical, so we can only generate one copy of the counterfactual training samples. Fortunately, some recent advances in training with data augmentation can help learn robust representations from a limited set of augmented samples (Wang et al., 2020a), which we use here.
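The setup below is a minimal sketch of the static augmentation we use, assuming the counterfactual images have already been generated and saved to disk; the folder paths are hypothetical and not part of our released code.

```python
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tfm = transforms.ToTensor()
clean_train = datasets.ImageFolder("data/cifar10_train", transform=tfm)                    # hypothetical path
counterfactual_train = datasets.ImageFolder("data/cifar10_counterfactual", transform=tfm)  # hypothetical path

# One static copy of counterfactual samples is mixed into the training set.
augmented_train = ConcatDataset([clean_train, counterfactual_train])
train_loader = DataLoader(augmented_train, batch_size=128, shuffle=True)
# The model is then trained on train_loader with the usual cross-entropy objective.
```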
We report our results in Table 7. The first thing we observe is that the model trained with the augmentation data offered through our approach preserves a relatively higher performance (OOR 89.10) when tested with the counterfactual images generated according to the vanilla model. Since we have shown in the main manuscript that the counterfactual samples have a reasonable transferability, this result indicates the robustness gained by training with the counterfactual images generated by our approach.
In addition, when tested with the counterfactual images generated according to the augmented model, both models’ performance would drop significantly, which again indicates the effectiveness of our approach.
F GRAYSCALE MODELS
Our previous visualization suggests that one shortcut the counterfactual generation system can take is to significantly shift the color of the images, against which a grayscale model should easily maintain its performance. Thus, we train a grayscale model by changing the ResNet input channels to 1 and transforming the input images to grayscale before feeding them into the model. We report the results in Table 8.
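A sketch of this grayscale variant, following the standard torchvision API, is shown below; it is not our exact training script.

```python
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet18(num_classes=10)
# Replace the first convolution so the network accepts a single (grayscale) channel.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

grayscale_tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),   # convert RGB input to a single channel
    transforms.ToTensor(),
])
```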
Interestingly, we notice that the grayscale model cannot defend against the shift introduced by our system by ignoring the color information. On the contrary, it seems to encourage our system to generate more counterfactual images that lower its performance.
In addition, we visualize some counterfactual images generated according to each model in Figure 3. We see some evidence that the grayscale model forces the generation system to focus more on the shape of the object and less on the color of the images. We find it particularly interesting that our system sometimes generates images differently for different models while the resulting images deceive the respective models into making the same prediction.
G EXPERIMENTS TO SUPPORT SPARSE VQGAN
We generate the flattened latent representations of input images after the VQGAN encoder and assign them negative labels. Following Algorithm 1, we generate the flattened final latent representations before the VQGAN decoder and assign them positive labels. Altogether, we form a binary classification dataset in which the numbers of positive and negative samples are balanced: the positive samples are the latent representations of counterfactual images, while the negative samples are the latent representations of input images. We set the train/test split ratio to 0.8 : 0.2. We perform the explorations on various datasets, i.e., MNIST, CIFAR-10, 9-class ImageNet, and ImageNet.
The classification model we consider is LASSO1, as it enables automatic feature selection with strong interpretability. We set the regularization strength to 36.36 and adopt saga (Defazio et al., 2014) as the solver for the optimization. The classification results are shown in Table 9.
1Although LASSO is originally a regression model, we probabilize the regression values to get the final classification results.
We observe that the coefficient matrix can be far sparser than expected. Take the result on 9-class ImageNet as an example: surprisingly, we find that almost 99.31% of the dimensions on average can be discarded when making judgements. We argue that the preserved 0.69% of dimensions are highly correlated with the VQGAN perturbation. Therefore, we keep the corresponding 99.31% of dimensions unchanged and only let the remaining 0.69% of dimensions participate in the computation. Our computational load is significantly reduced while maintaining competitive performance compared with the unmasked version2.
We conduct the run-time experiments on a single NVIDIA GeForce RTX 3090 GPU. Following our experiment setting, we evaluate a vanilla ResNet-18 on 9-class ImageNet and a vanilla ResNet-50 on ImageNet. As shown in Table 10, the run-time on ImageNet is reduced by 28.5% with our sparse VQGAN. Given the large fraction of masked dimensions (99.31%), we attribute the relatively modest run-time improvement (12.7% on 9-class ImageNet, 28.5% on ImageNet) to the fact that we must perform mask and unmask operations each time we calculate the model gradient, which partly offsets the efficiency gained from the sparse VQGAN.
H PARAMETER STUDY ON STEP SIZE
We conduct a parameter study of the perturbation step size for our evaluation system on the CIFAR10 dataset. Specifically, we tune the step size in {0.01, 0.05, 0.1, 0.2, 0.5}. The maximum iteration number (computation budget B) is set to 10. All results are produced with ResNet18 and averaged over five runs.
As shown in Figure 4, we observe that when the step size is too small, i.e., 0.01 or 0.05, sufficient perturbation strength cannot be achieved within the predefined maximum number of iterations, resulting in a higher OOR score. A large step size also leads to a higher OOR score: when the step size is large, i.e., 0.5, the perturbation is likely to stop after only a few iterations, which again yields a small perturbation strength compared with using a relatively small step size over more iterations. When the step size is 0.01, the model seems to achieve oracle-parallel performance (OOR 99.66); however, such OOR values are meaningless because of the small perturbation strength. When the step size stays in a mild range, i.e., 0.1 to 0.2, stronger image perturbations are generated, while the performance over this range stays constant. Therefore, we choose a step size of 0.1 for the experiments.
2We note that the overlapping degree of the preserved dimensions for each dataset is not high, which means that we need to specify these dimensions when facing new datasets.
I ANALYSIS OF SAMPLES THAT ARE MISCLASSIFIED BY THE MODEL
We present the results of the 9-class ImageNet experiment to show the details for each category.
Table 11 shows that the VR values for most categories are still higher than 80%, and some even reach 95%, which means we produce a sufficient number of counterfactual images. However, we notice that the VR value for Primate images is considerably lower than for other categories, indicating that around 52% of perturbed Primate images are blocked by the oracle. We have discussed this category-imbalance issue in Section 5.
As shown in Table 11, the OOR value for each category drops significantly compared with the SA value, indicating the weakness of the trained models. An interesting finding is that the OOR values for Primate images are considerably higher than for other categories, given the fact that more perturbed Primate images are blocked by the oracle. We attribute this to the limitation of foundation models: as the CLIP model has been influenced by the imbalanced sample distribution across the internet, it can only handle slightly perturbed samples well. Therefore, the counterfactual images that are preserved tend to be those that the models can easily classify.
J LIST OF EVALUATED MODELS
The following lists contain all models we evaluated on the various datasets, with references and links to the corresponding source code.
J.1 PRETRAINED VQGAN MODEL
We use the checkpoint of vqgan_imagenet_f16_16384 from https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/
J.2 PRETRAINED CLIP MODEL
Model weights of ViT-B/32 and usage code are taken from https://github.com/openai/CLIP
J.3 TIMM MODELS TRAINED ON IMAGENET (WIGHTMAN, 2019)
Weights are taken from https://github.com/rwightman/pytorch-image-models/tree/master/timm/models
1. ResNet50 (He et al., 2016a)
2. ViT (Dosovitskiy et al., 2020)
3. DeiT (Touvron et al., 2021)
4. Twins (Chu et al., 2021)
5. Visformer (Chen et al., 2021)
6. Swin (Liu et al., 2021)
7. ConvNeXt (Liu et al., 2022)
J.4 ROBUST RESNET50 MODELS
1. ResNet50 SIN+IN (Geirhos et al., 2019) https://github.com/rgeirhos/texture-vs-shape
2. ResNet50 ANT (Rusak et al.) https://github.com/bethgelab/game-of-noise
3. ResNet50 ANT+SIN (Rusak et al.) https://github.com/bethgelab/game-of-noise
4. ResNet50 Augmix (Hendrycks et al., 2019) https://github.com/google-research/augmix
5. ResNet50 DeepAugment (Hendrycks et al., 2021a) https://github.com/hendrycks/imagenet-r
6. ResNet50 DeepAugment+Augmix (Hendrycks et al., 2021a) https://github.com/hendrycks/imagenet-r
J.5 ADDITIONAL IMAGE GENERATORS
1. Efficient-VDVAE (Hazami et al., 2022) https://github.com/Rayhane-mamah/Efficient-VDVAE
2. Improved DDPM (Nichol & Dhariwal, 2021) https://github.com/open-mmlab/mmgeneration/tree/master/configs/improved_ddpm
3. ADM (Dhariwal & Nichol, 2021) https://github.com/openai/guided-diffusion
4. StyleGAN-XL (Sauer et al., 2022) https://github.com/autonomousvision/stylegan_xl
K LEADERBOARDS FOR ROBUST IMAGE MODEL
We launch leaderboards for robust image models. The goals of these leaderboards are as follows:
• To keep track of the state of the art on each adversarial vision task and of new model architectures under our dynamic evaluation process.
• To see the comparison of robust vision models at a glance (e.g., performance, speed, size, etc.).
• To access their research papers and implementations on different frameworks.
We offer a sample of the robust ImageNet classification leaderboard in supplementary materials.
L ADDITIONAL COUNTERFACTUAL IMAGE SAMPLES
In Figure 5, we provide additional counterfactual images generated according to each model. We make similar observations to Section 4.3. First, the generated counterfactual images exhibit diversity covering many non-causal factors of the data, e.g., texture, shape, and style. Second, our method can recognize model properties and automatically generate hard counterfactual images tailored to the evaluation.
In addition, the generated images show a reasonable transferability in Table 4, indicating that our method can be used in a broader scope: we can also leverage the method to generate a static set of images and establish a benchmark dataset to help the development of robustness methods. Therefore, we also offer two static benchmarks in the supplementary materials, generated based on a CNN architecture (ConvNeXt) and a transformer variant (ViT), respectively. | 1. What is the focus of the paper regarding image classification models' evaluation protocols?
2. What are the strengths of the proposed approach, particularly in addressing overfitting benchmarks and evaluating robustness?
3. Do you have any concerns or questions regarding the scoring function α in the image generation algorithm?
4. Why do OOR values seem to be >10, and is there a '*100' missing?
5. How does the reviewer assess the diversity of images generated by the method?
6. Is OOR intended to provide a ranking of robust models, and what are the limitations of OOR regarding the threat model being used?
7. How would OOR behave when using different generators but the same oracle?
8. What are the major limitations of the paper, such as the dependence on static benchmarks and the potential impact of the generator and oracle choices?
9. What are the typos or minor issues in the paper's writing that should be addressed? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper introduces an evaluation protocol to study robustness of image classification models. The protocol essentially creates a new test set depending on the model being evaluated and the dataset it was trained on. This, the authors argue, would reduce dependence on static benchmarks and create a “dynamic evaluation protocol”. However, this creates an issue where robustness of models cannot be directly compared since they no longer have the same test set. An oracle (CLIP) is introduced which helps us compare robustness fairly.
The authors provide an algorithm to generate new, ‘perturbed’ images. We use a generic generator model (VQGAN in the paper) to perturb the clean image, then check whether this perturbation has the same label as the original label (using the oracle). This step is repeated several times to stay within a budget.
The above algorithm applied to images in the clean dataset produces perturbed images which act as a test set. This test set depends on the choice of the model being evaluated, oracle and generator. Therefore, it is a general, dynamic way of evaluating robustness. The authors also introduce a new metric called oracle-oriented robustness (OOR) = (Counterfactual acc)/(standard acc). This is used to measure the robustness gap to the oracle.
MNIST, CIFAR-10 and ImageNet are used for experiments with several vanilla image classification models (ResNet etc) and some models with claimed robustness properties (AugMix, DeepAugment etc).
Strengths And Weaknesses
Strengths:
Overfitting benchmarks and evaluating robustness are important problems to the community.
The authors have used a large selection of classification models to test their hypothesis.
Comprehensive ablation studies on the oracle and the generator.
The paper is fairly simple to understand and concepts are explained well.
Weaknesses:
The authors introduce a scoring function α in their image generation algorithm which guides the generation process but there is no concrete example given about what it is. I’m assuming this scoring function (at least the one used in the paper) essentially maximizes the loss of a given image in the direction of a different class. This is the same as Santurkar et al. where they use the inner maximization problem to generate images using a classifier.
I was under the impression that OOR values would be <1 since standard accuracy would usually be higher than counterfactual accuracy but all values in the paper seem to be > 10 at least. Is there a ‘*100’ missing?
The authors claim that images generated using their method are diverse. However, no diversity metrics are given. Since there is a computational budget, a lack of diversity of images could artificially inflate the robustness of a model.
Is OOR supposed to give a ranking of robust models? I ask because in Table 6 in Appendix D, a vanilla model has higher OOR than PGD (so more robust than PGD). I find this hard to believe. This must mean that OOR specifically depends on the threat model being used. The authors should be more explicit about this limitation.
It is unclear how OOR behaves when you have different generators but the same oracle.
Since we are using the oracle, any gaps in the robustness of the oracle will also show up in the model being evaluated. The authors, to their credit, mention this in Section 5 but it is still a major limitation.
Clarity, Quality, Novelty And Reproducibility
This work is original to my knowledge and there are no major issues with the writing.
Typo: See section 4.4, bullet 3 “robsutness” |
ICLR | Title
Oracle-oriented Robustness: Robust Image Model Evaluation with Pretrained Models as Surrogate Oracle
Abstract
Machine learning has demonstrated remarkable performances over finite datasets, yet whether the scores over the fixed benchmarks can sufficiently indicate the model’s performances in the real world is still in discussion. In reality, an ideal robust model will probably behave similarly to the oracle (e.g., the human users), thus a good evaluation protocol is probably to evaluate the models’ behaviors in comparison to the oracle. In this paper, we introduce a new robustness measurement that directly measures the image classification model’s performance compared with a surrogate oracle. Besides, we design a simple method that can accomplish the evaluation beyond the scope of the benchmarks. Our method extends the image datasets with new samples that are sufficiently perturbed to be distinct from the ones in the original sets, but are still bounded within the same causal structure the original test image represents, constrained by a surrogate oracle model pretrained with a large amount of samples. As a result, our new method will offer us a new way to evaluate the models’ robustness performances, free of limitations of fixed benchmarks or constrained perturbations, although scoped by the power of the oracle. In addition to the evaluation results, we also leverage our generated data to understand the behaviors of the model and our new evaluation strategies.
1 INTRODUCTION
Machine learning has achieved remarkable performance on various benchmarks. For example, the recent successes of multiple pretrained models (Bommasani et al., 2021; Radford et al., 2021), with the power gained through billions of parameters and samples from the entire internet, have demonstrated human-parallel performance in understanding natural language (Brown et al., 2020) and arguably even human-surpassing performance in understanding the connections between language and images (Radford et al., 2021). Even within the scope of fixed benchmarks, machine learning has shown strong numerical evidence that the prediction accuracy on specific tasks can place as high on the leaderboard as a human (Krizhevsky et al., 2012; He et al., 2015; Nangia & Bowman, 2019), suggesting multiple application scenarios for these methods.
However, these methods, once deployed in the real world, often underdeliver on the promises made through the benchmark datasets (Edwards, 2019; D’Amour et al., 2020), usually because these benchmark datasets, typically i.i.d, cannot sufficiently represent the diversity of samples a model will encounter after being deployed in practice.
Fortunately, multiple lines of study have aimed to embrace this challenge, and most of these works are proposing to further diversify the datasets used at the evaluation time. We notice these works mostly fall into two main categories: (1) the works that study the performances over testing datasets generated by predefined perturbation over the original i.i.d datasets, such as adversarial robustness (Szegedy et al., 2013; Goodfellow et al., 2015) or robustness against certain noises (Geirhos et al., 2019; Hendrycks & Dietterich, 2019; Wang et al., 2020b); and (2) the works that study the performances over testing datasets that are collected anew with a procedure/distribution different from the one for training sets, such as domain adaptation (Ben-David et al., 2007; 2010) and domain generalization (Muandet et al., 2013).
Both of these lines, while pushing the study of robustness evaluation further, have their own advantages and limitations as a tradeoff in how they guarantee that the underlying causal structure of evaluation samples will be the same as that of the training samples: perturbation-based evaluations usually maintain the causal structure by predefining the perturbations to lie within a set of operations that will not alter the image semantics when applied, such as ℓ-norm ball constraints (Carlini et al., 2019), or texture (Geirhos et al., 2019) and frequency-based (Wang et al., 2020b) perturbations; on the other hand, new-dataset-based evaluations can maintain the causal structure by soliciting the efforts of human annotators to construct datasets with the same semantics but significantly different styles (Hendrycks et al., 2021b; Hendrycks & Dietterich, 2019; Wang et al., 2019; Gulrajani & Lopez-Paz, 2020; Koh et al., 2021; Ye et al., 2021). More details of these lines, their advantages and limitations, and how our proposed evaluation protocol contrasts with them are discussed in the next section.
In this paper, we investigate how to diversify the robustness evaluation datasets to make the evaluation results credible and representative. As shown in Figure 1, we aim to integrate the advantages of the above two directions by introducing a new protocol to generate evaluation datasets that can automatically perturb the samples to be sufficiently different from existing test samples, while maintaining the underlying unknown causal structure with respect to an oracle (we use a CLIP model in this paper). Based on the new evaluation protocol, we introduce a new robustness measurement that directly measures the robustness compared with the oracle. With our proposed evaluation protocol and metric, we present a study of current robust machine learning techniques to identify the robustness gap between existing models and the oracle. This is particularly important if the goal of a research direction is to produce models that function reliably, with performance comparable to the oracle.
Therefore, our contributions in this paper are three-fold:
• We introduce a new robustness measurement that directly measures the robustness gap between models and the oracle.
• We introduce a new evaluation protocol to generate evaluation datasets that can automatically perturb the samples to be sufficiently different from existing test samples, while maintaining the underlying unknown causal structure.
• We leverage our evaluation metric and protocol to offer a study of current robustness research to identify the robustness gap between existing models and the oracle. Our findings further bring us understandings and conjectures of the behaviors of the deep learning models.
2 BACKGROUND
2.1 CURRENT ROBUSTNESS EVALUATION PROTOCOLS
The evaluation of machine learning models in non-i.i.d scenarios has been studied for more than a decade, and one of the pioneering areas is probably domain adaptation (Ben-David et al., 2010). In domain adaptation, the community trains the model on data from one distribution and tests the model with samples from a different distribution; in domain generalization (Muandet et al., 2013), the community trains the model on data from several related distributions and tests the model with samples from yet another distribution. To be more specific, a popular benchmark dataset used in domain generalization studies is the PACS dataset (Li et al., 2017), which consists of images from seven labels and four different domains (photo, art, cartoon, and sketch); the community studies the empirical performance of models trained on three of the domains and tested on the remaining one. To facilitate the development of cross-domain robust image classification, the community has introduced several benchmarks, such as PACS (Li et al., 2017), ImageNet-A (Hendrycks et al., 2021b), ImageNet-C (Hendrycks & Dietterich, 2019), ImageNet-Sketch (Wang et al., 2019), and collective benchmarks integrating multiple datasets such as DomainBed (Gulrajani & Lopez-Paz, 2020), WILDS (Koh et al., 2021), and OOD Bench (Ye et al., 2021).
While these datasets clearly maintain the underlying causal structure of the images, a potential issue is that these evaluation datasets are fixed once collected. Thus, if the community relies on these fixed benchmarks repeatedly to rank methods, eventually the selected best method may not be a true reflection of the world, but a model that can fit certain datasets exceptionally well. This phenomenon has been discussed by several textbooks (Duda et al., 1973; Friedman et al., 2001). While recent efforts in evaluating collections of datasets (Gulrajani & Lopez-Paz, 2020; Koh et al., 2021; Ye et al., 2021) might alleviate the above potential hazards of “model selection with test set”, a dynamic process of generating evaluation datasets will certainly further mitigate this issue.
On the other hand, one can also test the robustness of models by dynamically perturbing the existing datasets. For example, one can test the model’s robustness against rotation (Marcos et al., 2016), texture (Geirhos et al., 2019), frequency-perturbed datasets (Wang et al., 2020b), or adversarial attacks (e.g., ℓp-norm constraint perturbations) (Szegedy et al., 2013). While these tests do not require additionally collected samples, these tests typically limit the perturbations to be relatively well-defined (e.g., a texture-perturbed cat image still depicts a cat because the shape of the cat is preserved during the perturbation).
While this perturbation test strategy leads to datasets dynamically generated during evaluation, it is usually limited by the variations of the perturbations allowed. For example, one may not be able to use significant distortions of the images, in case the depicted object is deformed and the underlying causal structure of the images is distorted. More generally speaking, most of the current perturbation-based test protocols are scoped by the tradeoff that a minor perturbation might not introduce enough variation to the existing datasets, while a significant perturbation will potentially destroy the underlying causal structures.
2.2 ASSUMED DESIDERATA OF ROBUSTNESS EVALUATION PROTOCOL
As a reflection of the previous discussion, we attempt to offer a summary list of three desired properties of the datasets serving as benchmarks for robustness evaluation:
• Stableness in Causal Structure: the most important property of the evaluation datasets is that the samples must represent the same underlying causal structure as the one in the training samples.
• Diversity in Generated Samples: for any other non-causal factors of the data, the test samples should cover as many as possible scenarios of the images, such as texture, styles etc.
• A Dynamic Generation Process: to mitigate selection bias toward techniques that fit the specifics of particular datasets too closely, the evaluation protocol should ideally consist of a dynamic set of samples, preferably generated with the tested model taken into consideration.
Key Contribution: To the best of our knowledge, no other evaluation protocol of model robustness meets the above three properties simultaneously. Thus, we aim to introduce a method that evaluates a model’s robustness while fulfilling the three desiderata above at the same time.
2.3 NECESSITY OF NEW ROBUSTNESS MEASUREMENT IN DYNAMIC EVALUATION PROTOCOL
In previous experiments, we always have two evaluation settings: the “standard” test set and the perturbed test set. When comparing the robustness of two models, prior art ranks the models by their accuracy on the perturbed test set (Geirhos et al., 2019; Hendrycks et al., 2021a; Orhan, 2019; Xie et al., 2020; Zhang, 2019) or by other quantities distinct from accuracy, e.g., inception score (Salimans et al., 2016), effective robustness (Taori et al., 2020), and relative robustness (Taori et al., 2020). These metrics are good starting points for experiments since they are precisely defined and easy to apply to evaluate robustness interventions. In dynamic evaluation protocols, however, these quantities alone cannot provide a comprehensive measure of robustness, as two models are tested on two different “dynamic” test sets. When one model outperforms the other, we cannot distinguish whether one model is actually better than the other, or whether its test set happened to be easier.
The core issue in the preceding example is that we cannot find a consistent robustness measurement across two different test sets. In reality, an ideal robust model will probably behave similarly to the oracle (e.g., the human users). Thus, instead of indirectly comparing models’ robustness with each other, a measurement that directly compares models’ robustness with the oracle is desired.
3 METHOD - COUNTERFACTUAL GENERATION WITH SURROGATE ORACLE
3.1 METHOD OVERVIEW
We use (x,y) to denote an image sample and its corresponding label, use θ(x) to denote the model we aim to evaluate, which takes an input of the image and predicts the label.
Algorithm 1 Counterfactual Image Generation with Surrogate Oracle
Input: (X,Y), θ, g, h, total number of iterations B
Output: generated dataset (X̂,Y)
for each (x,y) in (X,Y) do
    generate x̂0 = g(x,b0)
    if h(x̂0) = y then
        set x̂ = x̂0
        for iteration bt < B do
            generate x̂t = g(x̂t−1,bt)
            if h(x̂t) = y then
                set x̂ = x̂t
            else
                set x̂ = x̂t−1
                exit FOR loop
            end if
        end for
    else
        set x̂ = x
    end if
    use (x̂,y) to construct (X̂,Y)
end for
We use g(x,b) to denote an image generation system, which takes the starting image x as input and generates another image x̂ within the computation budget b. The generation is performed as an optimization process that maximizes a scoring function α(x̂, z), which evaluates the alignment between the generated image and the generation goal z guiding the perturbation process. The higher the score, the better the alignment. Thus, the image generation process is formalized as
x̂ = argmax_{x̂ = g(x,b), b < B} α(g(x,b), z),
where B denotes the allowed computation budget for one sample. This budget constrains the generated image to stay close to the starting image, so that the result does not converge to a trivial solution that simply maximizes the scoring function.
In addition, we choose the model classification loss l(θ(x̂),y) as z. Therefore, the scoring function essentially maximizes the loss of a given image in the direction of a different class.
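As a small illustration (not necessarily our exact implementation), this choice of scoring function can be written as follows:

```python
import torch.nn.functional as F

def score(model, x_hat, y):
    """alpha(x_hat, z) with z = l(theta(x_hat), y): the evaluated model's classification
    loss on the perturbed image; larger values mean a harder counterfactual."""
    return F.cross_entropy(model(x_hat), y)
```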
Finally, to maintain the unknown causal structure of the images, we leverage the power of the pretrained giant models to scope the generation process: the generated images must be considered within the same class by the pretrained model, denoted as h(x̂), which takes in the input of the image and makes a prediction.
Connecting all the components above, the generation process will aim to optimize the following:
x̂ = argmax_{x̂ = g(x,b), b < B, z = l(θ(x̂),y)} α(g(x,b), z), subject to h(x̂) = y.
Our method is generic and agnostic to the choices of the three major components, namely θ, g, and h. For example, the g component can range from basic transformations that add noise or rotate images to a sophisticated method that transfers the style of the images; on the other hand, the h component can range from an approach with high reliability and low efficiency, such as outsourcing the annotation process to human annotators, to the other extreme of simply assuming that a large-scale pretrained model can plausibly stand in for a human.
In the next part, we introduce our concrete choices of g and h, which build upon recent advances in vision research and lead to the empirical results reported later.
3.2 ENGINEERING SPECIFICATION
We use VQGAN (Esser et al., 2021) as the image generation system g(x,b), and the generation is guided by the evaluated model θ(x): its classification loss z = l(θ(x̂),y) on the current perturbed image serves as the scoring function α(x̂, z).
The generation is an iterative process guided by the scoring function: at each iteration, the system adds more style-wise transformations to the result of the previous iteration. The total number of iterations allowed is therefore denoted as the budget B (see Section 4.5 and Appendix H for details on finding the best perturbation). In practice, the value of the budget B is set according to available computational resources.
To guarantee the causal structure of images, we use a CLIP (Radford et al., 2021) model to serve as h, and design the text input of CLIP to be “an image of {class}”. We directly optimize in the VQGAN encoder (latent) space, guided by our scoring function. We show the algorithm in Algorithm 1.
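A minimal sketch of this CLIP-based oracle h, using the public CLIP API with the prompt template above, is shown below; the label list is illustrative and the helper is not our exact implementation.

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
oracle, preprocess = clip.load("ViT-B/32", device=device)
class_names = ["dog", "cat", "bird"]                                    # illustrative label set
text = clip.tokenize([f"an image of {c}" for c in class_names]).to(device)

def h(image_pil):
    """Return the class index CLIP assigns to a (PIL) perturbed image."""
    image = preprocess(image_pil).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = oracle(image, text)
    return logits_per_image.argmax(dim=-1).item()
```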
3.2.1 SPARSE SUBMODEL OF VQGAN FOR EFFICIENT PERTURBATION
While our method functions properly as described above, we notice that the generation process still has a potential limitation: the unbounded perturbation of VQGAN will sometimes perturb the semantics of the images, generating results that are later rejected by the oracle and thus wasting computational effort.
To counter this challenge, we use a sparse variable selection method to analyze the embedding dimensions of VQGAN to identify a subset of dimensions that are mainly responsible for the non-semantic variations.
In particular, with a dataset (X,Y) of n samples, we first use VQGAN to generate a style-transferred dataset (X′,Y). During the generation process, we preserve the latent representations of the input samples after the VQGAN encoder for the original dataset, and the final latent representations before the VQGAN decoder (quantized after the iterations) for the style-transferred dataset. Then, we create a new dataset (E,L) of 2n samples: for each sample (e, l) ∈ (E,L), e is the latent representation of the sample (from either the original or the style-transferred dataset), and l is labelled 0 if the sample comes from the original dataset and 1 if it comes from the style-transferred dataset.
Then, we train an ℓ1-regularized logistic regression model to classify the samples of (E,L). With w denoting the weights of the model, we solve the following problem
argmin_w ∑_{(e,l)∈(E,L)} ℓ(e·w, l) + λ∥w∥₁,
and the sparsity pattern (zero or non-zero) of w informs us which dimensions encode the style.
3.3 MEASURING ROBUSTNESS
Oracle-oriented Robustness (OOR). By design, the causal structure of counterfactual images is maintained by the oracle. Thus, if a model suffers a smaller accuracy drop on the counterfactual images, its predictions are closer to the oracle’s than those of a model with a larger drop. To precisely define OOR, we introduce counterfactual accuracy (CA), the accuracy on the counterfactual images that our generative model successfully produces. As SA may influence CA to some extent, we normalize CA by SA to disentangle the two, yielding OOR:
OOR = (CA / SA) × 100%
In settings where the oracle consists of human annotators, OOR measures the robustness difference between the evaluated model and human perception. In our experimental setting, OOR measures the robustness difference between models trained on fixed datasets (the evaluated model) and a model trained on unfiltered, highly varied, and highly noisy data (the oracle CLIP model).
3.4 THE NECESSITY OF THE SURROGATE ORACLE
At last, we devote a short paragraph to remind some readers that, despite the alluring idea of designing systems that forgo the use of an underlying causal structure or oracle, it has been proved or argued multiple times that it is impossible to create that knowledge with nothing but data, in either the context of machine learning (Locatello et al., 2019; Mahajan et al., 2019; Wang et al., 2021) or causality (Bareinboim et al., 2020; Xia et al., 2021) (Pearl, 2009, Sec. 1.4).
4 EXPERIMENTS - EVALUATION AND UNDERSTANDING OF MODELS
4.1 EXPERIMENT SETUP
We consider four different scenarios, ranging from the basic benchmark MNIST (LeCun et al., 1998), through CIFAR10 (Krizhevsky et al., 2009) and 9-class ImageNet (Santurkar et al., 2019), to the full-fledged 1000-class ImageNet (Deng et al., 2009). For ImageNet, we resize all images to 224 × 224 px. We also center and re-scale the color values with µRGB = [0.485, 0.456, 0.406] and σ = [0.229, 0.224, 0.225]. The total number of iterations allowed (computation budget B) in our evaluation protocol is set to 10. We conduct the experiments on an NVIDIA GeForce RTX 3090 GPU.
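For reference, the preprocessing described above corresponds to a standard torchvision pipeline such as the sketch below (the exact transform in our code base may differ slightly):

```python
from torchvision import transforms

imagenet_preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                     # resize all images to 224 x 224 px
    transforms.ToTensor(),                             # convert to a [0, 1] tensor
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # center and re-scale color values
                         std=[0.229, 0.224, 0.225]),
])
```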
For each experiment, we report the following results:
• Standard Accuracy (SA): reported for reference.
• Validation Rate (VR): the percentage of perturbed images that the oracle validates as maintaining the causal structure.
• Oracle-oriented Robustness (OOR): the robustness of the model relative to the oracle.
4.2 ROBUSTNESS EVALUATION FOR STANDARD VISION MODELS
We consider a large range of models (Appendix J) and evaluate pre-trained variants of a LeNet architecture (LeCun et al., 1998) for the MNIST experiment and a ResNet architecture (He et al., 2016a) for the remaining experiments. For the ImageNet experiment, we also consider pretrained transformer variants, namely ViT (Dosovitskiy et al., 2020), Swin (Liu et al., 2021), Twins (Chu et al., 2021), Visformer (Chen et al., 2021) and DeiT (Touvron et al., 2021), from the timm library (Wightman, 2019). We evaluate the most recent ConvNeXt (Liu et al., 2022) as well. All models are trained on the ILSVRC2012 subset of ImageNet, comprising 1.2 million training images and a total of 1000 classes (Deng et al., 2009; Russakovsky et al., 2015).
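All of these checkpoints are loaded directly from the timm library; a minimal loading sketch is shown below, where the specific model identifiers are illustrative of the variants we evaluate.

```python
import timm

model_names = [
    "resnet50",                        # vanilla CNN baseline
    "vit_base_patch16_224",            # ViT
    "deit_base_patch16_224",           # DeiT
    "swin_base_patch4_window7_224",    # Swin
    "convnext_base",                   # ConvNeXt
]
models = {name: timm.create_model(name, pretrained=True).eval() for name in model_names}
```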
We report our results in Table 1. As expected, these models can barely maintain their performance when tested on data from different distributions, as shown by many previous works (e.g., Geirhos et al., 2019; Hendrycks & Dietterich, 2019; Wang et al., 2020b).
Interestingly, on ImageNet, although both the transformer-variant models and the vanilla CNN-architecture model, i.e., ResNet, attain similar clean-image accuracy, the transformer variants substantially outperform ResNet50 in terms of OOR under our dynamic evaluation protocol. We conjecture that this performance gap partly originates from differences in training setups; more specifically, it may result from the fact that the transformer variants by default use strong data augmentation strategies while ResNet50 uses none of them. These augmentation strategies (e.g., Mixup (Zhang et al., 2017), Cutmix (Yun et al., 2019) and Random Erasing (Zhong et al., 2020)) already naturally introduce out-of-distribution (OOD) samples during training and are therefore potentially helpful for securing model robustness towards data shifts. When equipped with similar data augmentation strategies, the CNN-architecture model ConvNeXt achieves comparable performance in terms of OOR. This hypothesis has also been verified in recent works (Bai et al., 2021; Wang et al., 2022). We offer more discussion of robustness-enhancing methods in Section 4.3.
Besides comparing performance between different standard models, OOR gives us the chance to directly compare models with the oracle. Across all of our experiments, the OOR shows a significant gap between the models and the oracle, which is trained on unfiltered and highly varied data, seemingly suggesting that training with a more diverse dataset would help with robustness. This overarching trend has also been identified in (Taori et al., 2020). However, quantifying when and why training with more data helps is still an interesting open question.
We also notice that the VR tends to differ across datasets. We conjecture this is due to how the oracle model understands the images and labels; more discussion is offered in Section 5.
4.3 ROBUSTNESS EVALUATION FOR ROBUST VISION MODELS
Recently, some techniques have been introduced to cope with corruptions or style shifts. For example, by adapting the batch normalization statistics with a limited number of samples (Schneider et al., 2020), the performance on stylized images (or corrupted images) can be significantly increased. Additionally, some more sophisticated techniques, e.g., AugMix (Hendrycks et al., 2019), have also been widely employed by the community.
To investigate whether these OOD-robust models can still maintain their performance under our dynamic evaluation protocol, we evaluate pretrained ResNet50 models combined with leading methods from the ImageNet-C leaderboard, namely Stylized ImageNet training (SIN; Geirhos et al., 2019), adversarial noise training (ANT; Rusak et al.), a combination of ANT and SIN (ANT+SIN; Rusak et al.), optimized data augmentation using AugMix (Hendrycks et al., 2019), DeepAugment (DeepAug; Hendrycks et al., 2021a), and a combination of AugMix and DeepAugment (DeepAug+AM; Hendrycks et al., 2021a).
The results are displayed in Table 2. Surprisingly, we find that some common-corruption-robust models, i.e., SIN, ANT, and ANT+SIN, fail to maintain their power under our dynamic evaluation protocol. Take the SIN method as an example: its OOR is 42.92, which is even lower than that of a vanilla ResNet50. As these methods are well fitted to the ImageNet-C benchmark, such results verify the weakness of relying on fixed benchmarks to rank methods. The selected best method may not be a true reflection of the real world, but rather a model that fits certain datasets well, which in turn underscores the necessity of our dynamic evaluation protocol.
DeepAug, AugMix and DeepAug+AM perform better than the SIN and ANT methods in terms of OOR because they dynamically perturb the datasets, which alleviates the hazards of “model selection with the test set” to some extent. However, their performance is limited by the variations of the perturbations allowed, resulting in only a marginal improvement over ResNet50 under our evaluation protocol.
In addition, we visualize the counterfactual images generated according to the evaluated style-shift robust models in Figure 2. More results are shown in Appendix L. Specifically, we have the following observations:
Preservation of local textural details. A number of recent empirical findings point to an important role of object textures for CNNs, where object textures are more important than global object shapes for a CNN model to learn (Gatys et al., 2015; Ballester & Araujo, 2016; Gatys et al., 2017; Brendel & Bethge, 2019; Geirhos et al., 2019; Wang et al., 2020b). We notice that our generated counterfactual images may preserve misleading local textural details; the evaluation task then becomes much harder since textures are no longer predictive, but instead a nuisance factor (as desired). For the counterfactual image generated for the DeepAug method (Figure 2f), we produce a skin texture similar to chicken skin, and the fish head becomes more and more chicken-like. The ResNet trained with DeepAug is misled by this corruption.
Generalization to shape perturbations. Moreover, since our attack intensity can be dynamically altered based on the model's gradient while still maintaining the causal structures, the perturbations we produce are strong enough that they are not limited to object textures but can include a certain degree of shape perturbation. As it is acknowledged that networks with a higher shape bias are inherently more robust to many different image distortions and reach higher performance on classification and object recognition tasks, we observe that the counterfactual images generated for the SIN (Figure 2b and Figure 2i) and ANT+SIN (Figure 2d and Figure 2k) methods are shape-perturbed and successfully attack the models.
Recognition of model properties. With the combination of different methods, the generated counterfactual images become more comprehensive. For example, the counterfactual image generated for DeepAug+AM (Figure 2g) preserves the chicken-like head from DeepAug and the skin patterns from AugMix. As our evaluation method does not memorize the models it evaluates, this result reveals that our method can recognize model properties and automatically generate hard counterfactual images to complete the evaluation.
Overall, these visualizations reveal that our dynamic evaluation protocol adjusts its attack strategy based on the properties of each model and automatically generates diversified counterfactual images that complement static benchmarks, i.e., ImageNet-C, to expose weaknesses of the models.
4.4 UNDERSTANDING THE PROPERTIES OF OUR EVALUATION SYSTEM
We continue to investigate several properties of the models in the next couple of sections. To save space, we mainly present the results of the CIFAR10 experiment here and leave the details to the appendix:
• In Appendix B, we explore the transferability of the generated images. The reasonable transferability suggests that our method of generating images can potentially be used in a broader scope: we can also leverage the method to generate a static set of images and establish a benchmark dataset to help the development of robustness methods.
• In Appendix C, we test whether initiating the perturbation process with an adversarial example will further degrade the OOR. We find that initiating with FGSM adversarial examples (Goodfellow et al., 2015) barely affects the OOR.
• In Appendix D, we compare the vanilla model to a model trained by PGD (Madry et al., 2017). We find that the adversarially trained and vanilla-trained models process the data differently. However, their robustness weak spots are exposed to a similar degree by our test system.
• In Appendix E, we explore the possibility of improving the evaluated robustness by augmenting the training data with images generated by our evaluation system. However, due to the required computational load, we only use a static set of generated images to train the model, and the results suggest that a static set of augmentation images cannot sufficiently robustify the model against our evaluation system.
• We also notice that the generated images tend to shift the color of the original images, so we test the robustness of grayscale models in Appendix F; the results suggest that removing the color channel will not improve robustness performance.
4.5 EXPERIMENTS REGARDING METHOD CONFIGURATION
Generator Configuration. We conduct an ablation study on the generator choice to confirm the performance rankings in Table 1 and Table 2. We consider several image generator architectures, namely variational autoencoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014) like Efficient-VDVAE (Hazami et al., 2022), diffusion models (Sohl-Dickstein et al., 2015) like Improved DDPM (Nichol & Dhariwal, 2021) and ADM (Dhariwal & Nichol, 2021), and GANs like StyleGAN-XL (Sauer et al., 2022). As shown in Table 3, the validation rate of the oracle stays stable across all the image generators. We find that the conclusion is consistent under different generator choices, which validates the correctness of our conclusions in Section 4.2 and Section 4.3.
Sparse VQGAN. In the sparse VQGAN experiments, we find that only 0.69% of the dimensions are highly correlated with style. Therefore, we mask the remaining 99.31% of the dimensions to create a sparse submodel of VQGAN for efficient perturbation. The running time is reduced by 12.7% on 9-class ImageNet and 28.5% on ImageNet, respectively. Details can be found in Appendix G.
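Concretely, the sparse submodel only updates the latent dimensions selected by the procedure in Section 3.2.1; a sketch of such a masked update step is shown below (the function and argument names are ours, not from the released code):

```python
import torch

def masked_latent_step(z, grad, style_dims, step_size=0.1):
    """Apply the perturbation step only to the style-related latent dimensions,
    leaving the remaining (semantics-carrying) dimensions of z untouched."""
    mask = torch.zeros_like(z)
    mask.view(-1)[style_dims] = 1.0            # 1 on style dimensions, 0 elsewhere
    return z + step_size * grad.sign() * mask
```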
Step size. We experiment with the perturbation step size to find the best perturbation under the computation budget B. We find that a step size that is too small or too large leads to weak perturbations, while stronger image perturbations are generated when the step size stays in a mild range, i.e., 0.1 to 0.2. Details of our experiments on step size can be found in Appendix H.
5 DISCUSSION AND CONCLUSION
Potential limitation. We notice that the CLIP model has been influenced by the imbalanced sample distribution across the internet. We provide the details of the test on 9-class ImageNet for a vanilla ResNet-18 in Appendix I. We observe that the oracle model can tolerate a much more significant perturbation over samples labelled as Dog (VR 0.95) or Cat (VR 0.94) than over samples labelled as Primate (VR 0.48). The OOR values for Primate images are much higher than for other categories, creating an illusion that the evaluated models are robust against perturbed Primate images. However, such an illusion is caused by a limitation of the pretrained oracle, which can only handle slightly perturbed samples.
The usage of the oracle. Is it cheating to use an oracle? The answer might depend on one's perspective, but we hope to remind some readers that, in general, it is impossible to maintain the underlying causal structure during perturbation without prior knowledge (Locatello et al., 2019; Mahajan et al., 2019; Wang et al., 2021; Bareinboim et al., 2020; Xia et al., 2021) (Pearl, 2009, Sec. 1.4).
Conclusion. To conclude, in this paper we first summarized the common practices of model evaluation strategies for robust vision machine learning. We then discussed three desiderata for a robustness evaluation protocol. Further, we offered a simple method that fulfills these three desiderata at the same time, serving the purpose of evaluating vision models' robustness beyond generic i.i.d. benchmarks, without requiring prior knowledge of the underlying causal structure depicted by the images, although relying on a plausible oracle.
ETHICS STATEMENT
The primary goal of this paper is to introduce a new evaluation protocol for vision machine learning research that can generate sufficiently perturbed samples from the original samples while maintaining the causal structures by assuming an oracle. Thus, we can introduce significant variations of the existing data while being free from additional human effort. With our approach, we hope to renew the benchmarks for current robustness evaluation, offer understanding of the behaviors of deep vision models, and potentially facilitate the development of more truly robust models. Increasing the robustness of vision models can enhance their reliability and safety, which leads to trustworthy artificial intelligence and contributes to a wide range of application scenarios (e.g., manufacturing automation, surveillance systems, etc.). Manufacturing automation can improve production efficiency, but may also trigger social issues related to job losses and industrial restructuring. Advanced surveillance systems are conducive to improving social security, but may also raise public concerns about personal privacy violations.
We encourage further work to understand the limitations of machine vision models in OOD settings. More robust models carry the potential risk of automation bias, i.e., an undue trust in vision models. However, even if models are robust against corruptions in finite OOD datasets, they might still quickly fail on the massive generic perturbations existing in the real-world data space, i.e., the perturbations offered by our approach. Understanding under what conditions model decisions can be deemed reliable or not is still an open research question that deserves further attention.
REPRODUCIBILITY STATEMENT
Please refer to Appendix J for the references of all models we evaluated and links to the corresponding source code.
A NOTES ON THE EXPERIMENTAL SETUP
A.1 NOTES ON MODELS
Note that we only re-evaluate existing model checkpoints and hence do not perform any hyperparameter tuning for the evaluated models. Since the protocol works with a small amount of GPU resources, our model evaluations are done on a single NVIDIA GeForce RTX 3090 GPU.
A.2 HYPERPARAMETER TUNING
Our method is generally parameter-free except for the computation budget and the perturbation step size. In our experiments, the computation budget is the maximum iteration number of the sparse VQGAN. We set the predefined value to 10, as it guarantees a sufficient degree of perturbation with acceptable time cost. We provide the experiment for the step size configuration in Section 4.5.
B TRANSFERABILITY OF GENERATED IMAGES
We first study whether our generated images are model specific, since the generation of the images involves the gradient of the original model. We train several architectures, namely EfficientNet (Tan & Le, 2019), MobileNet (Howard et al., 2017), SimpleDLA (Yu et al., 2018), VGG19 (Simonyan & Zisserman, 2014), PreActResNet (He et al., 2016b), GoogLeNet (Szegedy et al., 2015), and DenseNet121 (Huang et al., 2017) and test these models with the images. We also train another ResNet following the same procedure to check the transferability across different runs in one architecture.
Table 4: Transferability performance.

Model          SA     OOR
ResNet         95.38  54.17
EfficientNet   91.37  68.48
MobileNet      91.63  68.72
SimpleDLA      92.25  66.16
VGG            93.54  70.57
PreActResNet   94.06  67.25
ResNet         94.67  66.23
GoogLeNet      95.06  66.68
DenseNet       95.26  66.43
Table 4 shows a reasonable transferability of the generated images, as the OOR values are all lower than the SA values, although we also observe an improvement in OOR when the images are tested on the new models. These results suggest that our method of generating images can potentially be used in a broader scope: we can also leverage the method to generate a static set of images and establish a benchmark dataset to help the development of robustness methods.
In addition, our results might potentially help mitigate a debate on whether more accurate architectures are naturally more robust: on one hand, there are results showing that more accurate architectures indeed lead to better empirical performance on certain (usually fixed) robustness benchmarks (Rozsa et al., 2016; Hendrycks & Dietterich, 2019); on the other hand, some counterpoints suggest the higher robustness numbers arise only because these models capture more non-robust features that also happen to exist in the fixed benchmarks (Tsipras et al., 2018; Wang et al., 2020b; Taori et al., 2020). Table 4 shows some examples supporting the latter argument: in particular, we notice that VGG, while ranked in the middle of the accuracy ladder, interestingly stands out when tested with generated images. These results continue to support our argument that a dynamic robustness test scenario can help reveal more properties of the model.
C INITIATING WITH ADVERSARIAL ATTACKED IMAGES
Since our method uses the gradient of the evaluated model, which is reminiscent of gradient-based attack methods in the adversarial robustness literature, we test whether initiating the perturbation process with an adversarial example will further degrade the accuracy.
We first generate the images with the FGSM attack (Goodfellow et al., 2015). Table 5 shows that initiating with the FGSM adversarial examples barely affects the OOR, which is probably because the major style-wise perturbation erases the imperceptible perturbations that the adversarial examples introduce.
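For completeness, the FGSM initialization is the standard one-step attack; a minimal sketch is given below, with the epsilon value chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_init(model, x, y, epsilon=8 / 255):
    """One-step FGSM (Goodfellow et al., 2015) used only to initialize the images
    before our style-wise perturbation process."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```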
D ADVERSARIALLY ROBUST MODELS
With evidence suggesting that adversarially robust models are more aligned with human perception (Engstrom et al., 2019; Zhang & Zhu, 2019; Wang et al., 2020b), we compare the vanilla model to a model trained by PGD (Madry et al., 2017) (ℓ∞ norm smaller than 0.03).
Table 6: Performance comparison of the vanilla model and the PGD-trained model.

Data  Model  SA     OOR
Van.  Van.   95.38  57.79
Van.  PGD    85.70  95.96
PGD   Van.   95.38  81.73
PGD   PGD    85.70  66.18
As shown in Table 6, the adversarially trained and vanilla-trained models indeed process the data differently: the transferability of the generated images between these two regimes barely holds. In particular, the PGD model can almost maintain its performance when tested with the images generated by the vanilla model.
However, despite the differences, the PGD model's robustness weak spots are exposed by our test system to a similar degree as the vanilla model's: the OOR of the vanilla model and the PGD model are only 57.79 and 66.18, respectively. We believe this result further supports our belief that the robustness test needs to be a dynamic process that generates images conditioned on the model under test, and thus further validates the importance of our contribution.
E AUGMENTATION THROUGH STATIC ADVERSARIAL TRAINING
Intuitively, inspired by the success of adversarial training (Madry et al., 2017) in defending models against adversarial attacks, a natural method to improve the empirical performances under our new test protocol is to augment the training data with counterfactual training images generated by the same process. We aim to validate the effectiveness of this method here.
However, the computational load of the generation process makes the standard adversarial training strategy impractical, so we can only obtain one copy of the counterfactual training samples. Fortunately, we notice that some recent advances in training with data augmentation can help learn robust representations with a limited set of augmented samples (Wang et al., 2020a), which we use here.
We report our results in Table 7. The first thing we observe is that the model trained with the augmentation data produced by our approach preserves a relatively higher performance (OOR 89.10) when tested with the counterfactual images generated according to the vanilla model. Since we have shown in the main manuscript that the counterfactual samples have reasonable transferability, this result indicates the robustness gained by training with the counterfactual images generated by our approach.
In addition, when tested with the counterfactual images generated according to the augmented model, both models’ performance would drop significantly, which again indicates the effectiveness of our approach.
F GRAYSCALE MODELS
Our previous visualization suggests that a shortcut the counterfactual generation system can take is to significantly shift the color of the images, against which a grayscale model should easily maintain its performance. Thus, we train a grayscale model by changing the ResNet input channels to 1 and transforming the input images to grayscale before feeding them into the model. We report the results in Table 8.
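A sketch of this grayscale setup is given below; it uses the torchvision ResNet for illustration, whereas our CIFAR10 experiments use the usual CIFAR-style ResNet variant.

```python
import torch.nn as nn
from torchvision import models, transforms

# Replace the first convolution so the network accepts a single input channel.
gray_resnet = models.resnet18(num_classes=10)
gray_resnet.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

# Convert the input images to grayscale before feeding them to the model.
gray_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),
])
```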
Interestingly, we notice that the grayscale model cannot defend against the shift introduced by our system simply by ignoring the color information. On the contrary, it seems to encourage our system to generate more counterfactual images that lower its performance.
In addition, we visualize some counterfactual images generated according to each model and show them in Figure 3. We can see some evidence that the grayscale model forces the generation system to focus more on the shape of the object and less on the color of the images. We find it particularly interesting that our system sometimes generates images differently for different models, while the resulting images deceive the respective models into making the same prediction.
G EXPERIMENTS TO SUPPORT SPARSE VQGAN
We generate the flattened latent representations of input images after the VQGAN Encoder with negative labels. Following Algorithm 1, we generate the flattened final latent representations before the VQGAN decoder with positive labels. Altogether, we form a binary classification dataset where the number of positive and negative samples is balanced. The positive samples are the latent representations of counterfactual images while the negative samples are the latent representations of input images. We set the split ratio of train and test set to be 0.8 : 0.2. We perform the explorations on various datasets, i.e. MNIST, CIFAR-10, 9-class ImageNet and ImageNet.
The classification model we consider is LASSO1, as it enables automatic feature selection with strong interpretability. We set the regularization strength to 36.36. We adopt saga (Defazio et al., 2014) as the solver used in the optimization process. The classification results are shown in Table 9.
1Although LASSO is originally a regression model, we probabilize the regression values to get the final classification results.
We observe that the coefficient matrix of features can be far sparser than we expect. Take the result of 9-class ImageNet as an example: surprisingly, we find that almost 99.31% of the dimensions on average can be discarded when making judgements. We argue the preserved 0.69% of the dimensions are highly correlated with the VQGAN perturbation. Therefore, we keep the corresponding 99.31% of the dimensions unchanged and only let the remaining 0.69% of the dimensions participate in the computation. Our computation load is significantly reduced while still maintaining competitive performance compared with the unmasked version2.
We conduct the run-time experiments on a single NVIDIA GeForce RTX 3090 GPU. Following our experiment setting, we evaluate a vanilla ResNet-18 on 9-class ImageNet and a vanilla ResNet-50 on ImageNet. As shown in Table 10, the run-time on ImageNet can be reduced by 28.5% with our sparse VQGAN. Compared with the large fraction of masked dimensions (i.e., 99.31%), we attribute the relatively modest run-time improvement (i.e., 12.7% on 9-class ImageNet, 28.5% on ImageNet) to the fact that we have to perform mask and unmask operations each time we calculate the model gradient, which partly offsets the efficiency gains brought by the sparse VQGAN.
H PARAMETER STUDY ON STEP SIZE
We conduct the parameter study of the perturbation step size for our evaluation system on the CIFAR10 dataset. Specifically, we tune the step size in {0.01, 0.05, 0.1, 0.2, 0.5}. The maximum iteration count (computation budget B) is set to 10. All results are produced with ResNet18 and averaged over five runs.
As shown in Figure 4, we observe that when the step size is too small, i.e., 0.01 or 0.05, sufficient perturbation strength cannot be achieved within the predefined maximum iterations, resulting in a higher OOR score. A large step size also leads to a higher OOR score: when the step size is large, i.e., 0.5, the perturbation is likely to stop after only a few iterations, which can also lead to a small perturbation strength compared with the scenario where we use a relatively small step size but more iterations. When the step size is 0.01, the model seems to achieve oracle-parallel performance (OOR 99.66); however, such OOR values are meaningless due to the small perturbation strength. Moreover, when the step size stays in a mild range, i.e., 0.1 to 0.2, stronger image perturbations are generated, while the performance in this range stays constant. Therefore, we choose a step size of 0.1 for the experiments.
2We note that the overlap of the preserved dimensions across datasets is not high, which means that we need to re-identify these dimensions when facing new datasets.
I ANALYSIS OF SAMPLES THAT ARE MISCLASSIFIED BY THE MODEL
We present the results on 9-class ImageNet experiment to show the details for each category.
Table 11 shows that the VR values for most categories are still higher than 80%, and some even reach 95%, which means we produce a sufficient number of counterfactual images. However, we notice that the VR value for Primate images is much lower than for other categories, indicating that around 52% of the perturbed Primate images are blocked by the oracle. We have discussed this category imbalance issue in Section 5.
As shown in Table 11, the OOR value for each category drops significantly compared with the SA value, indicating the weakness of the trained models. An interesting finding is that the OOR value for Primate images is much higher than for other categories, given the fact that more perturbed Primate images are blocked by the oracle. We attribute this to a limitation of foundation models: as the CLIP model has been influenced by the imbalanced sample distribution across the Internet, it can only handle slightly perturbed samples well. Therefore, the preserved counterfactual images are those that can be easily classified by the models.
J LIST OF EVALUATED MODELS
The following lists contain all models we evaluated on the various datasets, with references and links to the corresponding source code.
J.1 PRETRAINED VQGAN MODEL
We use the checkpoint of vqgan_imagenet_f16_16384 from https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/
J.2 PRETRAINED CLIP MODEL
Model weights of ViT-B/32 and usage code are taken from https://github.com/openai/CLIP
J.3 TIMM MODELS TRAINED ON IMAGENET (WIGHTMAN, 2019)
Weights are taken from https://github.com/rwightman/pytorch-image-models/tree/master/timm/models
1. ResNet50 (He et al., 2016a)
2. ViT (Dosovitskiy et al., 2020)
3. DeiT (Touvron et al., 2021)
4. Twins (Chu et al., 2021)
5. Visformer (Chen et al., 2021)
6. Swin (Liu et al., 2021)
7. ConvNeXt (Liu et al., 2022)
J.4 ROBUST RESNET50 MODELS
1. ResNet50 SIN+IN (Geirhos et al., 2019) https://github.com/rgeirhos/texture-vs-shape
2. ResNet50 ANT (Rusak et al.) https://github.com/bethgelab/game-of-noise
3. ResNet50 ANT+SIN (Rusak et al.) https://github.com/bethgelab/game-of-noise
4. ResNet50 Augmix (Hendrycks et al., 2019) https://github.com/google-research/augmix
5. ResNet50 DeepAugment (Hendrycks et al., 2021a) https://github.com/hendrycks/imagenet-r
6. ResNet50 DeepAugment+Augmix (Hendrycks et al., 2021a) https://github.com/hendrycks/imagenet-r
J.5 ADDITIONAL IMAGE GENERATORS
1. Efficient-VDVAE (Hazami et al., 2022) https://github.com/Rayhane-mamah/Efficient-VDVAE
2. Improved DDPM (Nichol & Dhariwal, 2021) https://github.com/open-mmlab/mmgeneration/tree/master/configs/improved_ddpm
3. ADM (Dhariwal & Nichol, 2021) https://github.com/openai/guided-diffusion
4. StyleGAN-XL (Sauer et al., 2022) https://github.com/autonomousvision/stylegan_xl
K LEADERBOARDS FOR ROBUST IMAGE MODEL
We launch leaderboards for robust image models. The goals of these leaderboards are as follows:
• To keep track of the state of the art on each adversarial vision task and of new model architectures with our dynamic evaluation process.
• To compare robust vision models at a glance (e.g., performance, speed, size, etc.).
• To provide access to their research papers and implementations in different frameworks.
We offer a sample of the robust ImageNet classification leaderboard in supplementary materials.
L ADDITIONAL COUNTERFACTUAL IMAGE SAMPLES
In Figure 5, we provide additional counterfactual images generated according to each model. We have similar observations to Section 4.3. First, the generated counterfactual images exhibit diversity such that many non-causal factors of the data are covered, e.g., texture, shape and style. Second, our method can recognize the model properties and automatically generate hard counterfactual images to complete the evaluation.
In addition, the generated images show a reasonable transferability in Table 4, indicating that our method can potentially be used in a broader scope: we can also leverage the method to generate a static set of images and establish a benchmark dataset to help the development of robustness methods. Therefore, we also offer two static benchmarks in the supplementary materials, generated based on a CNN architecture (ConvNeXt) and a transformer variant (ViT), respectively.
1. What is the main contribution of the paper regarding model robustness evaluation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to mitigate selection bias?
3. How effective is the dynamic data augmentation method in improving Oracle-oriented robustness (OOR)?
4. What are the limitations of using a single oracle model for evaluating OOR?
5. How does the sparse submodel of VQGAN affect the diversity of generated examples?
6. Are there any potential biases introduced by the choice of generative models or oracle models?
Summary Of The Paper
This paper questions the validity of fixed benchmarks in evaluating model robustness for real-world applications where the i.i.d. assumption may not hold. Instead, the authors propose to use dynamically perturbed test examples to evaluate each model separately. The perturbed examples must meet two requirements. First, they can still be correctly classified by an oracle model, so that most of the underlying causal structure of the original examples is preserved. Second, the examples are perturbed in an iterative manner that maximizes the loss induced by these examples for each given model. A new metric, namely oracle-oriented robustness (OOR), is also proposed for this new evaluation protocol. The OOR of a model is defined as the accuracy of that model on the perturbed examples over the accuracy on the original test examples. With this new evaluation protocol, experiments on ImageNet suggest that strong data augmentation is a key indicator of high OOR regardless of the model architecture (CNN or ViT). In particular, dynamic data augmentation usually leads to higher OOR than pre-augmented data. Finally, another main empirical finding reported in the paper is that the choice of the generative model used for image perturbation does not affect OOR very much.
Strengths And Weaknesses
Strengths
The paper studies an important open problem: how to measure model robustness in a general out-of-distribution setting.
The idea of generating hard test examples for each individual model during evaluation and measuring the robustness of a model with respect to an oracle model is novel.
The paper is overall well-written and easy-to-understand.
Weaknesses
In section 2.2, the authors suggest that the dynamic generation process helps avoid the selection bias of the models. However, how much of the selection bias is avoided is unclear. Actually, it is arguable whether current models are affected by selection bias to a degree that requires any special treatment (see [1]). Besides, the choice of the generative model and the oracle model can also introduce selection bias, albeit less directly than fixed benchmarks do. So, the proposed protocol is at best “mitigating” the issue rather than “avoiding” it as stated in the paper. This weakens the significance of the paper.
Throughout all the experiments on ImageNet, the validation rate (VR) is almost the same for models of different architectures and data augmentation techniques. This begs the question of whether the generated examples are sufficiently diverse so that they really expose different problems of each model. The provided examples show some visual variations but are not conclusive. Imagine that one model is much more robust than another; then, intuitively, the VR of the robust model should be lower than the VR of the other model because the generated examples for the robust model should be harder for the oracle model. But this is not the case for the models considered in the paper. Moreover, if the oracle model (e.g., CLIP) itself has some weaknesses, then a model that has the same weaknesses would probably have higher OOR than another model that does not share those weaknesses, given that their VR is the same. The paper does not mention if the OOR is sensitive to the choice of the oracle model. All these points are currently unclear and require further investigation.
Regarding the sparse submodel of VQGAN, the authors first use VQGAN to generate a style-transferred dataset (the first sentence of the third paragraph of section 3.2.1), which is then utilized to find out the feature dimensions related to style. This appears to be circular reasoning. If the style dimensions are unknown and need to be found out, how did the VQGAN generate a style-transferred dataset in the first place? What if there are also significant semantic changes? In that case, the pruned dimensions may also contain many relevant style dimensions. The paper reports that 99.31% of the dimensions are pruned on average, but this only leads to a 28.5% runtime reduction on ImageNet. It is not clear if the sparse submodel is efficient enough considering the expense of a possible reduction in the diversity of the generated examples.
[1] Recht et al. Do ImageNet Classifiers Generalize to ImageNet?
Clarity, Quality, Novelty And Reproducibility
Clarity
The paper is mostly clear, although there are still some minor points that need to be improved:
The text of Fig. 1 is too small.
In section 3.2.1, the authors use lowercase "l" to denote the label, which is easily confused with the number 1 and the loss "l" (which is also lowercase).
Typo in the last paragraph of section 3.2: missing a double quote after “an image of {class label}”.
Typo in the last line of section 3.2.1: redundant word “the”.
Typo in the first sentence of the third paragraph of section 4.2: “vanillar”.
Quality
The quality of the paper has room for improvement, since some parts of the paper seem to have been written in a hurry. For example,
The quality of Fig. 1 is not very good.
The choice of math notations and the formatting are somewhat casual.
As an empirical paper, the experiments can be more comprehensive.
Novelty
The paper presents several novel ideas which are interesting and worth exploring. |
3.2 ENGINEERING SPECIFICATION
We use VQGAN (Esser et al., 2021) as the image generation system g(x,b), and the g(x,b) is boosted by the evaluated model θ(x) serving as the α(x̂, z) to guide the generation process, where z = l(θ(x̂),y) is the model classification loss on current perturbed images.
The generation is an iterative process guided by the scoring function: at each iteration, the system add more style-wise transformations to the result of the previous iteration. Therefore, the total number of iterations allowed is denoted as the budget B (see Section 4.5 and Appendix H for details of finding the best perturbation). In practice, the value of budget B is set based on the resource concerns.
To guarantee the causal structure of images, we use a CLIP (Radford et al., 2021) model to serve as h, and design the text fragment input of CLIP to be “an image of {class}”. We directly optimize VQGAN encoder space which guided by our scoring function. We show the algorithm in Algorithm 1.
3.2.1 SPARSE SUBMODEL OF VQGAN FOR EFFICIENT PERTURBATION
While our method will function properly as described above, we notice that the generation process still have a potential limitation: the bound-free perturbation of VQGAN will sometimes perturb the semantics of the images, generating results that will be rejected by the oracle later and thus leading to a waste of computational efforts.
To counter this challenge, we use a sparse variable selection method to analyze the embedding dimensions of VQGAN to identify a subset of dimensions that are mainly responsible for the non-semantic variations.
In particular, with a dataset (X,Y) of n samples, we first use VQGAN to generate a style-transferred dataset (X′,Y). During the generation process, we preserve the latent representations of input samples after the VQGAN encoder in the original dataset. We also preserve the final latent representations before the VQGAN decoder that are quantized after the iterations in the style-transferred dataset. Then, we create a new dataset (E,L) of 2n samples, for each sample (e, l) ∈ (E,L), e is the latent representation for the sample (from either the original dataset or the style-transferred one), and l is labelled as 0 if the sample is from the original dataset and 1 if the style-transferred dataset.
Then, we train ℓ1 regularized logistic regression model to classify the samples of (E,L). With w denoting the weights of the model, we solve the following problem
argmin_w ∑_{(e,l)∈(E,L)} l(ew, l) + λ∥w∥1,
and the sparsity pattern of w (zero or nonzero entries) informs us which dimensions are responsible for the style.
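A minimal sketch of this selection step with scikit-learn is shown below; mapping the regularization strength λ to the inverse parameter C, and the iteration cap, are assumptions of the sketch (the concrete settings we use are reported in Appendix G).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def style_dimensions(E, L, reg_strength=36.36):
    """Fit an l1-regularized logistic regression separating original from
    style-transferred latents; nonzero weights mark the style dimensions.
    Note: sklearn's C is the inverse of the regularization strength lambda."""
    clf = LogisticRegression(penalty="l1", solver="saga",
                             C=1.0 / reg_strength, max_iter=5000)
    clf.fit(E, L)                      # E: (2n, d) flattened latents, L: 0/1 labels
    w = clf.coef_.ravel()
    return np.flatnonzero(w)           # indices of dimensions responsible for style
```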
3.3 MEASURING ROBUSTNESS
Oracle-oriented Robustness (OOR). By design, the causal structure of the counterfactual images is maintained by the oracle. Thus, if a model has a smaller accuracy drop on the counterfactual images, it makes predictions more similar to the oracle than a model with a larger drop. To precisely define OOR, we introduce counterfactual accuracy (CA), the accuracy on the counterfactual images that our generative model successfully produces. As standard accuracy (SA) may influence CA to some extent, we disentangle the two by normalizing CA with SA to obtain OOR:
OOR = (CA / SA) × 100%
In settings where the oracle consists of human annotators, OOR measures the robustness difference between the evaluated model and human perception. In our experimental setting, OOR measures the robustness difference between models trained on fixed datasets (the evaluated model) and a model trained on unfiltered, highly varied, and highly noisy data (the oracle CLIP model).
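For concreteness, the metrics reduce to a few lines of bookkeeping; the per-image record format assumed here (an oracle-validated flag plus a model-correct flag) is an illustrative assumption.

```python
def evaluation_metrics(records, standard_acc):
    """records: list of (validated_by_oracle, model_correct) pairs, one per generated image.
    Returns VR and OOR in percent; CA is accuracy on the validated counterfactuals."""
    validated = [r for r in records if r[0]]
    vr = 100.0 * len(validated) / len(records)
    ca = 100.0 * sum(correct for _, correct in validated) / max(len(validated), 1)
    oor = ca / standard_acc * 100.0          # normalize CA by SA, as defined above
    return vr, oor
```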
3.4 THE NECESSITY OF THE SURROGATE ORACLE
At last, we devote a short paragraph to remind some readers that, despite the alluring idea of designing systems that forgo the use of an underlying causal structure or oracle, it has been proved or argued multiple times that such knowledge cannot be created from data alone, in the contexts of both machine learning (Locatello et al., 2019; Mahajan et al., 2019; Wang et al., 2021) and causality (Bareinboim et al., 2020; Xia et al., 2021), (Pearl, 2009, Sec. 1.4).
4 EXPERIMENTS - EVALUATION AND UNDERSTANDING OF MODELS
4.1 EXPERIMENT SETUP
We consider four different scenarios, ranging from the basic benchmark MNIST (LeCun et al., 1998), through CIFAR10 (Krizhevsky et al., 2009) and 9-class ImageNet (Santurkar et al., 2019), to the full-fledged 1000-class ImageNet (Deng et al., 2009). For ImageNet, we resize all images to 224 × 224 px. We also center and re-scale the color values with µRGB = [0.485, 0.456, 0.406] and σ = [0.229, 0.224, 0.225]. The total number of iterations allowed (computation budget B) in our evaluation protocol is set to 10. We conduct the experiments on an NVIDIA GeForce RTX 3090 GPU.
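The corresponding preprocessing can be written, for example, with torchvision transforms; this is a sketch using the values stated above, and the exact pipeline used in the experiments may differ in detail.

```python
from torchvision import transforms

# Preprocessing for the ImageNet experiments: resize to 224x224, then normalize
# with the mean and standard deviation reported above.
imagenet_preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])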
For each experiment, we report the following results:
• Standard Accuracy (SA): reported for reference.
• Validation Rate (VR): the percentage of generated images validated by the oracle as maintaining the causal structure.
• Oracle-oriented Robustness (OOR): the robustness of the model compared with the oracle.
4.2 ROBUSTNESS EVALUATION FOR STANDARD VISION MODELS
We consider a large range of models (Appendix J) and evaluate pre-trained variants of the LeNet architecture (LeCun et al., 1998) for the MNIST experiment and the ResNet architecture (He et al., 2016a) for the remaining experiments. For the ImageNet experiment, we also consider the pretrained transformer variants ViT (Dosovitskiy et al., 2020), Swin (Liu et al., 2021), Twins (Chu et al., 2021), Visformer (Chen et al., 2021), and DeiT (Touvron et al., 2021) from the timm library (Wightman, 2019). We evaluate the most recent ConvNeXt (Liu et al., 2022) as well. All models are trained on the ILSVRC2012 subset of ImageNet, comprising 1.2 million training images across 1000 classes (Deng et al., 2009; Russakovsky et al., 2015).
We report our results in Table 1. As expected, these models can barely maintain their performance when tested on data from different distributions, as shown by many previous works (e.g., Geirhos et al., 2019; Hendrycks & Dietterich, 2019; Wang et al., 2020b).
Interestingly, on ImageNet, although the transformer-variant models and the vanilla CNN-architecture model, i.e., ResNet, attain similar clean-image accuracy, the transformer variants substantially outperform ResNet50 in terms of OOR under our dynamic evaluation protocol. We conjecture this performance gap partly originates from differences in training setups; more specifically, it may result from the fact that the transformer variants use strong data augmentation strategies by default, while ResNet50 uses none. These augmentation strategies (e.g., Mixup (Zhang et al., 2017), CutMix (Yun et al., 2019), and Random Erasing (Zhong et al., 2020)) already introduce out-of-distribution (OOD) samples during training and are therefore potentially helpful for securing model robustness towards data shifts. When equipped with similar data augmentation strategies, the CNN-architecture model ConvNeXt achieves comparable performance in terms of OOR. This hypothesis has also been verified in recent works (Bai et al., 2021; Wang et al., 2022). We offer more discussion of robustness-enhancing methods in Section 4.3.
Besides comparing performance between different standard models, OOR gives us the chance to directly compare models with the oracle. Across all of our experiments, the OOR shows a significant gap between the models and the oracle, which is trained on unfiltered and highly varied data, seemingly suggesting that training with a more diverse dataset would help with robustness. This overarching trend has also been identified in (Taori et al., 2020). However, quantifying when and why training with more data helps is still an interesting open question.
We also notice that the VR tends to differ across datasets. We conjecture this is due to how the oracle model understands the images and labels; more discussion is offered in Section 5.
4.3 ROBUSTNESS EVALUATION FOR ROBUST VISION MODELS
Recently, some techniques have been introduced to cope with corruptions or style shifts. For example, by adapting the batch normalization statistics with a limited number of samples (Schneider et al., 2020), the performance on stylized images (or corrupted images) can be significantly increased. Additionally, some more sophisticated techniques, e.g., AugMix (Hendrycks et al., 2019), have also been widely employed by the community.
To investigate whether these OOD-robust models can maintain their performance under our dynamic evaluation protocol, we evaluate pretrained ResNet50 models combined with leading methods from the ImageNet-C leaderboard, namely Stylized ImageNet training (SIN; Geirhos et al., 2019), adversarial noise training (ANT; Rusak et al.), a combination of ANT and SIN (ANT+SIN; Rusak et al.), optimized data augmentation using AugMix (Hendrycks et al., 2019), DeepAugment (DeepAug; Hendrycks et al., 2021a), and a combination of AugMix and DeepAugment (DeepAug+AM; Hendrycks et al., 2021a).
The results are displayed in Table 2. Surprisingly, we find that some common-corruption robust models, i.e., SIN, ANT, and ANT+SIN, fail to maintain their advantage under our dynamic evaluation protocol. Take the SIN method as an example: its OOR is 42.92, which is even lower than that of a vanilla ResNet50. As these methods are well fitted to the ImageNet-C benchmark, this result highlights the weakness of relying on fixed benchmarks to rank methods: the selected best method may not be a true reflection of the real world, but a model that happens to fit certain datasets well, which in turn underscores the necessity of our dynamic evaluation protocol.
DeepAug, AugMix, and DeepAug+AM perform better than the SIN and ANT methods in terms of OOR, as they dynamically perturb the datasets, which alleviates the hazard of “model selection with the test set” to some extent. However, their performance is limited by the variations of the perturbations allowed, resulting in only a marginal improvement over ResNet50 under our evaluation protocol.
In addition, we visualize the counterfactual images generated according to the evaluated style-shift robust models in Figure 2. More results are shown in Appendix L. Specifically, we make the following observations:
Preservation of local texture details. A number of recent empirical findings point to an important role of object textures for CNNs, which rely more on textures than on global object shapes (Gatys et al., 2015; Ballester & Araujo, 2016; Gatys et al., 2017; Brendel & Bethge, 2019; Geirhos et al., 2019; Wang et al., 2020b). We notice that our generated counterfactual images may preserve misleading local texture details; the evaluation task then becomes much harder, since textures are no longer predictive but instead a nuisance factor (as desired). For the counterfactual image generated for the DeepAug method (Figure 2f), we produce a skin texture similar to chicken skin, and the fish head becomes increasingly chicken-like; the ResNet trained with DeepAug is misled by this corruption.
Generalization to shape perturbations. Moreover, since our attack intensity can be dynamically adjusted based on the model’s gradient while still maintaining the causal structure, the perturbations we produce are not limited to object textures but can include a certain degree of shape perturbation. It is acknowledged that networks with a higher shape bias are inherently more robust to many image distortions and reach higher performance on classification tasks; accordingly, we observe that the counterfactual images generated for the SIN (Figure 2b and Figure 2i) and ANT+SIN (Figure 2d and Figure 2k) methods are shape-perturbed and successfully attack the models.
Recognition of model properties. When methods are combined, the generated counterfactual images become more comprehensive. For example, the counterfactual image generated for DeepAug+AM (Figure 2g) preserves both the chicken-like head seen for DeepAug and the skin patterns seen for AugMix. As our evaluation method does not memorize the model it evaluates, this result shows that our method can recognize model properties and automatically generate hard counterfactual images tailored to the evaluation.
Overall, these visualizations reveal that our dynamic evaluation protocol adjusts its attack strategy based on the properties of each model and automatically generates diverse counterfactual images that complement static benchmarks, e.g., ImageNet-C, in exposing model weaknesses.
4.4 UNDERSTANDING THE PROPERTIES OF OUR EVALUATION SYSTEM
We continue to investigate several properties of the models and of our evaluation system. To save space, we mainly present the results of the CIFAR10 experiment here and defer the details to the appendix:
• In Appendix B, we explore the transferability of the generated images. The reasonable transferability we observe suggests that our method of generating images can be used in a broader scope: we can also leverage it to generate a static set of images and release it as a benchmark dataset to help the development of robustness methods.
• In Appendix C, we test whether initiating the perturbation process with an adversarial example further degrades the OOR. We find that initiating with FGSM adversarial examples (Goodfellow et al., 2015) barely affects the OOR.
• In Appendix D, we compare the vanilla model to a model trained by PGD (Madry et al., 2017). We find that the adversarially trained model and the vanilla model process the data differently. However, their robustness weak spots are exposed to a similar degree by our test system.
• In Appendix E, we explore the possibility of improving the evaluated robustness by augmenting the training data with images generated by our evaluation system. However, due to the required computational load, we only use a static set of generated images to train the model, and the results suggest that a static set of augmentation images cannot sufficiently robustify the model against our evaluation system.
• We also notice that the generated images tend to shift the color of the original images, so we test the robustness of grayscale models in Appendix F; the results suggest that removing the color channel does not improve robustness.
4.5 EXPERIMENTS REGARDING METHOD CONFIGURATION
Generator Configuration. We conduct an ablation study on the generator choice to confirm the performance rankings in Table 1 and Table 2. We consider several image generator architectures, namely variational autoencoders (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) such as Efficient-VDVAE (Hazami et al., 2022), diffusion models (Sohl-Dickstein et al., 2015) such as Improved DDPM (Nichol & Dhariwal, 2021) and ADM (Dhariwal & Nichol, 2021), and GANs such as StyleGAN-XL (Sauer et al., 2022). As shown in Table 3, the validation rate of the oracle stays stable across all image generators. We find that the conclusions are consistent under different generator choices, which validates the findings reported in Section 4.2 and Section 4.3.
Sparse VQGAN. In our sparse VQGAN experiments, we find that only 0.69% of the dimensions are highly correlated with style. Therefore, we mask the remaining 99.31% of the dimensions to create a sparse submodel of VQGAN for efficient perturbation. The running time is reduced by 12.7% on 9-class ImageNet and 28.5% on ImageNet, respectively. Details can be found in Appendix G.
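A minimal sketch of how such a mask could restrict the update to the selected style dimensions is shown below; `style_idx` would come from the sparse logistic regression of Section 3.2.1, and the sign-gradient update is an illustrative choice.

```python
import torch

def masked_latent_step(latent, grad, style_idx, step=0.1):
    """Update only the ~0.69% style-related latent dimensions; the remaining
    dimensions are kept fixed, so semantic content is less likely to be perturbed."""
    mask = torch.zeros_like(latent)
    mask.view(-1)[style_idx] = 1.0           # style_idx: flat indices from the l1 model
    return latent + step * grad.sign() * mask
```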
Step size. We experiment with the perturbation step size to find the best perturbation under the computation budget B. We find that step sizes that are too small or too large lead to weak perturbations, while stronger image perturbations are generated when the step size stays in a mild range, i.e., 0.1 to 0.2. Details of our experiments on the step size can be found in Appendix H.
5 DISCUSSION AND CONCLUSION
Potential limitation. We notice that the CLIP model has been influenced by the imbalanced sample distribution across the internet. We provide the details of a test on 9-class ImageNet for a vanilla ResNet-18 in Appendix I. We observe that the oracle model can tolerate a much more significant perturbation for samples labelled Dog (VR 0.95) or Cat (VR 0.94) than for samples labelled Primate (VR 0.48). The OOR value for Primate images is much higher than for other categories, creating an illusion that the evaluated models are robust against perturbed Primate images. However, this illusion is caused by a limitation of the pretrained model: the oracle can only handle slightly perturbed samples of this category.
The usage of the oracle. Is it cheating to use an oracle? The answer may depend on one’s perspective, but we hope to remind some readers that, in general, it is impossible to maintain the underlying causal structure during perturbation without prior knowledge (Locatello et al., 2019; Mahajan et al., 2019; Wang et al., 2021; Bareinboim et al., 2020; Xia et al., 2021), (Pearl, 2009, Sec. 1.4).
Conclusion. To conclude, in this paper we first summarized the common model evaluation practices for robust vision machine learning. We then discussed three desiderata for a robustness evaluation protocol. Further, we offered a simple method that fulfills these three desiderata at the same time, serving the purpose of evaluating vision models’ robustness beyond the scope of generic i.i.d. benchmarks, without requiring prior knowledge of the underlying causal structure depicted by the images, although relying on a plausible oracle.
ETHICS STATEMENT
The primary goal of this paper is to introduce a new evaluation protocol for vision machine learning research that can generate sufficiently perturbed samples from the original samples while maintaining the causal structures by assuming an oracle. Thus, we can introduce significant variations of the existing data while being free from additional human effort. With our approach, we hope to renew the benchmarks for current robustness evaluation, offer understanding of the behaviors of deep vision models, and potentially facilitate the development of more truly robust models. Increasing the robustness of vision models can enhance their reliability and safety, which leads to trustworthy artificial intelligence and contributes to a wide range of application scenarios (e.g., manufacturing automation, surveillance systems, etc.). Manufacturing automation can improve production efficiency, but may also trigger social issues related to job losses and industrial restructuring. Advanced surveillance systems are conducive to improving social security, but may also raise public concerns about personal privacy violations.
We encourage further work to understand the limitations of machine vision models in OOD settings. More robust models carry the potential risk of automation bias, i.e., an undue trust in vision models. However, even if models are robust against corruptions in finite OOD datasets, they might still quickly fail on the massive generic perturbations existing in the real-world data space, i.e., the perturbations offered by our approach. Understanding under what conditions model decisions can be deemed reliable or not is still an open research question that deserves further attention.
REPRODUCIBILITY STATEMENT
Please refer to Appendix J for the references of all models we evaluated and links to the corresponding source code.
A NOTES ON THE EXPERIMENTAL SETUP
A.1 NOTES ON MODELS
Note that we only re-evaluate existing model checkpoints and hence do not perform any hyperparameter tuning for the evaluated models. Since our protocol only requires a small amount of GPU resources, all model evaluations are done on a single NVIDIA GeForce RTX 3090 GPU.
A.2 HYPERPARAMETER TUNING
Our method is generally parameter-free except for the computation budget and the perturbation step size. In our experiments, the computation budget is the maximum iteration number of the sparse VQGAN. We set it to 10, as this guarantees a sufficient degree of perturbation at an acceptable time cost. We provide the experiment on the step size configuration in Section 4.5.
B TRANSFERABILITY OF GENERATED IMAGES
We first study whether our generated images are model-specific, since the generation of the images involves the gradient of the original model. We train several architectures, namely EfficientNet (Tan & Le, 2019), MobileNet (Howard et al., 2017), SimpleDLA (Yu et al., 2018), VGG19 (Simonyan & Zisserman, 2014), PreActResNet (He et al., 2016b), GoogLeNet (Szegedy et al., 2015), and DenseNet121 (Huang et al., 2017), and test these models with the generated images. We also train another ResNet following the same procedure to check the transferability across different runs of the same architecture.
Table 4: Performances of transferability.

Model         SA     OOR
ResNet        95.38  54.17
EfficientNet  91.37  68.48
MobileNet     91.63  68.72
SimpleDLA     92.25  66.16
VGG           93.54  70.57
PreActResNet  94.06  67.25
ResNet        94.67  66.23
GoogLeNet     95.06  66.68
DenseNet      95.26  66.43
Table 4 shows a reasonable transferability of the generated images, as the OOR values are all lower than the SA values, although we also observe an improvement in OOR when the images are tested on the new models. These results suggest that our method of generating images can be used in a broader scope: we can also leverage it to generate a static set of images and release it as a benchmark dataset to help the development of robustness methods.
In addition, our results may help inform a debate on whether more accurate architectures are naturally more robust: on one hand, there are results showing that more accurate architectures indeed achieve better empirical performance on certain (usually fixed) robustness benchmarks (Rozsa et al., 2016; Hendrycks & Dietterich, 2019); on the other hand, some counterpoints suggest the higher robustness numbers arise only because these models capture more non-robust features that also happen to exist in the fixed benchmarks (Tsipras et al., 2018; Wang et al., 2020b; Taori et al., 2020). Table 4 shows some examples supporting the latter argument: in particular, we notice that VGG, while ranked in the middle of the accuracy ladder, interestingly stands out when tested with the generated images. These results further support our argument that a dynamic robustness test can reveal more properties of a model.
C INITIATING WITH ADVERSARIAL ATTACKED IMAGES
Since our method uses the gradient of the evaluated model, which is reminiscent of gradient-based attack methods in the adversarial robustness literature, we test whether initiating the perturbation process with an adversarial example further degrades the accuracy.
We first generate the images with the FGSM attack (Goodfellow et al., 2015). Table 5 shows that initiating with FGSM adversarial examples barely affects the OOR, which is probably because the major style-wise perturbation erases the imperceptible perturbations that the adversarial examples introduce.
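For reference, the FGSM initialization can be sketched as follows; the ε value is a placeholder.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Standard FGSM (Goodfellow et al., 2015), used here only to initialize the
    perturbation process before the style-wise generation starts."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```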
D ADVERSARIALLY ROBUST MODELS
With evidence suggesting that adversarially robust models are more aligned with human perception (Engstrom et al., 2019; Zhang & Zhu, 2019; Wang et al., 2020b), we compare the vanilla model to a model trained by PGD (Madry et al., 2017) (ℓ∞ norm smaller than 0.03).
Table 6: Performance comparison between the vanilla model and the PGD-trained model.

Data  Model  SA     OOR
Van.  Van.   95.38  57.79
Van.  PGD    85.70  95.96
PGD   Van.   95.38  81.73
PGD   PGD    85.70  66.18
As shown in Table 6, the adversarially trained model and the vanilla model indeed process the data differently: the transferability of the generated images between these two regimes barely holds. In particular, the PGD model can almost maintain its performance when tested with the images generated according to the vanilla model.
However, despite these differences, the PGD model’s robustness weak spots are exposed by our test system to a degree similar to the vanilla model’s: the OOR of the vanilla model and the PGD model are only 57.79 and 66.18, respectively. We believe this result further supports our view that a robustness test needs to be a dynamic process that generates images conditioned on the model under test, and thus further validates the importance of our contribution.
E AUGMENTATION THROUGH STATIC ADVERSARIAL TRAINING
Intuitively, inspired by the success of adversarial training (Madry et al., 2017) in defending models against adversarial attacks, a natural method to improve the empirical performances under our new test protocol is to augment the training data with counterfactual training images generated by the same process. We aim to validate the effectiveness of this method here.
However, the computational load of the generation process makes the standard adversarial training strategy impractical, so we can only generate one static copy of the counterfactual training samples. Fortunately, recent advances in training with data augmentation can help learn robust representations with a limited set of augmented samples (Wang et al., 2020a), which we use here.
We report our results in Table 7. The first thing we observe is that the model trained with the augmentation data offered by our approach preserves a relatively high performance (OOR 89.10) when tested with the counterfactual images generated according to the vanilla model. Since we have shown in the main manuscript that the counterfactual samples have reasonable transferability, this result indicates the robustness gained by training with the counterfactual images generated by our approach.
In addition, when tested with the counterfactual images generated according to the augmented model, both models’ performance drops significantly, which again indicates the effectiveness of our approach.
F GRAYSCALE MODELS
Our previous visualization suggests that one shortcut the counterfactual generation system can take is to significantly shift the color of the images, against which a grayscale model should easily maintain its performance. Thus, we train a grayscale model by changing the ResNet input channels to 1 and transforming the input images to grayscale before feeding them into the model. We report the results in Table 8.
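A minimal sketch of this grayscale setup is given below; the ResNet18 backbone and the 10-class head are placeholder choices, and the exact training configuration may differ.

```python
import torch.nn as nn
from torchvision import models, transforms

# Grayscale input pipeline: convert images to a single channel before the model.
grayscale_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),
])

# Replace the first convolution so the network accepts 1-channel input.
model = models.resnet18(num_classes=10)   # placeholder CIFAR10-style head
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
```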
Interestingly, we notice that the grayscale model cannot defend against the shift introduced by our system by ignoring the color information. On the contrary, it seems to encourage our system to generate more counterfactual images that can lower the performances.
In addition, we visualize some counterfactual images generated according to each model in Figure 3. We see some evidence that the grayscale model forces the generation system to focus more on the shape of the object and less on the color of the images. We find it particularly interesting that our system sometimes generates images differently for different models, while the resulting images deceive the respective models into making the same prediction.
G EXPERIMENTS TO SUPPORT SPARSE VQGAN
We generate the flattened latent representations of the input images after the VQGAN encoder and assign them negative labels. Following Algorithm 1, we generate the flattened final latent representations before the VQGAN decoder and assign them positive labels. Altogether, we form a balanced binary classification dataset: the positive samples are the latent representations of counterfactual images, while the negative samples are the latent representations of input images. We set the train/test split ratio to 0.8 : 0.2. We perform these explorations on various datasets, i.e., MNIST, CIFAR-10, 9-class ImageNet, and ImageNet.
The classification model we consider is LASSO1, as it enables automatic feature selection with strong interpretability. We set the regularization strength to 36.36. We adopt saga (Defazio et al., 2014) as the solver in the optimization process. The classification results are shown in Table 9.
1Although LASSO is originally a regression model, we probabilize the regression values to get the final classification results.
We observe that the coefficient matrix can be far sparser than expected. Take the result on 9-class ImageNet as an example: surprisingly, we find that almost 99.31% of the dimensions on average can be discarded when making judgements. We argue that the preserved 0.69% of dimensions are highly correlated with the VQGAN perturbation. Therefore, we keep the other 99.31% of dimensions unchanged and let only the remaining 0.69% participate in the computation. The computational load can thus be significantly reduced while still maintaining competitive performance compared with the unmasked version2.
We conduct the run-time experiments on a single NVIDIA GeForce RTX 3090 GPU. Following our experiment setting, we evaluate a vanilla ResNet-18 on 9-class ImageNet and a vanilla ResNet-50 on ImageNet. As shown in Table 10, the run-time on ImageNet is reduced by 28.5% with our sparse VQGAN. Given the large fraction of masked dimensions (99.31%), we attribute the relatively modest run-time improvement (12.7% on 9-class ImageNet, 28.5% on ImageNet) to the fact that we have to perform mask and unmask operations each time we calculate the model gradient, which partially offsets the efficiency gained from the sparse VQGAN.
H PARAMETER STUDY ON STEP SIZE
We conduct a parameter study of the perturbation step size for our evaluation system on the CIFAR10 dataset. Specifically, we tune the step size in {0.01, 0.05, 0.1, 0.2, 0.5}. The maximum number of iterations (computation budget B) is set to 10. All results are produced with ResNet18 and averaged over five runs.
As shown in Figure 4, we observe that when the step size is too small, i.e., 0.01 or 0.05, sufficient perturbation strength cannot be reached within the predefined maximum number of iterations, resulting in a higher OOR score. A large step size also leads to a higher OOR score: when the step size is large, i.e., 0.5, the perturbation is likely to stop after only a few iterations, which again yields a weaker perturbation than using a relatively small step size with more iterations. When the step size is 0.01, the model seems to achieve oracle-parallel performance (OOR 99.66); however, this OOR value is meaningless because of the small perturbation strength. Moreover, when the step size stays in a mild range, i.e., 0.1 to 0.2, stronger image perturbations are generated, and the performance in this range stays constant. Therefore, we choose a step size of 0.1 for the experiments.
2We note that the overlapping degree of the preserved dimensions for each dataset is not high, which means that we need to specify these dimensions when facing new datasets.
I ANALYSIS OF SAMPLES THAT ARE MISCLASSIFIED BY THE MODEL
We present the results of the 9-class ImageNet experiment to show the details for each category.
Table 11 shows that the VR values for most categories are still higher than 80%, and some even reach 95%, which means we produce a sufficient number of counterfactual images. However, we notice that the VR value for Primate images is considerably lower than for other categories, indicating that around 52% of perturbed Primate images are blocked by the oracle. We have discussed this category imbalance issue in Section 5.
As shown in Table 11, the OOR value for each category drops significantly compared with the SA value, indicating the weakness of the trained models. An interesting finding is that the OOR value for Primate images is considerably higher than for other categories, given the fact that more perturbed Primate images are blocked by the oracle. We attribute this to a limitation of foundation models: since the CLIP model has been influenced by the imbalanced sample distribution across the Internet, it can only handle slightly perturbed samples of this category well. Therefore, the counterfactual images that are preserved are those that the models can classify easily.
J LIST OF EVALUATED MODELS
The following lists contain all models we evaluated on the various datasets, with references and links to the corresponding source code.
J.1 PRETRAINED VQGAN MODEL
We use the checkpoint of vqgan_imagenet_f16_16384 from https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/
J.2 PRETRAINED CLIP MODEL
Model weights of ViT-B/32 and usage code are taken from https://github.com/openai/CLIP
J.3 TIMM MODELS TRAINED ON IMAGENET (WIGHTMAN, 2019)
Weights are taken from https://github.com/rwightman/pytorch-image-models/tree/master/timm/models
1. ResNet50 (He et al., 2016a)
2. ViT (Dosovitskiy et al., 2020)
3. DeiT (Touvron et al., 2021)
4. Twins (Chu et al., 2021)
5. Visformer (Chen et al., 2021)
6. Swin (Liu et al., 2021)
7. ConvNeXt (Liu et al., 2022)
J.4 ROBUST RESNET50 MODELS
1. ResNet50 SIN+IN (Geirhos et al., 2019) https://github.com/rgeirhos/texture-vs-shape
2. ResNet50 ANT (Rusak et al.) https://github.com/bethgelab/game-of-noise
3. ResNet50 ANT+SIN (Rusak et al.) https://github.com/bethgelab/game-of-noise
4. ResNet50 Augmix (Hendrycks et al., 2019) https://github.com/google-research/augmix
5. ResNet50 DeepAugment (Hendrycks et al., 2021a) https://github.com/hendrycks/imagenet-r
6. ResNet50 DeepAugment+Augmix (Hendrycks et al., 2021a) https://github.com/hendrycks/imagenet-r
J.5 ADDITIONAL IMAGE GENERATORS
1. Efficient-VDVAE (Hazami et al., 2022) https://github.com/Rayhane-mamah/Efficient-VDVAE
2. Improved DDPM (Nichol & Dhariwal, 2021) https://github.com/open-mmlab/mmgeneration/tree/master/configs/improved_ddpm
3. ADM (Dhariwal & Nichol, 2021) https://github.com/openai/guided-diffusion
4. StyleGAN (Sauer et al., 2022) https://github.com/autonomousvision/stylegan_xl
K LEADERBOARDS FOR ROBUST IMAGE MODEL
We launch leaderboards for robust image models. The goals of these leaderboards are as follows:
• To keep track of the state of the art on each adversarial vision task and of new model architectures under our dynamic evaluation process.
• To see a comparison of robust vision models at a glance (e.g., performance, speed, size, etc.).
• To access their research papers and implementations in different frameworks.
We offer a sample of the robust ImageNet classification leaderboard in supplementary materials.
L ADDITIONAL COUNTERFACTUAL IMAGE SAMPLES
In Figure 5, we provide additional counterfactual images generated according to each model. We make similar observations to Section 4.3. First, the generated counterfactual images exhibit diversity, covering many non-causal factors of the data, e.g., texture, shape, and style. Second, our method can recognize model properties and automatically generate hard counterfactual images tailored to the evaluation.
In addition, the generated images show reasonable transferability in Table 4, indicating that our method can be used in a broader scope: we can also leverage it to generate a static set of images and release it as a benchmark dataset to help the development of robustness methods. Therefore, we also offer two static benchmarks in the supplementary materials, generated based on a CNN architecture (ConvNeXt) and a transformer variant (ViT), respectively.
2. What are the strengths and weaknesses of the proposed approach in generating adversarial samples?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any concerns or questions about the paper, particularly regarding its contributions and limitations? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes an adversarial attack benchmark where the attacked samples are generated by high-quality generative models and filtered by a "surrogate oracle" (a model trained with large-scale extra data points, such as CLIP). More specifically, the proposed method generates adversarial samples by maximizing the classification loss (i.e., cross-entropy loss) using VQGAN. The generated image is validated by the CLIP model, and if the CLIP model predicts the generated image with a wrong label, then the generated image is reset to the original image. With the generated adversarial samples, this paper shows that existing vision models are not robust to the proposed attack method.
Strengths And Weaknesses
Strength
This paper tackles a very important problem in the robustness benchmark: we need a dynamic benchmark (based on optimization) rather than a manually collected dataset. Also, the generated dataset should be realistic (not based on l2 or l-infinity ball).
This paper proposes a sparse submodel of VQGAN for reducing the computational cost
Weakness
I feel this paper has limited novelty. The main modules are from other works (VQGAN, CLIP). The idea of "adversarial attack by generative model" is an old and popular idea [R1-5]. The novel parts of this paper could be (1) using a novel generative model rather than a GAN, and (2) filtering based on the CLIP model, but I think these contributions are very limited. Furthermore, using VQGAN and CLIP models could be problematic, as discussed in my next comment.
[R1] Song, Yang, et al. "Constructing unrestricted adversarial examples with generative models." Advances in Neural Information Processing Systems 31 (2018).
[R2] Xiao, Chaowei, et al. "Generating adversarial examples with adversarial networks." arXiv preprint arXiv:1801.02610 (2018).
[R3] Poursaeed, Omid, et al. "Generative adversarial perturbations." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
[R4] Qiu, Haonan, et al. "Semanticadv: Generating adversarial examples via attribute-conditioned image editing." European Conference on Computer Vision. Springer, Cham, 2020.
[R5] Jang, Yunseok, et al. "Adversarial defense via learning to generate diverse attacks." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.
I feel that the quality of the proposed generation process is still limited. Furthermore, I feel that the proposed method is highly biased toward VQGAN and CLIP, leading to biased evaluation results.
The proposed method highly depends on extra modules, such as VQGAN and CLIP. Namely, the quality or diversity of the generated images will be bounded by the VQGAN performance. Similarly, the quality of the generated images is bounded by the CLIP zero-shot generalizability to the generated images; if CLIP zero-shot performs worse for specific types of perturbation, then the proposed framework cannot cover such perturbation. As shown in the "Potential limitation" paragraph, the CLIP zero-shot performance is not consistent across the classes. It could cause a biased benchmark toward the selected models (i.e., CLIP).
As shown in Table 1-2, only about one-third of images are changed by VQGAN for ImageNet. It means that VQGAN and CLIP models cannot generate a proper image, or classify the generated images correctly. I think the quality of the proposed generation process is still not effective for measuring robustness.
Not only the generated samples will be biased toward CLIP, but also the metric will be biased toward CLIP as well. Because the generated samples are biased to the CLIP zero-shot performance, "CA" score will also be biased.
Also, this benchmark is only available when a strong and generalizable generative model and a strong and generalizable "surrogate oracle" model are available. Thus, this benchmark is limited to natural-image benchmarks, such as ImageNet. This is not a strong weakness, but it is worth discussing.
Questions and minor comments
There is no detailed explanation of how VQGAN is guided by the classification loss. I presume that the guidance for VQGAN is done by directly optimizing the VQGAN encoder space (as in CLIP-guided VQGAN), but it would be great if the authors could provide more details on this.
I feel that the concepts of "counterfactual" and "confounder" are used incorrectly in Section 3.3.
What does "counterfactual" mean here? For example, in the field of causality, the terminology "counterfactual explanation" describes the causal relationship by "If X has not occurred then Y does not occur". Similarly, in machine learning, a counterfactual example means a modified example with the minimum changes (to see what affects the prediction at most). However, the terminology "counterfactual accuracy" is weird. How accuracy becomes "counterfactual"?
Why does the standard accuracy act as a confounder of "counterfactual accuracy"? How are standard accuracy and counterfactual accuracy related in terms of causality? I cannot find any relationship between them; moreover, it is not trivial to treat "accuracies" as random variables. Also, to define a confounder (as far as the reviewer understands), we need to define a "do operation". What is the "do operation" for accuracy scores?
The formulation for x̂ seems weird. I recommend avoiding using the same notation for the optimized result (x̂ in the LHS) and the variable (x̂ in the RHS, below argmax).
Some citations are missing. For example, Section 4.2. missed citations for Mixup, CutMix, and random erasing (Cutout and RandAug are worth being cited as well).
Clarity, Quality, Novelty And Reproducibility
I feel that the quality of this paper has large room for improvement. For example, this paper borrows some terminology from the field of causality (e.g., "causal structure", "counterfactual", "confounder"), but these terms are not well defined in this paper. Also, I feel this paper is not self-contained. For example, there is no detailed explanation of how the generative model is guided by the classification loss. I also recommend making Section 4.4 self-contained in the main text.
In terms of novelty and originality, I think this paper has limited novelty and contribution, as noted in my previous comments.
ICLR | Title
Oracle-oriented Robustness: Robust Image Model Evaluation with Pretrained Models as Surrogate Oracle
Abstract
Machine learning has demonstrated remarkable performance over finite datasets, yet whether the scores over fixed benchmarks can sufficiently indicate a model’s performance in the real world is still under discussion. In reality, an ideal robust model will probably behave similarly to the oracle (e.g., the human users); thus, a good evaluation protocol is probably to evaluate the models’ behaviors in comparison to the oracle. In this paper, we introduce a new robustness measurement that directly measures an image classification model’s performance compared with a surrogate oracle. In addition, we design a simple method that can accomplish the evaluation beyond the scope of the benchmarks. Our method extends the image datasets with new samples that are sufficiently perturbed to be distinct from the ones in the original sets, but are still bounded within the same causal structure the original test image represents, constrained by a surrogate oracle model pretrained with a large amount of samples. As a result, our new method offers a new way to evaluate models’ robustness performance, free of the limitations of fixed benchmarks or constrained perturbations, although scoped by the power of the oracle. In addition to the evaluation results, we also leverage our generated data to understand the behaviors of the model and our new evaluation strategies.
1 INTRODUCTION
Machine learning has achieved remarkable performance on various benchmarks. For example, the recent successes of multiple pretrained models (Bommasani et al., 2021; Radford et al., 2021), with the power gained through billions of parameters and samples from the entire internet, have demonstrated human-parallel performance in understanding natural language (Brown et al., 2020) and arguably human-surpassing performance in understanding the connections between language and images (Radford et al., 2021). Even within the scope of fixed benchmarks, machine learning has shown strong numerical evidence that prediction accuracy on specific tasks can reach positions on leaderboards as high as humans (Krizhevsky et al., 2012; He et al., 2015; Nangia & Bowman, 2019), suggesting multiple application scenarios for these methods.
However, these methods, once deployed in the real world, often underdeliver on the promises made through benchmark datasets (Edwards, 2019; D’Amour et al., 2020), usually because these benchmark datasets, typically i.i.d., cannot sufficiently represent the diversity of the samples a model will encounter after being deployed in practice.
Fortunately, multiple lines of study have aimed to embrace this challenge, and most of these works propose to further diversify the datasets used at evaluation time. We notice these works mostly fall into two main categories: (1) works that study performance on testing datasets generated by predefined perturbations of the original i.i.d. datasets, such as adversarial robustness (Szegedy et al., 2013; Goodfellow et al., 2015) or robustness against certain noises (Geirhos et al., 2019; Hendrycks & Dietterich, 2019; Wang et al., 2020b); and (2) works that study performance on testing datasets that are collected anew with a procedure/distribution different from the one used for the training sets, such as domain adaptation (Ben-David et al., 2007; 2010) and domain generalization (Muandet et al., 2013).
Both of these lines, while pushing the study of robustness evaluation further, have their own advantages and limitations as a tradeoff on how to guarantee that the underlying causal structure of the evaluation samples remains the same as that of the training samples: perturbation-based evaluations usually maintain the causal structure by predefining the perturbations to be within a set of operations that will not alter the image semantics when applied, such as ℓ-norm ball constraints (Carlini et al., 2019), or texture (Geirhos et al., 2019) and frequency-based (Wang et al., 2020b) perturbations; on the other hand, new-dataset-based evaluations can maintain the causal structure by soliciting the efforts of human annotators to construct datasets with the same semantics but significantly different styles (Hendrycks et al., 2021b; Hendrycks & Dietterich, 2019; Wang et al., 2019; Gulrajani & Lopez-Paz, 2020; Koh et al., 2021; Ye et al., 2021). More details of these lines, their advantages and limitations, and how our proposed evaluation protocol contrasts with them are discussed in the next section.
In this paper, we investigate how to diversify the robustness evaluation datasets so that the evaluation results are credible and representative. As shown in Figure 1, we aim to integrate the advantages of the above two directions by introducing a new protocol that generates evaluation datasets by automatically perturbing samples to be sufficiently different from existing test samples, while maintaining the underlying unknown causal structure with respect to an oracle (we use a CLIP model in this paper). Based on the new evaluation protocol, we introduce a new robustness measurement that directly measures robustness relative to the oracle. With our proposed evaluation protocol and metric, we present a study of current robust machine learning techniques to identify the robustness gap between existing models and the oracle. This is particularly important if the goal of a research direction is to produce models that function reliably, with performance comparable to the oracle.
Therefore, our contributions in this paper are three-fold:
• We introduce a new robustness measurement that directly measures the robustness gap between models and the oracle.
• We introduce a new evaluation protocol to generate evaluation datasets that can automatically perturb the samples to be sufficiently different from existing test samples, while maintaining the underlying unknown causal structure.
• We leverage our evaluation metric and protocol to offer a study of current robustness research and identify the robustness gap between existing models and the oracle. Our findings further yield understanding of, and conjectures about, the behaviors of deep learning models.
2 BACKGROUND
2.1 CURRENT ROBUSTNESS EVALUATION PROTOCOLS
The evaluation of machine learning models in non-i.i.d. scenarios has been studied for more than a decade, and one of the pioneering directions is probably domain adaptation (Ben-David et al., 2010). In
domain adaptation, the community trains the model on data from one distribution and tests the model with samples from a different distribution; in domain generalization (Muandet et al., 2013), the community trains the model on data from several related distributions and tests the model with samples from yet another distribution. To be more specific, a popular benchmark dataset used in domain generalization studies is the PACS dataset (Li et al., 2017), which consists of images from seven classes and four different domains (photo, art, cartoon, and sketch); the community studies the empirical performance of models trained on three of the domains and tested on the remaining one. To facilitate the development of cross-domain robust image classification, the community has introduced several benchmarks, such as PACS (Li et al., 2017), ImageNet-A (Hendrycks et al., 2021b), ImageNet-C (Hendrycks & Dietterich, 2019), ImageNet-Sketch (Wang et al., 2019), and collective benchmarks integrating multiple datasets such as DomainBed (Gulrajani & Lopez-Paz, 2020), WILDS (Koh et al., 2021), and OOD Bench (Ye et al., 2021).
While these datasets clearly maintain the underlying causal structure of the images, a potential issue is that these evaluation datasets are fixed once collected. Thus, if the community relies on these fixed benchmarks repeatedly to rank methods, eventually the selected best method may not be a true reflection of the world, but a model that can fit certain datasets exceptionally well. This phenomenon has been discussed by several textbooks (Duda et al., 1973; Friedman et al., 2001). While recent efforts in evaluating collections of datasets (Gulrajani & Lopez-Paz, 2020; Koh et al., 2021; Ye et al., 2021) might alleviate the above potential hazards of “model selection with test set”, a dynamic process of generating evaluation datasets will certainly further mitigate this issue.
On the other hand, one can also test the robustness of models by dynamically perturbing the existing datasets. For example, one can test the model’s robustness against rotation (Marcos et al., 2016), texture (Geirhos et al., 2019), frequency-perturbed datasets (Wang et al., 2020b), or adversarial attacks (e.g., ℓp-norm constraint perturbations) (Szegedy et al., 2013). While these tests do not require additionally collected samples, these tests typically limit the perturbations to be relatively well-defined (e.g., a texture-perturbed cat image still depicts a cat because the shape of the cat is preserved during the perturbation).
While this perturbation test strategy leads to datasets dynamically generated during the evaluation, it is usually limited by the variations of the perturbations allowed. For example, one may not be able to use a significant distortion of the images, in case the depicted object is deformed and the underlying causal structure of the image is distorted. More generally speaking, most of the current perturbation-based test protocols are scoped by the tradeoff that a minor perturbation might not introduce enough variation to the existing datasets, while a significant perturbation will potentially destroy the underlying causal structures.
2.2 ASSUMED DESIDERATA OF ROBUSTNESS EVALUATION PROTOCOL
As a reflection of the previous discussion, we attempt to offer a summary list of three desired properties of the datasets serving as benchmarks for robustness evaluation:
• Stableness in Causal Structure: the most important property of the evaluation datasets is that the samples must represent the same underlying causal structure as the one in the training samples.
• Diversity in Generated Samples: for any other non-causal factors of the data, the test samples should cover as many as possible scenarios of the images, such as texture, styles etc.
• A Dynamic Generation Process: to mitigate selection bias toward techniques that focus too attentively on the specifics of particular datasets, ideally, the evaluation protocol should consist of a dynamic set of samples, preferably generated with the tested model in consideration.
Key Contribution: To the best of our knowledge, no other evaluation protocol for model robustness meets the above three properties simultaneously. Thus, we aim to introduce a method for evaluating model robustness that fulfills all three desiderata at the same time.
2.3 NECESSITY OF NEW ROBUSTNESS MEASUREMENT IN DYNAMIC EVALUATION PROTOCOL
In previous experiments, there are always two evaluation settings: the “standard” test set and the perturbed test set. When comparing the robustness of two models, the prior art is to rank the models by their accuracy on the perturbed test set (Geirhos et al., 2019; Hendrycks et al., 2021a; Orhan, 2019; Xie et al., 2020; Zhang, 2019) or by other quantities distinct from accuracy, e.g., inception score (Salimans et al., 2016), effective robustness (Taori et al., 2020), and relative robustness (Taori et al., 2020). These metrics are good starting points for experiments since they are precisely defined and easy to apply to evaluate robustness interventions. In a dynamic evaluation protocol, however, these quantities alone cannot provide a comprehensive measure of robustness, as two models are tested on two different “dynamical” test sets. When one model outperforms the other, we cannot distinguish whether one model is actually better than the other, or whether its test set happened to be easier.
The core issue in the preceding example is that we cannot find a consistent robustness measurement across two different test sets. In reality, an ideal robust model will probably behave similarly to the oracle (e.g., the human users). Thus, instead of indirectly comparing models’ robustness with each other, a measurement that directly compares a model’s robustness with the oracle is desired.
3 METHOD - COUNTERFACTUAL GENERATION WITH SURROGATE ORACLE
3.1 METHOD OVERVIEW
We use (x,y) to denote an image sample and its corresponding label, and use θ(x) to denote the model we aim to evaluate, which takes the image as input and predicts the label.
Algorithm 1 Counterfactual Image Generation with Surrogate Oracle
Input: (X,Y), θ, g, h, total number of iterations B
Output: generated dataset (X̂,Y)
for each (x,y) in (X,Y) do
    generate x̂0 = g(x, b0)
    if h(x̂0) = y then
        set x̂ = x̂0
        for iteration bt < B do
            generate x̂t = g(x̂t−1, bt)
            if h(x̂t) = y then
                set x̂ = x̂t
            else
                set x̂ = x̂t−1
                exit FOR loop
            end if
        end for
    else
        set x̂ = x
    end if
    use (x̂,y) to construct (X̂,Y)
end for
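A minimal Python rendering of Algorithm 1 is sketched below; g and h are placeholders for the generator and oracle defined in this section.

```python
def counterfactual_dataset(data, g, h, budget):
    """Algorithm 1: for each (x, y), iteratively perturb with g and keep the last
    perturbed image the oracle h still labels as y; otherwise fall back."""
    generated = []
    for x, y in data:
        x_hat = g(x, 0)                      # initial perturbation
        if h(x_hat) != y:
            generated.append((x, y))         # oracle rejects: keep the original image
            continue
        for b in range(1, budget):
            candidate = g(x_hat, b)
            if h(candidate) == y:
                x_hat = candidate            # accept and keep perturbing
            else:
                break                        # reject this step, keep previous x_hat
        generated.append((x_hat, y))
    return generated
```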
We use g(x,b) to denote an image generation system, which takes the starting image x as input and generates another image x̂ within the computation budget b. The generation is performed as an optimization process that maximizes a scoring function α(x̂, z), which evaluates the alignment between the generated image and the generation goal z guiding the perturbation process. The higher the score, the better the alignment. Thus, the image generation process is formalized as
x̂ = argmax_{x̂=g(x,b), b<B} α(g(x,b), z),
where B denotes the allowed computation budget for one sample. This budget constrains the generated image to stay close to the starting image, so that the generation does not converge to a trivial solution that maximizes the scoring function.
In addition, we choose the model classification loss l(θ(x̂),y) as z. Therefore, the scoring function essentially maximizes the loss of a given image in the direction of a different class.
Finally, to maintain the unknown causal structure of the images, we leverage the power of large pretrained models to scope the generation process: the generated images must be judged to belong to the same class by the pretrained model, denoted as h(x̂), which takes the image as input and makes a prediction.
Connecting all the components above, the generation process will aim to optimize the following:
x̂ = argmax_{x̂=g(x,b), b<B, z=l(θ(x̂),y)} α(g(x,b), z), subject to h(x̂) = y.
Our method is generic and agnostic to the choices of the three major components, namely θ, g, and h. For example, the g component can vary from something as simple as basic transformations such as adding noise or rotating images to a sophisticated method that transfers the style of the images; on the other hand, the h component can vary from an approach with high reliability and low efficiency, such as actually outsourcing the annotation process to human annotators, to the other extreme of simply assuming that a large-scale pretrained model can plausibly function as a human.
In the next part, we introduce our concrete choices of g and h that lead to the later empirical results, building upon recent advances in vision research.
3.2 ENGINEERING SPECIFICATION
We use VQGAN (Esser et al., 2021) as the image generation system g(x,b), and the generation is guided by the evaluated model θ(x): the model classification loss z = l(θ(x̂),y) on the current perturbed image serves as the scoring function α(x̂, z).
The generation is an iterative process guided by the scoring function: at each iteration, the system adds more style-wise transformations to the result of the previous iteration. The total number of iterations allowed is therefore denoted as the budget B (see Section 4.5 and Appendix H for details on finding the best perturbation). In practice, the value of the budget B is set according to the available resources.
To guarantee the causal structure of the images, we use a CLIP (Radford et al., 2021) model to serve as h, and design the text input of CLIP to be “an image of {class}”. We optimize directly in the VQGAN latent space, guided by our scoring function. We show the algorithm in Algorithm 1.
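As an illustration, the oracle check can be implemented with the public openai/CLIP package roughly as follows; the class vocabulary and device handling are illustrative assumptions rather than our exact experimental code.

```python
import torch
import clip                      # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, clip_preprocess = clip.load("ViT-B/32", device=device)
class_names = ["dog", "cat", "bird"]  # placeholder class vocabulary
text_tokens = clip.tokenize([f"an image of {c}" for c in class_names]).to(device)

def oracle_agrees(pil_image: Image.Image, label_idx: int) -> bool:
    """Return True if CLIP's zero-shot prediction matches the original label."""
    image = clip_preprocess(pil_image).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = clip_model(image, text_tokens)
    return int(logits_per_image.argmax(dim=-1)) == label_idx
```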
3.2.1 SPARSE SUBMODEL OF VQGAN FOR EFFICIENT PERTURBATION
While our method functions properly as described above, we notice that the generation process still has a potential limitation: the unconstrained perturbation of VQGAN will sometimes perturb the semantics of the images, generating results that are later rejected by the oracle and thus wasting computational effort.
To counter this challenge, we use a sparse variable selection method to analyze the embedding dimensions of VQGAN to identify a subset of dimensions that are mainly responsible for the non-semantic variations.
In particular, with a dataset (X,Y) of n samples, we first use VQGAN to generate a style-transferred dataset (X′,Y). During the generation process, we preserve the latent representations of the input samples after the VQGAN encoder (for the original dataset) and the final quantized latent representations before the VQGAN decoder after the iterations (for the style-transferred dataset). Then, we create a new dataset (E,L) of 2n samples; for each sample (e, l) ∈ (E,L), e is the latent representation of the sample (from either the original dataset or the style-transferred one), and l is 0 if the sample comes from the original dataset and 1 if it comes from the style-transferred one.
Then, we train an ℓ1-regularized logistic regression model to classify the samples of (E,L). With w denoting the weights of the model, we solve the following problem
\[
\operatorname*{arg\,min}_{w} \sum_{(e,l)\in(E,L)} \ell(e w,\, l) + \lambda \lVert w \rVert_1,
\]
and the sparse pattern (zeros or not) of w will inform us about which dimensions are for the style.
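A minimal sketch of this selection step with scikit-learn is shown below; the latent representations are placeholders, and the regularization strength C is an illustrative assumption rather than the value used in our experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# E: flattened latent codes (2n x d); L: 0 = original sample, 1 = style-transferred.
E = np.random.randn(2000, 256)        # placeholder latent representations
L = np.repeat([0, 1], 1000)           # placeholder labels

# l1-regularized logistic regression; C is the inverse regularization strength.
model = LogisticRegression(penalty="l1", solver="saga", C=0.03, max_iter=5000)
model.fit(E, L)

style_dims = np.flatnonzero(model.coef_.ravel())  # non-zero weights mark style dimensions
print(f"{len(style_dims)} of {E.shape[1]} dimensions selected as style-related")
```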
3.3 MEASURING ROBUSTNESS
Oracle-oriented Robustness (OOR). By design, the causal structure of the counterfactual images is maintained by the oracle. Thus, if a model has a smaller accuracy drop on the counterfactual images, it makes predictions more similar to the oracle's than a model with a larger drop. To precisely define OOR, we introduce counterfactual accuracy (CA), the accuracy on the counterfactual images that our generation process successfully produces. As standard accuracy (SA) may influence CA to some extent, we disentangle the two by normalizing CA with SA to obtain OOR:
\[
\text{OOR} = \frac{\text{CA}}{\text{SA}} \times 100\%.
\]
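As a worked example, a hypothetical model with SA = 90% and CA = 45% has OOR = 50%; a trivial helper for this computation is shown below.

```python
def oracle_oriented_robustness(ca: float, sa: float) -> float:
    """OOR = CA / SA * 100, as defined above (both accuracies in [0, 1])."""
    return ca / sa * 100.0

print(oracle_oriented_robustness(ca=0.45, sa=0.90))  # -> 50.0
```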
In settings where the oracle is human labelers, OOR measures the robustness gap between the evaluated model and human perception. In our experimental setting, OOR measures the robustness gap between models trained on fixed datasets (the evaluated models) and a model trained on unfiltered, highly varied, and highly noisy data (the oracle CLIP model).
3.4 THE NECESSITY OF THE SURROGATE ORACLE
Finally, we devote a short paragraph to remind some readers that, despite the alluring idea of designing systems that forgo the use of the underlying causal structure or an oracle, it has been proven or argued multiple times that such knowledge cannot be recovered from data alone, in either the context of machine learning (Locatello et al., 2019; Mahajan et al., 2019; Wang et al., 2021) or causality (Bareinboim et al., 2020; Xia et al., 2021; Pearl, 2009, Sec. 1.4).
4 EXPERIMENTS - EVALUATION AND UNDERSTANDING OF MODELS
4.1 EXPERIMENT SETUP
We consider four different scenarios, ranging from the basic benchmark MNIST (LeCun et al., 1998), through CIFAR10 (Krizhevsky et al., 2009) and 9-class ImageNet (Santurkar et al., 2019), to full-fledged 1000-class ImageNet (Deng et al., 2009). For ImageNet, we resize all images to 224 × 224 px. We also center and re-scale the color values with µRGB = [0.485, 0.456, 0.406] and σ = [0.229, 0.224, 0.225]. The total number of iterations allowed (computation budget B) in our evaluation protocol is set to 10. We conduct the experiments on an NVIDIA GeForce RTX 3090 GPU.
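For reference, the preprocessing just described can be expressed with torchvision transforms as in the sketch below; the resize strategy is an assumption, while the normalization constants are those stated above.

```python
from torchvision import transforms

imagenet_preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                       # resize to 224 x 224 px
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],     # center the color values
                         std=[0.229, 0.224, 0.225]),     # re-scale the color values
])
```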
For each experiment, we report the following results:
• Standard Accuracy (SA): reported for references.
• Validation Rate (VR): the percentage of generated images that the oracle validates as maintaining the causal structure.
• Oracle-oriented Robustness (OOR): the robustness of the model compared with the oracle.
4.2 ROBUSTNESS EVALUATION FOR STANDARD VISION MODELS
We consider a large range of models (Appendix J) and evaluate pre-trained variants of a LeNet architecture (LeCun et al., 1998) for the MNIST experiment and a ResNet architecture (He et al., 2016a) for the remaining experiments. For the ImageNet experiment, we also consider pretrained transformer variants ViT (Dosovitskiy et al., 2020), Swin (Liu et al., 2021), Twins (Chu et al., 2021), Visformer (Chen et al., 2021) and DeiT (Touvron et al., 2021) from the timm library (Wightman, 2019). We evaluate the most recent ConvNeXt (Liu et al., 2022) as well. All models are trained on the ILSVRC2012 subset of ImageNet, comprised of 1.2 million training images and a total of 1000 classes (Deng et al., 2009; Russakovsky et al., 2015).
We report our results in Table 1. As expected, these models can barely maintain their performance when tested on data from different distributions, as shown by many previous works (e.g., Geirhos et al., 2019; Hendrycks & Dietterich, 2019; Wang et al., 2020b).
Interestingly, on ImageNet, although both the transformer-variant models and the vanilla CNN-architecture model, i.e., ResNet, attain similar clean-image accuracy, the transformer variants substantially outperform ResNet50 in terms of OOR under our dynamic evaluation protocol. We conjecture this performance gap partly originates from differences in training setups; more specifically, it may result from the fact that the transformer variants by default use strong data augmentation strategies while ResNet50 uses none of them. These augmentation strategies (e.g., Mixup (Zhang et al., 2017), Cutmix (Yun et al., 2019) and Random Erasing (Zhong et al., 2020), etc.) already introduce out-of-distribution (OOD) samples during training and are therefore potentially helpful for securing model robustness towards data shifts. When equipped with similar data augmentation strategies, the CNN-architecture model, i.e., ConvNeXt, achieves comparable performance in terms of OOR. This hypothesis has also been verified in recent works (Bai et al., 2021; Wang et al., 2022). We offer more discussion of robustness-enhancing methods in Section 4.3.
Besides comparing performance between different standard models, OOR gives us the chance to directly compare models with the oracle. Across all of our experiments, OOR reveals a significant gap between the models and the oracle, which is trained on unfiltered and highly varied data, suggesting that training with a more diverse dataset would help with robustness. This overarching trend has also been identified in (Taori et al., 2020). However, quantifying when and why training with more data helps is still an interesting open question.
We also notice that the VR tends to differ across datasets. We conjecture this is due to how the oracle model understands the images and labels; more discussion is offered in Section 5.
4.3 ROBUSTNESS EVALUATION FOR ROBUST VISION MODELS
Recently, some techniques have been introduced to cope with corruptions or style shifts. For example, by adapting the batch normalization statistics with a limited number of samples (Schneider et al., 2020), the performance on stylized images (or corrupted images) can be significantly increased. Additionally, some more sophisticated techniques, e.g., AugMix (Hendrycks et al., 2019), have also been widely employed by the community.
To investigate whether those OOD robust models can still maintain their performance under our dynamic evaluation protocol, we evaluate pretrained ResNet50 models combined with leading methods from the ImageNet-C leaderboard, namely Stylized ImageNet training (SIN; (Geirhos et al., 2019)), adversarial noise training (ANT; (Rusak et al.)), a combination of ANT and SIN (ANT+SIN; (Rusak et al.)), optimized data augmentation using Augmix (AugMix; (Hendrycks et al., 2019)), DeepAugment (DeepAug; (Hendrycks et al., 2021a)), and a combination of Augmix and DeepAugment (DeepAug+AM; (Hendrycks et al., 2021a)).
The results are displayed in Table 2. Surprisingly, we find that some common-corruption-robust models, i.e., SIN, ANT and ANT+SIN, fail to maintain their strength under our dynamic evaluation protocol. Take the SIN method as an example: its OOR is 42.92, which is even lower than that of a vanilla ResNet50. As these methods are well fitted to the ImageNet-C benchmark, such results expose the weakness of relying on fixed benchmarks to rank methods: the selected best method may not reflect robustness in the real world, but merely fit certain datasets well, which in turn underscores the necessity of our dynamic evaluation protocol.
DeepAug, Augmix and DeepAug+AM perform better than the SIN and ANT methods in terms of OOR, as they dynamically perturb the datasets, which alleviates the hazards of “model selection with the test set” to some extent. However, their performance is limited by the range of perturbations allowed, resulting in only a marginal improvement over ResNet50 under our evaluation protocol.
In addition, we visualize the counterfactual images generated according to the evaluated style-shift robust models in Figure 2. More results are shown in Appendix L. Specifically, we have the following observations:
Preservation of local texture details. A number of recent empirical findings point to an important role of object textures for CNNs, where object textures are more important than global object shapes for a CNN model to learn (Gatys et al., 2015; Ballester & Araujo, 2016; Gatys et al., 2017; Brendel & Bethge, 2019; Geirhos et al., 2019; Wang et al., 2020b). We notice that our generated counterfactual images may preserve misleading local texture details, which makes the evaluation task much harder since textures are no longer predictive but instead a nuisance factor (as desired). For the counterfactual image generated for the DeepAug method (Figure 2f), we produce a skin texture similar to chicken skin, and the fish head becomes more and more chicken-like. The ResNet trained with DeepAug is misled by this corruption.
Generalization to shape perturbations. Moreover, since our attack intensity can be dynamically adjusted based on the model’s gradient while still maintaining the causal structures, the perturbations we produce are strong enough that they are not limited to object textures but can even include a certain degree of shape perturbation. As it is acknowledged that networks with a higher shape bias are inherently more robust to many different image distortions and reach higher performance on classification tasks, we observe that the counterfactual images generated for the SIN (Figure 2b and Figure 2i) and ANT+SIN (Figure 2d and Figure 2k) methods are shape-perturbed and successfully attack the models.
Recognition of model properties. With the combination of different methods, the generated counterfactual images become more comprehensive. For example, the counterfactual image generated for DeepAug+AM (Figure 2g) preserves the chicken-like head seen for DeepAug and the skin patterns seen for Augmix. As our evaluation method does not memorize the model it evaluates, this result shows that our method can recognize model properties and automatically generate hard counterfactual images to complete the evaluation.
Overall, these visualizations reveal that our dynamic evaluation protocol adjusts its attack strategy based on the properties of each model and automatically generates diversified counterfactual images that complement static benchmarks, i.e., ImageNet-C, in exposing model weaknesses.
4.4 UNDERSTANDING THE PROPERTIES OF OUR EVALUATION SYSTEM
We continue by investigating several properties of our evaluation system in the next couple of sections. To save space, we mainly present the results of the CIFAR10 experiment here and defer the details to the appendix:
• In Appendix B, we explored the transferability of the generated images. The reasonable transferability we observe suggests that our method of generating images can potentially be used in a broader scope: we can also leverage the method to generate a static set of images and set up a benchmark dataset to help the development of robustness methods.
• In Appendix C, we test whether initiating the perturbation process with an adversarial example will further degrade the OOR. We find that initiating with FGSM adversarial examples (Goodfellow et al., 2015) barely affects the OOR.
• In Appendix D, we compare the vanilla model to a model trained by PGD (Madry et al., 2017). We find that the adversarially trained model and the vanilla-trained model process the data differently. However, their robustness weak spots are exposed to a similar degree by our test system.
• In Appendix E, we explored the possibility of improving the evaluated robustness by augmenting the training data with images generated by our evaluation system. However, due to the required computational load, we only use a static set of generated images to train the model, and the results suggest that a static set of augmentation images cannot sufficiently robustify the model against our evaluation system.
• We also notice that the generated images tend to shift the color of the original images, so we tested the robustness of grayscale models in Appendix F; the results suggest that removing the color information does not improve robustness.
4.5 EXPERIMENTS REGARDING METHOD CONFIGURATION
Generator Configuration. We conduct an ablation study on the generator choice to check whether the performance rankings in Table 1 and Table 2 hold. We consider several image generator architectures, namely variational autoencoders (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) like Efficient-VDVAE (Hazami et al., 2022), diffusion models (Sohl-Dickstein et al., 2015) like Improved DDPM (Nichol & Dhariwal, 2021) and ADM (Dhariwal & Nichol, 2021), and GANs like StyleGAN-XL (Sauer et al., 2022). As shown in Table 3, the validation rate of the oracle stays stable across all the image generators. We find that the conclusion is consistent under different generator choices, which validates the correctness of our conclusions in Section 4.2 and Section 4.3.
Sparse VQGAN. In the experiments on sparse VQGAN, we find that only 0.69% of the dimensions are highly correlated with style. Therefore, we mask the remaining 99.31% of the dimensions to create a sparse submodel of VQGAN for efficient perturbation. The running time is reduced by 12.7% on 9-class ImageNet and by 28.5% on ImageNet, respectively. Details can be found in Appendix G.
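Conceptually, the sparse submodel restricts the perturbation to the selected dimensions; a minimal sketch of such a masked latent update is given below (the helper name and step rule are illustrative, not our exact implementation).

```python
import torch

def masked_latent_step(z, grad, style_mask, step_size=0.1):
    """Update only the latent dimensions identified as style-related.
    `style_mask` is a boolean tensor that is True for the ~0.69% style dimensions;
    all remaining dimensions are left untouched."""
    update = step_size * grad.sign()
    return torch.where(style_mask, z + update, z)
```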
Step size. We experiment on the perturbation step size to find the best perturbation under the computation budget B. We find that a step size that is too small or too large leads to weak perturbations, while stronger image perturbations are generated when the step size stays in a mild range, i.e., 0.1 or 0.2. Details of our experiments on step size can be found in Appendix H.
5 DISCUSSION AND CONCLUSION
Potential limitation. We notice that the CLIP model has been influenced by the imbalanced sample distributions across the internet. We provide the detailed per-category results on 9-class ImageNet for a vanilla ResNet-18 in Appendix I. We observe that the oracle model can tolerate a much more significant perturbation for samples labelled as Dog (VR 0.95) or Cat (VR 0.94) than for samples labelled as Primate (VR 0.48). The OOR value for Primate images is much higher than for other categories, creating an illusion that the evaluated models are robust against perturbed Primate images. However, this illusion is caused by a limitation of the pretrained model: the oracle can only handle slightly perturbed Primate samples.
The usage of the oracle. Is it cheating to use an oracle? The answer might depend on one's perspective, but we remind some readers that, in general, it is impossible to maintain the underlying causal structure during perturbation without prior knowledge (Locatello et al., 2019; Mahajan et al., 2019; Wang et al., 2021; Bareinboim et al., 2020; Xia et al., 2021; Pearl, 2009, Sec. 1.4).
Conclusion. To conclude, in this paper we first summarized the common practices of model evaluation strategies for robust vision machine learning. We then discussed three desiderata for a robustness evaluation protocol. Further, we offered a simple method that fulfills these three desiderata at the same time, serving the purpose of evaluating vision models' robustness beyond generic i.i.d. benchmarks, without requiring prior knowledge of the underlying causal structure depicted by the images, although relying on a plausible oracle.
ETHICS STATEMENT
The primary goal of this paper is to introduce a new evaluation protocol for vision machine learning research that can generate sufficiently perturbed samples from the original samples while maintaining the causal structures by assuming an oracle. Thus, we can introduce significant variations of the existing data while being free from additional human effort. With our approach, we hope to renew the benchmarks for current robustness evaluation, offer an understanding of the behaviors of deep vision models, and potentially facilitate the development of more truly robust models. Increasing the robustness of vision models can enhance their reliability and safety, which leads to trustworthy artificial intelligence and contributes to a wide range of application scenarios (e.g., manufacturing automation, surveillance systems, etc.). Manufacturing automation can improve production efficiency, but may also trigger social issues related to job losses and industrial restructuring. Advanced surveillance systems are conducive to improving social security, but may also raise public concerns about personal privacy violations.
We encourage further work to understand the limitations of machine vision models in OOD settings. More robust models carry the potential risk of automation bias, i.e., an undue trust in vision models. However, even if models are robust against corruptions in finite OOD datasets, they might still quickly fail on the massive generic perturbations existing in the real-world data space, i.e., the perturbations offered by our approach. Understanding under what conditions model decisions can be deemed reliable or not is still an open research question that deserves further attention.
REPRODUCIBILITY STATEMENT
Please refer to Appendix J for the references of all models we evaluated and links to the corresponding source code.
A NOTES ON THE EXPERIMENTAL SETUP
A.1 NOTES ON MODELS
Note that we only re-evaluate existing model checkpoints and hence do not perform any hyperparameter tuning for the evaluated models. Since our method works with a small amount of GPU resources, all model evaluations are done on a single NVIDIA GeForce RTX 3090 GPU.
A.2 HYPERPARAMETER TUNING
Our method is generally parameter-free except for the computation budget and the perturbation step size. In our experiments, the computation budget is the maximum iteration number of Sparse VQGAN. We set the predefined value to 10, as it guarantees a sufficient degree of perturbation at acceptable time cost. We provide the experiment on step size configuration in Section 4.5.
B TRANSFERABILITY OF GENERATED IMAGES
We first study whether our generated images are model specific, since the generation of the images involves the gradient of the original model. We train several architectures, namely EfficientNet (Tan & Le, 2019), MobileNet (Howard et al., 2017), SimpleDLA (Yu et al., 2018), VGG19 (Simonyan & Zisserman, 2014), PreActResNet (He et al., 2016b), GoogLeNet (Szegedy et al., 2015), and DenseNet121 (Huang et al., 2017) and test these models with the images. We also train another ResNet following the same procedure to check the transferability across different runs in one architecture.
Table 4: Performances of transferability.
Model          SA     OOR
ResNet         95.38  54.17
EfficientNet   91.37  68.48
MobileNet      91.63  68.72
SimpleDLA      92.25  66.16
VGG            93.54  70.57
PreActResNet   94.06  67.25
ResNet         94.67  66.23
GoogLeNet      95.06  66.68
DenseNet       95.26  66.43
Table 4 shows a reasonable transferability of the generated images, as the OOR values are all lower than the SA values, although we also observe an improvement in OOR when testing on the new models. These results suggest that our method of generating images can potentially be used in a broader scope: we can also leverage the method to generate a static set of images and set up a benchmark dataset to help the development of robustness methods.
In addition, our results might potentially help mitigate a debate on whether more accurate architectures are naturally more robust: on one hand, we have results showing that more accurate architectures indeed lead to better empirical performance on certain (usually fixed) robustness benchmarks (Rozsa et al., 2016; Hendrycks & Dietterich, 2019); on the other hand, some counterpoints suggest the higher numerical robustness performance is only because these models capture more non-robust features that also happen to exist in the fixed benchmarks (Tsipras et al., 2018; Wang et al., 2020b; Taori et al., 2020). Table 4 shows some examples supporting the latter argument: in particular, we notice that VGG, while ranked in the middle of the accuracy ladder, interestingly stands out when tested with generated images. These results continue to support our argument that a dynamic robustness test scenario can help reveal more properties of the model.
C INITIATING WITH ADVERSARIAL ATTACKED IMAGES
Since our method's use of the gradient of the evaluated model is reminiscent of gradient-based attack methods in the adversarial robustness literature, we test whether initiating the perturbation process with an adversarial example will further degrade the accuracy.
We first generate the images with the FGSM attack (Goodfellow et al., 2015). Table 5 shows that initiating with FGSM adversarial examples barely affects the OOR, which is probably because the dominant style-wise perturbation erases the imperceptible perturbations the adversarial examples introduce.
D ADVERSARIALLY ROBUST MODELS
With evidence suggesting the adversarially robust models are considered more human perceptually aligned (Engstrom et al., 2019; Zhang & Zhu, 2019; Wang et al., 2020b), we compare the vanilla model to a model trained by PGD (Madry et al., 2017) (ℓ∞ norm smaller than 0.03).
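For reference, a standard ℓ∞ PGD attack of the kind used to train such a model can be sketched as follows (hyperparameters are illustrative); Table 6 then compares the two models.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    """Standard l-infinity PGD (Madry et al., 2017); hyperparameters are illustrative."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # gradient-sign step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # keep a valid pixel range
    return x_adv.detach()
```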
Table 6: Performance comparison of the vanilla model and the PGD-trained model.

Data   Model  SA     OOR
Van.   Van.   95.38  57.79
Van.   PGD    85.70  95.96
PGD    Van.   95.38  81.73
PGD    PGD    85.70  66.18
As shown in Table 6, the adversarially trained model and the vanilla-trained model indeed process the data differently: the transferability of the generated images between these two regimes barely holds. In particular, the PGD model can almost maintain its performance when tested with the images generated by the vanilla model.
However, despite the differences, the PGD model's robustness weak spots are exposed by our test system to a similar degree as the vanilla model's: the OOR of the vanilla model and the PGD model are only 57.79 and 66.18, respectively. We believe this result further supports our belief that a robustness test needs to be a dynamic process generating images conditioned on the model under test, and thus further helps validate the importance of our contribution.
E AUGMENTATION THROUGH STATIC ADVERSARIAL TRAINING
Intuitively, inspired by the success of adversarial training (Madry et al., 2017) in defending models against adversarial attacks, a natural method to improve the empirical performances under our new test protocol is to augment the training data with counterfactual training images generated by the same process. We aim to validate the effectiveness of this method here.
However, the computational load of the generation process makes the standard adversarial training strategy impractical, and we can only afford one copy of the counterfactual training samples. Fortunately, recent advances in training with data augmentation can help learn robust representations from a limited set of augmented samples (Wang et al., 2020a), which we use here.
We report our results in Table 7. We first observe that the model trained with the augmentation data offered by our approach preserves a relatively high performance (OOR 89.10) when tested with the counterfactual images generated according to the vanilla model. Since we have shown in the main manuscript that the counterfactual samples have reasonable transferability, this result indicates the robustness gained by training with the counterfactual images generated by our approach.
In addition, when tested with the counterfactual images generated according to the augmented model, both models' performance drops significantly, which again indicates the effectiveness of our approach.
F GRAYSCALE MODELS
Our previous visualizations suggest that a shortcut the counterfactual generation system can take is to significantly shift the color of the images, against which a grayscale model should easily maintain its performance. Thus, we train a grayscale model by changing the ResNet input channels to 1 and transforming the input images to grayscale before feeding them into the model. We report the results in Table 8.
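A minimal sketch of such a grayscale setup is given below; the exact backbone and data pipeline used in our experiments may differ.

```python
import torch.nn as nn
from torchvision import models, transforms

# Replace the first convolution so the network accepts a single input channel.
grayscale_resnet = models.resnet18(num_classes=10)
grayscale_resnet.conv1 = nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1, bias=False)

# Convert images to grayscale before feeding them into the model.
to_grayscale = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),
])
```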
Interestingly, we notice that the grayscale model cannot defend against the shift introduced by our system simply by ignoring the color information. On the contrary, ignoring color seems to encourage our system to generate more counterfactual images that lower its performance.
In addition, we visualize some counterfactual images generated according to each model and show them in Figure 3. We can see some evidence that the grayscale model forces the generation system to focus more on the shape of the object and less on the color of the images. We find it particularly interesting that our system sometimes generates images differently for different models, while the resulting images deceive the respective models into making the same prediction.
G EXPERIMENTS TO SUPPORT SPARSE VQGAN
We generate the flattened latent representations of input images after the VQGAN Encoder with negative labels. Following Algorithm 1, we generate the flattened final latent representations before the VQGAN decoder with positive labels. Altogether, we form a binary classification dataset where the number of positive and negative samples is balanced. The positive samples are the latent representations of counterfactual images while the negative samples are the latent representations of input images. We set the split ratio of train and test set to be 0.8 : 0.2. We perform the explorations on various datasets, i.e. MNIST, CIFAR-10, 9-class ImageNet and ImageNet.
The classification model we consider is LASSO1, as it enables automatic feature selection with strong interpretability. We set the regularization strength to 36.36. We adopt saga (Defazio et al., 2014) as the solver used in the optimization process. The classification results are shown in Table 9.
1Although LASSO is originally a regression model, we probabilize the regression values to get the final classification results.
We observe that the coefficient matrix of features can be far sparser than we expect. Take the result on 9-class ImageNet as an example: surprisingly, we find that almost 99.31% of the dimensions on average can be discarded when making judgements. We argue that the preserved 0.69% of dimensions are highly correlated with the VQGAN perturbation. Therefore, we keep the corresponding 99.31% of dimensions unchanged and only let the remaining 0.69% of dimensions participate in the computation. Our computation load can be significantly reduced while still maintaining competitive performance compared with the unmasked version2.
We conduct the run-time experiments on a single NVIDIA GeForce RTX 3090 GPU. Following our experiment setting, we evaluate a vanilla ResNet-18 on 9-class ImageNet and a vanilla ResNet-50 on ImageNet. As shown in Table 10, the run-time on ImageNet can be reduced by 28.5% with our sparse VQGAN. Given the large fraction of masked dimensions (i.e., 99.31%), we attribute the relatively modest run-time improvement (i.e., 12.7% on 9-class ImageNet, 28.5% on ImageNet) to the fact that we have to perform mask and unmask operations each time the model gradient is calculated, which partly offsets the computational savings brought by the sparse VQGAN.
H PARAMETER STUDY ON STEP SIZE
We conduct the parameter study of the perturbation step size for our evaluation system on the CIFAR10 dataset. Specifically, we tune the step size in {0.01, 0.05, 0.1, 0.2, 0.5}. The maximum iteration (computation budget B) is set to 10. All results are produced with ResNet18 and averaged over five runs.
As shown in Figure 4, we observe that when the step size is too small, i.e., 0.01 or 0.05, sufficient perturbation strength cannot be achieved within the predefined maximum number of iterations, resulting in higher OOR scores. A large step size also leads to higher OOR scores: when the step size is large, i.e., 0.5, the perturbation is likely to stop after only a few iterations, which again yields weaker perturbations compared with using a relatively small step size for more iterations. When the step size is 0.01, the model seems to achieve performance comparable to the oracle (OOR 99.66); however, such OOR values are meaningless due to the small perturbation strength. Moreover, when the step size stays in a mild range, i.e., 0.1 or 0.2, stronger image perturbations are generated, while the performance in this range stays constant. Therefore, we choose a step size of 0.1 for the experiments.
2We note that the overlapping degree of the preserved dimensions for each dataset is not high, which means that we need to specify these dimensions when facing new datasets.
I ANALYSIS OF SAMPLES THAT ARE MISCLASSIFIED BY THE MODEL
We present the results on 9-class ImageNet experiment to show the details for each category.
Table 11 shows that the VR values for most categories are still higher than 80%, and some even reach 95%, which means we produce a sufficient number of counterfactual images. However, we notice that the VR value for Primate images is quite low compared with other categories, indicating that around 52% of the perturbed Primate images are blocked by the oracle. We have discussed this category-imbalance issue in Section 5.
As shown in Table 11, the OOR value for each category drops significantly compared with the SA value, indicating the weakness of the trained models. An interesting finding is that the OOR value for Primate images is considerably higher than for other categories, given the fact that more perturbed Primate images are blocked by the oracle. We attribute this to a limitation of foundation models: as the CLIP model has been influenced by the imbalanced sample distributions across the Internet, it only handles slightly perturbed Primate samples well. Therefore, the counterfactual images that are preserved are those that can be easily classified by the models.
J LIST OF EVALUATED MODELS
The following lists contains all models we evaluated on various datasets with references and links to the corresponding source code.
J.1 PRETRAINED VQGAN MODEL
We use the checkpoint of vqgan_imagenet_f16_16384 from https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/
J.2 PRETRAINED CLIP MODEL
Model weights of ViT-B/32 and usage code are taken from https://github.com/openai/CLIP
J.3 TIMM MODELS TRAINED ON IMAGENET (WIGHTMAN, 2019)
Weights are taken from https://github.com/rwightman/pytorch-image-models/tree/master/timm/models
1. ResNet50 (He et al., 2016a)
2. ViT (Dosovitskiy et al., 2020)
3. DeiT (Touvron et al., 2021)
4. Twins (Chu et al., 2021)
5. Visformer (Chen et al., 2021)
6. Swin (Liu et al., 2021)
7. ConvNeXt (Liu et al., 2022)
J.4 ROBUST RESNET50 MODELS
1. ResNet50 SIN+IN (Geirhos et al., 2019) https://github.com/rgeirhos/texture-vs-shape
2. ResNet50 ANT (Rusak et al.) https://github.com/bethgelab/game-of-noise
3. ResNet50 ANT+SIN (Rusak et al.) https://github.com/bethgelab/game-of-noise
4. ResNet50 Augmix (Hendrycks et al., 2019) https://github.com/google-research/augmix
5. ResNet50 DeepAugment (Hendrycks et al., 2021a) https://github.com/hendrycks/imagenet-r
6. ResNet50 DeepAugment+Augmix (Hendrycks et al., 2021a) https://github.com/hendrycks/imagenet-r
J.5 ADDITIONAL IMAGE GENERATORS
1. Efficient-VDVAE (Hazami et al., 2022) https://github.com/Rayhane-mamah/Efficient-VDVAE
2. Improved DDPM (Nichol & Dhariwal, 2021) https://github.com/open-mmlab/mmgeneration/tree/master/configs/improved_ddpm
3. ADM (Dhariwal & Nichol, 2021) https://github.com/openai/guided-diffusion
4. StyleGAN-XL (Sauer et al., 2022) https://github.com/autonomousvision/stylegan_xl
K LEADERBOARDS FOR ROBUST IMAGE MODEL
We launch leaderboards for robust image models. The goals of these leaderboards are as follows:
• To keep track of the state of the art on each adversarial vision task and of new model architectures with our dynamic evaluation process.
• To see the comparison of robust vision models at a glance (e.g., performance, speed, size, etc.).
• To access their research papers and implementations on different frameworks.
We offer a sample of the robust ImageNet classification leaderboard in supplementary materials.
L ADDITIONAL COUNTERFACTUAL IMAGE SAMPLES
In Figure 5, we provide additional counterfactual images generated according to each model. We have observations similar to Section 4.3. First, the generated counterfactual images exhibit diversity, covering many non-causal factors of the data, e.g., texture, shape and style. Second, our method can recognize model properties and automatically generate hard counterfactual images to complete the evaluation.
In addition, the generated images show reasonable transferability in Table 4, indicating that our method can potentially be used in a broader scope: we can also leverage the method to generate a static set of images and set up a benchmark dataset to help the development of robustness methods. Therefore, we also offer two static benchmarks in supplementary materials, generated based on a CNN architecture, i.e., ConvNeXt, and a transformer variant, i.e., ViT, respectively.
1. What is the main contribution of the paper regarding out-of-distribution (OOD) robustness evaluation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty and clarity?
3. Do you have any questions regarding the paper's content, such as unclear details or missing information? If so, please specify them.
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
Several works stress test ImageNet classifiers on out-of-distribution benchmarks, which are static, defined either through pre-defined perturbations or a new test-set collection procedure. The paper proposes measuring out-of-distribution robustness through dynamically generated samples using VQGAN for each model instead, constrained to maintain semantic information. The oracle in this case is CLIP pretrained on a large number of text-image pairs.
Strengths And Weaknesses
Strengths:
The paper leverages large pretrained models to generate per-model out-of-distribution datasets dynamically which is new.
Weakness:
The paper is unclear on several details which are important to clarify before I can assess the experiments. Please see below.
Clarity, Quality, Novelty And Reproducibility
Clarity
Several key details are unclear or missing in Section 3.1.
What exactly is the scoring function α? Only the inputs to the function are provided, but the exact formulation is missing.
What is the optimization objective with respect to the VQ-GAN latent space? How is the latent space perturbed?
In Section 3.2.1, the paper describes a “style-transfer” process to sparsify the VQGAN latent space. Can the authors describe the process in detail?
VR (Validation Rate): Why is the validation rate not 100, or at least close to 100, given that the algorithm introduces the constraint in Section 3.1 that the perturbed image has to be correctly classified?
I had to read until the methods section to understand that the oracle is a pretrained CLIP model. It would be nice if this were introduced as early as possible, in the introduction.
Some typos and rephrasing:
as high as a human can reach -> as high as a human
while push the study of robustness evaluation further -> while pushing the study
More details of these lines
oracle-parallel performance -> performance comparable to the oracle
by assuming an oracle -> with respect to an oracle
instead of indirectly compare models’ robustness -> instead of indirectly comparing models robustness
in either machine learning context or causality context -> in either context of machine learning or causality.
the accuracy on the images our generation process successfully produces a counterfactual image -> the accuracy on the counterfactual * images, that our generative model successfully produces
when tested by data from different distributions -> when tested on data from different distributions
ICLR | Title
Truth or backpropaganda? An empirical investigation of deep learning theory
Abstract
We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike. In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization plays a role; (4) find that rank does not correlate with generalization or robustness in a practical setting.
N/A
We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike. In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization plays a role; (4) find that rank does not correlate with generalization or robustness in a practical setting.
1 INTRODUCTION
Modern deep learning methods are descendent from such long-studied fields as statistical learning, optimization, and signal processing, all of which were built on mathematically rigorous foundations. In statistical learning, principled kernel methods have vastly improved the performance of SVMs and PCA (Suykens & Vandewalle, 1999; Schölkopf et al., 1997), and boosting theory has enabled weak learners to generate strong classifiers (Schapire, 1990). Optimizers in deep learning are borrowed from the field of convex optimization , where momentum optimizers (Nesterov, 1983) and conjugate gradient methods provably solve ill-conditioned problems with high efficiency (Hestenes & Stiefel, 1952). Deep learning harnesses foundational tools from these mature parent fields.
Despite its rigorous roots, deep learning has driven a wedge between theory and practice. Recent theoretical work has certainly made impressive strides towards understanding optimization and generalization in neural networks. But doing so has required researchers to make strong assumptions and study restricted model classes.
In this paper, we seek to understand whether deep learning theories accurately capture the behaviors and network properties that make realistic deep networks work. Following a line of previous work, such as Swirszcz et al. (2016), Zhang et al. (2016), Balduzzi et al. (2017) and Santurkar et al. (2018), we put the assumptions and conclusions of deep learning theory to the test using experiments with both toy networks and realistic ones. We focus on the following important theoretical issues:
• Local minima: Numerous theoretical works argue that all local minima of neural loss functions are globally optimal or that all local minima are nearly optimal. In practice, we find highly suboptimal local minima in realistic neural loss functions, and we discuss reasons why suboptimal local minima exist in the loss surfaces of deep neural networks in general.
• Weight decay and parameter norms: Research inspired by Tikhonov regularization suggests that low-norm minima generalize better, and for many, this is an intuitive justification for simple regularizers like weight decay. Yet for neural networks, it is not at all clear which form of ℓ2-regularization is optimal. We show this by constructing a simple alternative: biasing solutions toward a non-zero norm still works and can even measurably improve performance for modern architectures.
• Neural tangent kernels and the wide-network limit: We investigate theoretical results concerning neural tangent kernels of realistic architectures. While stochastic sampling of the tangent kernels suggests that theoretical results on tangent kernels of multi-layer networks may apply to some multi-layer networks and basic convolutional architectures, the predictions from theory do not hold for practical networks, and the trend even reverses for ResNet architectures. We show that the combination of skip connections and batch normalization is critical for this trend in ResNets.
• Rank: Generalization theory has provided guarantees for the performance of low-rank networks. However, we find that regularization which encourages high-rank weight matrices often outperforms that which promotes low-rank matrices. This indicates that low-rank structure is not a significant force behind generalization in practical networks. We further investigate the adversarial robustness of low-rank networks, which are thought to be more resilient to attack, and we find empirically that their robustness is often lower than the baseline or even a purposefully constructed high-rank network.
2 LOCAL MINIMA IN LOSS LANDSCAPES: DO SUBOPTIMAL MINIMA EXIST?
It is generally accepted that “in practice, poor local minima are rarely a problem with large networks” (LeCun et al., 2015). However, exact theoretical guarantees for this statement are elusive. Various theoretical studies of local minima have investigated spin-glass models (Choromanska et al., 2014), deep linear models (Laurent & Brecht, 2018; Kawaguchi, 2016), parallel subnetworks (Haeffele & Vidal, 2017), and dense fully connected models (Nguyen et al., 2018) and have shown that either all local minima are global or all have a small optimality gap. The apparent scarcity of poor local minima has led practitioners to develop the intuition that bad local minima (“bad” meaning high loss value and suboptimal training performance) are practically non-existent.
To further muddy the waters, some theoretical works prove the existence of local minima. Such results exist for simple fully connected architectures (Swirszcz et al., 2016), single-layer networks (Liang et al., 2018; Yun et al., 2018), and two-layer ReLU networks (Safran & Shamir, 2017). For example, (Yun et al., 2019) show that local minima exist in single-layer networks with univariate output and unique datapoints. The crucial idea here is that all neurons are activated for all datapoints at the suboptimal local minima. Unfortunately, these existing analyses of neural loss landscapes require strong assumptions (e.g. random training data, linear activation functions, fully connected layers, or extremely wide network widths) — so strong, in fact, that it is reasonable to question whether these results have any bearing on practical neural networks or describe the underlying cause of good optimization performance in real-world settings.
In this section, we investigate the existence of suboptimal local minima from a theoretical perspective and an empirical one. If suboptimal local minima exist, they are certainly hard to find by standard methods (otherwise training would not work). Thus, we present simple theoretical results that inform us on how to construct non-trivial suboptimal local minima, concretely generalizing previous constructions, such as those by (Yun et al., 2019). Using experimental methods inspired by theory, we easily find suboptimal local minima in the loss landscapes of a range of classifiers.
Trivial local minima are easy to find in ReLU networks – consider the case where bias values are sufficiently low so that the ReLUs are “dead” (i.e. inputs to ReLUs are strictly negative). Such a point is trivially a local minimum. Below, we make a more subtle observation that multilayer perceptrons (MLPs) must have non-trivial local minima, provided there exists a linear classifier that
performs worse than the neural network (an assumption that holds for virtually any standard benchmark problem). Specifically, we show that MLP loss functions contain local minima where they behave identically to a linear classifier on the same data.
We now define a family of low-rank linear functions which represent an MLP. Let “rank-s affine function” denote an operator of the form G(x) = Ax + b with rank(A) = s.
Definition 2.1. Consider a family of functions, {Fφ : Rm → Rn}φ∈RP , parameterized by φ. We say this family has rank-s affine expression if, for all rank-s affine functions G : Rm → Rn and finite subsets Ω ⊂ Rm, there exists φ with Fφ(x) = G(x), ∀x ∈ Ω. If s = min(n,m), we say that this family has full affine expression.
We investigate a family of L-layer MLPs with ReLU activation functions, {Fφ : Rm → Rn}φ∈Φ, with parameter vectors φ = (A1, b1, A2, b2, . . . , AL, bL) and Fφ(x) = HL(f(HL−1(. . . f(H1(x))))), where f denotes the ReLU activation function and Hi(z) = Ai z + bi. Let Ai ∈ Rni×ni−1 and bi ∈ Rni with n0 = m and nL = n.
Lemma 1. Consider a family of L-layer multilayer perceptrons with ReLU activations, {Fφ : Rm → Rn}φ∈Φ, and let s = mini ni be the minimum layer width. Such a family has rank-s affine expression.
Proof. The idea of the proof is to use the singular value decomposition of any rank-s affine function to construct the MLP layers and pick a bias large enough for all activations to remain positive. See Appendix A.1.
The ability of MLPs to represent linear networks allows us to derive a theorem which implies that arbitrarily deep MLPs have local minima at which the performance of the underlying model on the training data is equal to that of a (potentially low-rank) linear model. In other words, neural networks inherit the local minima of elementary linear models.
Theorem 1. Consider a training set {(xi, yi)}Ni=1 and a family {Fφ}φ of MLPs with s = mini ni being the smallest width. Consider a parameterized affine function GA,b solving
\[
\min_{A,\,b}\; \mathcal{L}\big(G_{A,b};\, \{(x_i, y_i)\}_{i=1}^N\big), \quad \text{subject to } \operatorname{rank}(A) \le s, \qquad (1)
\]
for a continuous loss function L. Then, for each local minimum, (A′,b′), of the above training problem, there exists a local minimum, φ′, of the MLP loss L(Fφ; {(xi, yi)}Ni=1) with the property that Fφ′(xi) = GA′,b′(xi) for i = 1, 2, ..., N .
Proof. See appendix A.2.
The proof of the above theorem constructs a network in which all activations of all training examples are positive, generalizing previous constructions of this type such as Yun et al. (2019) to more realistic architectures and settings. Another paper has employed a similar construction concurrently to our own work (He et al., 2020). We do expect that the general problem in expressivity occurs every time the support of the activations coincides for all training examples, as the latter reduces the deep network to an affine linear function (on the training set), which relates to the discussion in Balduzzi et al. (2017). We test this hypothesis below by initializing deep networks with biases of high variance. Remark 2.1 (CNN and more expressive local minima). Note that the above constructions of Lemma 1 and Theorem 1 are not limited to MLPs and could be extended to convolutional neural networks with suitably restricted linear mappings Gφ by using the convolution filters to represent identities and using the bias to avoid any negative activations on the training examples. Moreover, shallower MLPs can similarly be embedded into deeper MLPs recursively by replicating the behavior of each linear layer of the shallow MLP with several layers of the deep MLP. Linear classifiers, or even shallow MLPs, often have higher training loss than more expressive networks. Thus, we can use the idea of Theorem 1 to find various suboptimal local minima in the loss landscapes of neural networks. We confirm this with subsequent experiments.
We find that initializing a network at a point that approximately conforms to Theorem 1 is enough to get trapped in a bad local minimum. We verify this by training a linear classifier on CIFAR-10 with weight decay (which has a test accuracy of 40.53%, a loss of 1.57, and a gradient norm of 0.00375 w.r.t. the logistic regression objective). We then initialize a multilayer network as described in Lemma 1 to approximate this linear classifier and recompute these statistics on the full network (see Table 1). When training with this initialization, the gradient norm drops further, moving parameters even closer to the linear minimizer. The final training result still yields positive activations for the entire training dataset.
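For illustration, the idea behind this initialization can be sketched in PyTorch as follows; it is a minimal one-hidden-layer variant of the Lemma 1 construction under stated assumptions, not the exact code used for Table 1.

```python
import torch
import torch.nn as nn

def mlp_from_linear(W, b, X_train, margin=1.0):
    """Build a one-hidden-layer ReLU MLP that reproduces the linear map W x + b
    on the training data: shift the bias by a constant c large enough that every
    hidden pre-activation is positive, so the ReLU acts as the identity there."""
    out_dim, in_dim = W.shape
    with torch.no_grad():
        pre_act = X_train @ W.T + b                     # pre-activations on training data
        c = (-pre_act.min() + margin).clamp(min=0.0)    # shift keeping all units active
    mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim))
    with torch.no_grad():
        mlp[0].weight.copy_(W)
        mlp[0].bias.copy_(b + c)                        # first layer: shifted linear map
        mlp[2].weight.copy_(torch.eye(out_dim))
        mlp[2].bias.copy_(-c * torch.ones(out_dim))     # undo the shift at the output
    return mlp
```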
Moreover, any isolated local minimum of a linear network results in many local minima of an MLP Fφ′ , as the weights φ′ constructed in the proof of Theorem 1 can undergo transformations such as scaling, permutation, or even rotation without changing Fφ′ as a function during inference, i.e. Fφ′(x) = Fφ(x) for all x for an infinite set of parameters φ, as soon as F has at least one hidden layer.
While our first experiment initializes a deep MLP at a local minimum it inherited from a linear one to empirically illustrate our findings of Theorem 1, Table 1 also illustrates that similarly bad local minima are obtained when choosing large biases (third row) and choosing biases with large variance (fourth row) as conjectured above. To significantly reduce the bias, however, and still obtain a subpar optimum, we need to rerun the experiment with SGD without momentum, as shown in the last row, reflecting common intuition that momentum is helpful to move away from bad local optima.
Remark 2.2 (Sharpness of sub-optimal local optima). An interesting additional property of minima found using the previously discussed initializations is that they are “sharp”. Proponents of the sharp-flat hypothesis for generalization have found that minimizers with poor generalization live in sharp attracting basins with low volume and thus low probability in parameter space (Keskar et al., 2016; Huang et al., 2019), although care has to be taken to correctly measure sharpness (Dinh et al., 2017). Accordingly, we find that the maximum eigenvalue of the Hessian at each suboptimal local minimum is significantly higher than those at near-global minima. For example, the maximum eigenvalue of the initialization by Lemma 1 in Table 1 is estimated as 113,598.85 after training, whereas that of the default initialization is only around 24.01. While our analysis has focused on sub-par local optima in training instead of global minima with sub-par generalization, both the scarcity of local optima during normal training and the favorable generalization properties of neural networks seem to correlate with their sharpness.
In light of our finding that neural networks trained with unconventional initialization reach suboptimal local minima, we conclude that poor local minima can readily be found with a poor choice of hyperparameters. Suboptimal minima are less scarce than previously believed, and neural networks avoid these because good initializations and stochastic optimizers have been fine-tuned over time. Fortunately, promising theoretical directions may explain good optimization performance while remaining compatible with empirical observations. The approach followed by Du et al. (2019) analyzes the loss trajectory of SGD, showing that it avoids bad minima. While this work assumes (unrealistically) large network widths, this theoretical direction is compatible with empirical studies, such as Goodfellow et al. (2014), showing that the training trajectory of realistic deep networks does not encounter significant local minima.
3 WEIGHT DECAY: ARE SMALL ℓ2-NORM SOLUTIONS BETTER?
Classical learning theory advocates regularization for linear models, such as SVM and linear regression. For SVM, ℓ2 regularization endows linear classifiers with a wide-margin property (Cortes & Vapnik, 1995), and recent work on neural networks has shown that minimum-norm neural network interpolators benefit from over-parametrization (Hastie et al., 2019). Following the long history of explicit parameter norm regularization for linear models, weight decay is used for training nearly all high-performance neural networks (He et al., 2015a; Chollet, 2016; Huang et al., 2017; Sandler et al., 2018).
In combination with weight decay, all of these cutting-edge architectures also employ batch normalization after convolutional layers (Ioffe & Szegedy, 2015). With that in mind, van Laarhoven (2017) shows that the regularizing effect of weight decay is counteracted by batch normalization, which removes the effect of shrinking weight matrices. Zhang et al. (2018) argue that the synergistic interaction between weight decay and batch norm arises because weight decay plays a large role in regulating the effective learning rate of networks, since scaling down the weights of convolutional layers amplifies the effect of each optimization step, effectively increasing the learning rate. Thus, weight decay increases the effective learning rate as the regularizer drags the parameters closer and closer towards the origin. The authors also suggest that data augmentation and carefully chosen learning rate schedules are more powerful than explicit regularizers like weight decay.
Other work echoes this sentiment and claims that weight decay and dropout have little effect on performance, especially when using data augmentation (Hernández-García & König, 2018). Hoffer et al. (2018) further study the relationship between weight decay and batch normalization, and they develop normalization with respect to other norms. Shah et al. (2018) instead suggest that minimum norm solutions may not generalize well in the over-parametrized setting.
We find that the difference between the performance of standard network architectures with and without weight decay is often statistically significant, even with a high level of data augmentation, for example, horizontal flips and random crops on CIFAR-10 (see Tables 2 and 3). But is weight decay the most effective form of ℓ2 regularization? Furthermore, is the positive effect of weight decay because the regularizer promotes small-norm solutions? We generalize weight decay by biasing the ℓ2 norm of the weight vector towards other values using the following regularizer, which we call norm-bias:
\[
R_\mu(\phi) = \left| \left( \sum_{i=1}^{P} \phi_i^2 \right) - \mu^2 \right|. \qquad (2)
\]
R_0 is equivalent to weight decay, but we find that we can further improve performance by biasing the weights towards higher norms (see Tables 2 and 3). In our experiments on CIFAR-10 and CIFAR-100, networks are trained using weight decay coefficients from their respective original papers. ResNet-18 and DenseNet are trained with µ2 = 2500 and norm-bias coefficient 0.005, and MobileNetV2 is trained with µ2 = 5000 and norm-bias coefficient 0.001. µ is chosen heuristically by first training a model with weight decay, recording the norm of the resulting parameter vector, and setting µ to be slightly higher than that norm in order to avoid norm-bias leading to a lower parameter norm than weight decay. While we find that weight decay improves results over a nonregularized baseline for all three models, we also find that models trained with large norm bias (i.e., large µ) outperform models trained with weight decay.
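A minimal PyTorch sketch of the norm-bias penalty in Eq. (2) is given below; setting µ = 0 recovers a plain squared-norm (weight decay style) penalty, and the example values in the usage line mirror the ResNet-18 setting above.

```python
import torch

def norm_bias_penalty(model, mu, coeff):
    """Norm-bias regularizer R_mu(phi) = | sum_i phi_i^2 - mu^2 | from Eq. (2);
    mu = 0 recovers a squared-norm (weight decay style) penalty."""
    sq_norm = sum(p.pow(2).sum() for p in model.parameters())
    return coeff * (sq_norm - mu ** 2).abs()

# usage sketch: loss = criterion(model(x), y) + norm_bias_penalty(model, mu=50.0, coeff=0.005)
```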
These results lend weight to the argument that explicit parameter norm regularization is in fact useful for training networks, even deep CNNs with batch normalization and data augmentation. However, the fact that norm-biased networks can outperform networks trained with weight decay suggests that any benefits of weight decay are unlikely to originate from the superiority of small-norm solutions.
To further investigate the effect of weight decay and parameter norm on generalization, we also consider models without batch norm. In this case, weight decay directly penalizes the norm of the linear operators inside a network, since there are no batch norm coefficients to compensate for the effect of shrinking weights. Our goal is to determine whether small-norm solutions are superior in this setting where the norm of the parameter vector is more meaningful.
In our first experiment without batch norm, we observe improved performance when training an MLP with norm-bias (see Table 3). In a state-of-the-art setting, we consider ResNet-20 with Fixup initialization, a ResNet variant that removes batch norm and instead uses a sophisticated initialization
that solves the exploding gradient problem (Zhang et al., 2019). We observe that weight decay substantially improves training over SGD with no explicit regularization — in fact, ResNets with this initialization scheme train quite poorly without explicit regularization and data normalization. Still, we find that norm-bias with $\mu^2 = 1000$ and norm-bias coefficient 0.0005 achieves better results than weight decay (see Table 3). This once again refutes the theory that small-norm parameters generalize better and brings into doubt any relationship between classical Tikhonov regularization and weight decay in neural networks. See Appendix A.5 for a discussion concerning the final parameter norms of Fixup networks as well as additional experiments on CIFAR-100, a harder image classification dataset.
4 KERNEL THEORY AND THE INFINITE-WIDTH LIMIT
In light of the recent surge of works discussing the properties of neural networks in the infinite-width limit, in particular connections between infinite-width deep neural networks and Gaussian processes (see Lee et al., 2017), several interesting theoretical works have appeared. The wide network limit and Gaussian process interpretations have inspired work on the neural tangent kernel (Jacot et al., 2018), while Lee et al. (2019) and Bietti et al. (2018) have used wide network assumptions to analyze the training dynamics of deep networks. The connection of deep neural networks to kernel-based learning theory seems promising, but how closely do current architectures match the predictions made for simple networks in the large-width limit?
We focus on the Neural Tangent Kernel (NTK), developed in Jacot et al. (2018). Theory dictates that, in the wide-network limit, the neural tangent kernel remains nearly constant as a network trains. Furthermore, neural network training dynamics can be described as gradient descent on a convex functional, provided the NTK remains nearly constant during training (Lee et al., 2019). In this section, we experimentally test the validity of these theoretical assumptions.
Fixing a network architecture, we use $\mathcal{F}$ to denote the function space parametrized by $\phi \in \mathbb{R}^P$. For the mapping $F : \mathbb{R}^P \to \mathcal{F}$, the NTK is defined by
$$\Phi(\phi) = \sum_{p=1}^{P} \partial_{\phi_p} F(\phi) \otimes \partial_{\phi_p} F(\phi), \qquad (3)$$
where the derivatives $\partial_{\phi_p} F(\phi)$ are evaluated at a particular choice of $\phi$ describing a neural network. The NTK can be thought of as a similarity measure between images; given any two images as input, the NTK returns an $n \times n$ matrix, where $n$ is the dimensionality of the feature embedding of the neural network. We sample entries from the NTK by drawing a set of $N$ images $\{x_i\}$ from a dataset,
and computing the entries in the NTK corresponding to all pairs of images in our image set. We do this for a random neural network $f : \mathbb{R}^m \to \mathbb{R}^n$ by computing the tensor $\Phi(\phi) \in \mathbb{R}^{N \times N \times n \times n}$ of all pairwise realizations, restricted to the given data:
$$\Phi(\phi)_{ijkl} = \sum_{p=1}^{P} \partial_{\phi_p} f(x_i, \phi)_k \cdot \partial_{\phi_p} f(x_j, \phi)_l \qquad (4)$$
By evaluating Equation 4 using automatic differentiation, we compute slices from the NTK before and after training for a large range of architectures and network widths. We consider image classification on CIFAR-10 and compare a two-layer MLP, a four-layer MLP, a simple 5-layer ConvNet, and a ResNet. We draw 25 random images from CIFAR-10 to sample the NTK before and after training. We measure the change in the NTK by computing the correlation coefficient of the (vectorized) NTK before and after training. We do this for many network widths and observe what happens in the wide-network limit. For the MLPs we increase the width of the hidden layers; for the ConvNet (6-layer, convolutions, ReLU, max-pooling) we increase the number of convolutional filters; for the ResNet we consider the WideResNet (Zagoruyko & Komodakis, 2016) architecture, where we increase its width parameter. We initialize all models with uniform He initialization as discussed in He et al. (2015b), departing from the specific Gaussian initializations in theoretical works in order to analyze the effects for modern architectures and methodologies.
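A minimal sketch of this sub-sampling procedure is given below, assuming a PyTorch model `f` with `n` outputs and a batch of `N` images; the variable names are ours, and the real experiments batch and subsample this computation rather than looping naively.

```python
import torch

def empirical_ntk(f, images):
    """Sample the empirical NTK slice of Equation 4 for a list of images.

    Returns a tensor of shape (N, N, n, n), where n is the output dimension of f."""
    params = [p for p in f.parameters() if p.requires_grad]
    jac = []  # jac[i] has shape (n, P): d f(x_i)_k / d phi_p, flattened over parameters
    for x in images:
        out = f(x.unsqueeze(0)).squeeze(0)          # shape (n,)
        rows = []
        for k in range(out.shape[0]):
            grads = torch.autograd.grad(out[k], params, retain_graph=True)
            rows.append(torch.cat([g.reshape(-1) for g in grads]))
        jac.append(torch.stack(rows))               # (n, P)
    jac = torch.stack(jac)                          # (N, n, P)
    # Phi_{ijkl} = sum_p J[i, k, p] * J[j, l, p]
    return torch.einsum('ikp,jlp->ijkl', jac, jac)
```

Comparing the flattened tensors returned before and after training gives the quantities plotted in Figure 1.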
The results are visualized in Figure 1, where we plot parameters of the NTK for these different architectures, showing how the number of parameters impacts the relative change in the NTK ($\|\Phi_1 - \Phi_0\|/\|\Phi_0\|$, where $\Phi_0$/$\Phi_1$ denotes the sub-sampled NTK before/after training) and the correlation coefficient ($\mathrm{Cov}(\Phi_1, \Phi_0)/(\sigma(\Phi_1)\,\sigma(\Phi_0))$). Jacot et al. (2018) predicts that the NTK should change very little during training in the infinite-width limit.
At first glance, it might seem that these expectations are hardly met for our (non-infinite) experiments. Figure 1a and Figure 1c show that the relative change in the NTK during training (and also
the magnitude of the NTK) is rapidly increasing with width and remains large in magnitude for a whole range of widths of convolutional architectures. The MLP architectures do show a trend toward small changes in the NTK, yet convergence to zero is slower in the 4-Layer case than in the 2-Layer case.
However, a closer look shows that almost all of the relative change in the NTK seen in Figure 1c is explained by a simple linear re-scaling of the NTK. It should be noted that the scaling of the NTK is strongly affected by the magnitude of parameters at initialization. Within the NTK theory of Lee et al. (2017), a linear rescaling of the NTK during training corresponds simply to a change in learning rate, and so it makes more sense to measure similarity using a scale-invariant metric.
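Concretely, the two similarity measures used in Figure 1 could be computed from the flattened NTK samples as in the sketch below (a minimal illustration with our own variable names, assuming `ntk_before` and `ntk_after` are the tensors returned by the sampling routine above).

```python
import torch

def ntk_change_metrics(ntk_before, ntk_after):
    """Relative change ||Phi1 - Phi0|| / ||Phi0|| and scale-invariant correlation."""
    phi0 = ntk_before.reshape(-1)
    phi1 = ntk_after.reshape(-1)
    rel_change = torch.norm(phi1 - phi0) / torch.norm(phi0)
    # Pearson correlation of the vectorized kernels; invariant to positive rescaling.
    c0, c1 = phi0 - phi0.mean(), phi1 - phi1.mean()
    corr = (c0 * c1).sum() / (torch.norm(c0) * torch.norm(c1))
    return rel_change.item(), corr.item()
```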
Measuring similarity between sub-sampled NTKs using the scale-invariant correlation coefficient, as in Figure 1b, is more promising. Surprisingly, we find that, as predicted in Jacot et al. (2018), the NTK changes very little (beyond a linear rescaling) for the wide ConvNet architectures. For the dense networks, the predicted trend toward small changes in the NTK also holds for most of the evaluated widths, although there is a dropoff at the end which may be an artifact of the difficulty of training these wide networks on CIFAR-10. For the Wide Residual Neural Networks, however, the general trend toward higher correlation in the wide network limit is completely reversed. The correlation coefficient decreases as network width increases, suggesting that the neural tangent kernel at initialization and after training becomes qualitatively more different as network width increases. The reversal of the correlation trend seems to be a property which emerges from the interaction of batch normalization and skip connections. Removing either of these features from the architecture leads to networks which have an almost constant correlation coefficient for a wide range of network widths, see Figure 6 in the appendix, calling for the consideration of both properties in new formulations of the NTK.
In conclusion, we see that although the NTK trends towards stability as the width of simple architectures increases, the opposite holds for the highly performant Wide ResNet architecture. Even further, neither the removal of batch normalization nor the removal of skip connections fully recovers the positive NTK trend. While we have hope that kernel-based theories of neural networks may yield guarantees for realistic (albeit wide) models in the future, current results do not sufficiently describe state-of-the-art architectures. Moreover, the already good behavior of models with unstable NTKs is an indicator that good optimization and generalization behaviors do not fundamentally hinge on the stability of the NTK.
5 RANK: DO NETWORKS WITH LOW-RANK LAYERS GENERALIZE BETTER?
State-of-the-art neural networks are highly over-parameterized, and their large number of parameters is a problem both for learning theory and for practical use. In the theoretical setting, rank has been used to tighten bounds on the generalization gap of neural networks. Generalization bounds from Harvey et al. (2017) are improved under conditions of low rank and high sparsity (Neyshabur et al., 2017) of parameter matrices, and the compressibility of low-rank matrices (and other low-dimensional structure) can be directly exploited to provide even stronger bounds (Arora et al., 2018). Further studies show a tendency of stochastic gradient methods to find low-rank solutions (Ji & Telgarsky, 2018). The tendency of SGD to find low-rank operators, in conjunction with results showing generalization bounds for low-rank operators, might suggest that the low-rank nature of these operators is important for generalization.
Langenberg et al. (2019) claim that low-rank networks, in addition to generalizing well to test data, are more robust to adversarial attacks. Theoretical and empirical results from the aforementioned paper lead the authors to make two major claims. First, the authors claim that networks which undergo adversarial training have low-rank and sparse matrices. Second, they claim that networks with low-rank and sparse parameter matrices are more robust to adversarial attacks. We find in our experiments that neither claim holds up in practical settings, including ResNet-18 models trained on CIFAR-10.
We test the generalization and robustness properties of neural networks with low-rank and high-rank operators by promoting low-rank or high-rank parameter matrices in late epochs. We employ the regularizer introduced in Sedghi et al. (2018) to create the protocols RankMin, to find low-rank parameters, and RankMax, to find high-rank parameters. RankMin involves fine-tuning a pre-trained model by replacing linear operators with their low-rank approximations, retraining, and repeating this process. Similarly, RankMax involves fine-tuning a pre-trained model by clipping singular values from the SVD of parameter matrices in order to find high-rank approximations. We are able to manipulate the rank of matrices without strongly affecting the performance of the network. We use both natural training and 7-step projected gradient descent (PGD) adversarial training routines (Madry et al., 2017). The goal of the experiment is to observe how the rank of weight matrices impacts generalization and robustness. We start by attacking naturally trained models with the standard PGD adversarial attack with $\epsilon = 8/255$. Then, we move to the adversarial training setting and test the effect of manipulating rank on generalization and on robustness.
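The core singular-value manipulation behind RankMin and RankMax could be sketched as below for a generic weight matrix; this is an illustrative simplification under our own assumptions (the thresholds are placeholders), and the paper applies the analogous operation to convolutional operators via the method of Sedghi et al. (2018) rather than to raw matrices.

```python
import torch

def rank_min_step(W, threshold):
    """Low-rank approximation: zero out singular values below a threshold."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    S = torch.where(S >= threshold, S, torch.zeros_like(S))
    return U @ torch.diag(S) @ Vh

def rank_max_step(W, cap):
    """High-rank approximation: clip singular values at a chosen constant,
    flattening the spectrum so more directions carry comparable weight."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    S = torch.minimum(S, torch.full_like(S, cap))
    return U @ torch.diag(S) @ Vh
```

In the experiments these steps are interleaved with a few epochs of retraining so that the loss stays small after each projection.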
In order to compare our results with Langenberg et al. (2019), we borrow the notion of effective rank, denoted by $r(W)$ for some matrix $W$. This continuous relaxation of rank is defined as $r(W) = \|W\|_* / \|W\|_F$, where $\|\cdot\|_*$, $\|\cdot\|_1$, and $\|\cdot\|_F$ are the nuclear norm, the 1-norm, and the Frobenius norm, respectively. Note that the singular values of convolution operators can be found quickly with a method from Sedghi et al. (2018), and that method is used here.
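For reference, the sketch below shows how effective rank can be computed from singular values, together with a minimal version of the FFT-based computation of a convolutional layer's singular values in the spirit of Sedghi et al. (2018); the kernel layout, the circular-convolution and stride-1 assumptions, and the variable names are our own simplifications.

```python
import numpy as np

def effective_rank(singular_values):
    """r(W) = ||W||_* / ||W||_F, computed from the singular values of W."""
    s = np.asarray(singular_values)
    return s.sum() / np.sqrt((s ** 2).sum())

def conv_singular_values(kernel, n):
    """Singular values of a 2D circular convolution on n x n inputs.

    kernel: array of shape (k, k, c_in, c_out). Following Sedghi et al. (2018),
    take the 2D FFT of the zero-padded kernel and collect the singular values
    of the resulting c_in x c_out matrix at every spatial frequency."""
    transform = np.fft.fft2(kernel, s=(n, n), axes=(0, 1))       # (n, n, c_in, c_out)
    return np.linalg.svd(transform, compute_uv=False).flatten()  # n * n * min(c_in, c_out) values
```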
In our experiments we investigate two architectures, ResNet-18 and ResNet-18 without skip connections. We train on CIFAR-10 and CIFAR-100, both naturally and adversarially. Table 4 shows that RankMin and RankMax achieve similar generalization on CIFAR-10. More importantly, when adversarially training, a setting where robustness is undeniably the goal, we see that RankMax outperforms both RankMin and standard adversarial training in robust accuracy. Figure 2 confirms that
these two training routines do, in fact, control effective rank. Experiments with CIFAR-100 yield similar results and are presented in Appendix A.7. It is clear that increasing rank using an analogue of rank-minimizing algorithms does not harm performance. Moreover, we observe that adversarial robustness does not imply low-rank operators, nor do low-rank operators imply robustness. The findings in Ji & Telgarsky (2018) are corroborated here, as the black dots in Figure 2 show that initializations are higher in rank than the trained models. Our investigation into the theoretical work on the rank of CNNs and the claims about adversarial robustness reveals that rank plays little to no role in the performance of CNNs in the practical setting of image classification.
6 CONCLUSION
This work highlights the gap between deep learning theory and observations in the real-world setting. We underscore the need to carefully examine the assumptions of theory and to move past the study of toy models, such as deep linear networks or single-layer MLPs, whose traits do not describe those of the practical realm. First, we show that the loss landscapes of realistic neural networks on realistic learning problems contain suboptimal local minima. Second, we show that low-norm parameters may not be optimal for neural networks, and in fact, biasing parameters to a non-zero norm during training improves performance on several popular datasets and a wide range of networks. Third, we show that the wide-network trends in the neural tangent kernel do not hold for ResNets and that the interaction between skip connections and batch normalization plays a large role. Finally, we show that low-rank linear operators and robustness are not correlated, especially for adversarially trained models.
ACKNOWLEDGMENTS
This work was supported by the AFOSR MURI Program, the National Science Foundation DMS directorate, and also the DARPA YFA and L2M programs. Additional funding was provided by the Sloan Foundation.
A APPENDIX
A.1 PROOF OF LEMMA 1
Lemma 1. Consider a family of $L$-layer multilayer perceptrons with ReLU activations $\{F_\phi : \mathbb{R}^m \to \mathbb{R}^n\}$ and let $s = \min_i n_i$ be the minimum layer width. Then this family has rank-$s$ affine expression.
Proof. Let $G$ be a rank-$s$ affine function, and $\Omega \subset \mathbb{R}^m$ be a finite set. Let $G(x) = Ax + b$ with $A = U\Sigma V$ being the singular value decomposition of $A$ with $U \in \mathbb{R}^{n \times s}$ and $V \in \mathbb{R}^{s \times m}$. We define $A_1 = \begin{bmatrix} \Sigma V \\ 0 \end{bmatrix}$, where $0$ is a (possibly void) $(n_1 - s) \times m$ matrix of all zeros, and $b_1 = c\mathbf{1}$ for $c = \max_{x_i \in \Omega,\, 1 \le j \le n_1} |(A_1 x_i)_j| + 1$ and $\mathbf{1} \in \mathbb{R}^{n_1}$ being a vector of all ones. We further choose $A_l \in \mathbb{R}^{n_l \times n_{l-1}}$ to have an $s \times s$ identity matrix in the upper left, and fill all other entries with zeros. This choice is possible since $n_l \ge s$ for all $l$. We define $b_l = [0 \;\; c\mathbf{1}]^T \in \mathbb{R}^{n_l}$, where $0 \in \mathbb{R}^{1 \times s}$ is a vector of all zeros and $\mathbf{1} \in \mathbb{R}^{1 \times (n_l - s)}$ is a (possibly void) vector of all ones. Finally, we choose $A_L = [U \;\; 0]$, where now $0$ is a (possibly void) $n \times (n_{L-1} - s)$ matrix of all zeros, and $b_L = -cA_L\mathbf{1} + b$ for $\mathbf{1} \in \mathbb{R}^{n_{L-1}}$ being a vector of all ones. Then one readily checks that $F_\phi(x) = G(x)$ holds for all $x \in \Omega$. Note that all entries of all activations are greater than or equal to $1 > 0$, such that no ReLU ever maps an entry to zero.
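The construction in the proof is concrete enough to check numerically. The sketch below (our own illustration, using NumPy, a random affine map, an arbitrary point set, and a two-layer instance of the construction) builds the weights as described and verifies that the ReLU network reproduces $G$ on the finite set while keeping every pre-activation positive.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s, n1 = 5, 3, 3, 8            # input dim, output dim, rank, hidden width (assumed values)
relu = lambda z: np.maximum(z, 0)

# Random rank-s affine map G(x) = Ax + b via its SVD-style factors.
U = np.linalg.qr(rng.normal(size=(n, s)))[0]
SV = rng.normal(size=(s, m))        # plays the role of Sigma @ V
A, b = U @ SV, rng.normal(size=n)
Omega = rng.normal(size=(10, m))    # finite point set

# Two-layer construction from the proof of Lemma 1.
A1 = np.vstack([SV, np.zeros((n1 - s, m))])
c = np.abs(Omega @ A1.T).max() + 1.0
b1 = c * np.ones(n1)
A2 = np.hstack([U, np.zeros((n, n1 - s))])
b2 = -c * A2 @ np.ones(n1) + b

hidden = relu(Omega @ A1.T + b1)            # all entries strictly positive
out = hidden @ A2.T + b2
assert hidden.min() > 0
assert np.allclose(out, Omega @ A.T + b)    # F_phi(x) == G(x) on Omega
```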
A.2 PROOF OF THEOREM 1
Theorem 1. Consider a training set, $\{(x_i, y_i)\}_{i=1}^N$, and a family $\{F_\phi\}$ of MLPs with $s = \min_i n_i$ being the smallest width. Consider the training of a rank-$s$ linear classifier $G_{A,b}$, i.e.,
$$\min_{A,b}\; L(G_{A,b}; \{(x_i, y_i)\}_{i=1}^N), \quad \text{subject to } \operatorname{rank}(A) \le s, \qquad (5)$$
for any continuous loss function $L$. Then for each local minimum, $(A', b')$, of the above training problem, there exists a local minimum, $\phi'$, of $L(F_\phi; \{(x_i, y_i)\}_{i=1}^N)$ with the property that $F_{\phi'}(x_i) = G_{A',b'}(x_i)$ for $i = 1, 2, \ldots, N$.
Proof. Based on the definition of a local minimum, there exists an open ball $D$ around $(A', b')$ such that
$$L(G_{A',b'}; \{(x_i, y_i)\}_{i=1}^N) \le L(G_{A,b}; \{(x_i, y_i)\}_{i=1}^N) \quad \forall (A, b) \in D \text{ with } \operatorname{rank}(A) \le s. \qquad (6)$$
First, we use the same construction as in the proof of Lemma 1 to find a function $F_{\phi'}$ with $F_{\phi'}(x_i) = G_{A',b'}(x_i)$ for all training examples $x_i$. Because the mapping $\phi \mapsto F_\phi(x_i)$ is continuous (not only for the entire network $F$ but also for all subnetworks), and because all activations of $F_{\phi'}$ are strictly positive and bounded away from zero, there exists an open ball $B(\phi', \delta_1)$ around $\phi'$ such that the activations of $F_\phi$ remain positive for all $x_i$ and all $\phi \in B(\phi', \delta_1)$. Consequently, the restriction of $F_\phi$ to the training set remains affine linear for $\phi \in B(\phi', \delta_1)$. In other words, for any $\phi \in B(\phi', \delta_1)$ we can write
$$F_\phi(x_i) = A(\phi) x_i + b(\phi) \quad \forall x_i,$$
by defining $A(\phi) = A_L A_{L-1} \cdots A_1$ and $b(\phi) = \sum_{l=1}^{L} A_L A_{L-1} \cdots A_{l+1} b_l$. Note that due to $s = \min_i n_i$, the resulting $A(\phi)$ satisfies $\operatorname{rank}(A(\phi)) \le s$. After restricting $\phi$ to an open ball $B(\phi', \delta_2)$, for $\delta_2 \le \delta_1$ sufficiently small, the above $(A(\phi), b(\phi))$ satisfy $(A(\phi), b(\phi)) \in D$ for all $\phi \in B(\phi', \delta_2)$. On this set, we, however, already know that the loss can only be greater than or equal to $L(F_{\phi'}; \{(x_i, y_i)\}_{i=1}^N)$ due to Equation 6. Thus, $\phi'$ is a local minimum of the underlying loss function.
A.3 ADDITIONAL COMMENTS REGARDING THEOREM 1
Note that our theoretical and experimental results do not contradict theoretical guarantees for deep linear networks (Kawaguchi, 2016; Laurent & Brecht, 2018) which show that all local minima are global. A deep linear network with $s = \min(n, m)$ is equivalent to a linear classifier, and in this case, the local minima constructed by Theorem 1 are global. However, this observation shows that Theorem 1 characterizes the gap between deep linear and deep nonlinear networks; the global minima predicted by linear network theories are inherited as (usually suboptimal) local minima when ReLUs are added. Thus, linear networks do not accurately describe the distribution of minima in non-linear networks.
A.4 ADDITIONAL RESULTS FOR SUBOPTIMAL LOCAL OPTIMA
Table 5 shows further experiments. As in the previous experiment, we use gradient descent to train a full ResNet-18 architecture on CIFAR-10 until convergence from different initializations. We find that essentially the same results appear for the deeper architecture: initializing with a very high bias leads to highly non-optimal solutions, in this case even solutions that are as bad as those obtained from a zero-norm initialization.
Further results on CIFAR-100 are shown in Tables 6 and 7. These experiments with MLP and ResNet-18 show the same trends as explained above, thus confirming that the results are not specific to the CIFAR-10 dataset.
A.5 DETAILS CONCERNING LOW-NORM REGULARIZATION EXPERIMENTS
Our experiments comparing regularizers all run for 300 epochs with an initial learning rate of 0.1, which decreases by a factor of 10 at epochs 100, 175, 225, and 275. We use the SGD optimizer with momentum 0.9.
We also tried negative weight decay coefficients, which lead to ResNet-18 CIFAR-10 performance above 90% while blowing up the parameter norm, but this performance is still suboptimal and is not informative concerning the optimality of minimum norm solutions. One might wonder if high norm-bias coefficients lead to even lower parameter norm than low weight decay coefficients. This
question may not be meaningful in the case of networks with batch normalization. In the case of ResNet-20 with Fixup, which does not contain running mean and standard deviation, the average parameter $\ell_2$ norm after training with weight decay is 24.51 while that of models trained with norm-bias is 31.62. Below, we perform the same tests on CIFAR-100, a substantially more difficult dataset. Weight decay coefficients are chosen to be ones used in the original paper for the corresponding architecture. The values of $\mu^2$ and the norm-bias coefficient are chosen to be 8100/0.005, 7500/0.001, and 2000/0.0005 for ResNet-18, DenseNet-40, and ResNet-20 with Fixup, respectively, using the same heuristic as described in the main body.
A.6 DETAILS ON THE NEURAL TANGENT KERNEL EXPERIMENT
For further reference, we include details on the NTK sampling during training epochs in Figure 3. We see that the parameter norm (Right) behaves normally (all of these experiments are trained with a standard weight decay parameter of 0.0005), yet the NTK norm (Left) rapidly increases. Most of this increase, however, is a rescaling of the kernel, as the correlation plot (Middle) is much less drastic. We do see that most change happens in the very first epochs of training, whereas the kernel only changes slowly later on.
A.7 DETAILS ON RANKMIN AND RANKMAX
We employ routines to promote both low-rank and high-rank parameter matrices. We do this by computing approximations to the linear operators at each layer. Since convolutional layers are linear operations, we know that there is a matrix whose dimensions are the number of parameters in the input to the convolution and the number of parameters in the output of the convolution. In order to compute low-rank approximations of these operators, one could write down the matrix corresponding to the convolution, and then compute a low-rank approximation using a singular value decomposition (SVD). In order to make this problem computationally tractable, we used the method for computing singular values of convolution operators derived in Sedghi et al. (2018). We were then able to do low-rank approximation in the classical sense, by setting each singular value below some threshold to zero. In order to compute high-rank operators, we clipped the singular values so that, when multiplying the SVD factors, each singular value is set to the minimum of some chosen constant and the true singular value. It is important to note here that these approximations to the convolutional layers, when done naively, can return convolutions with larger filters. To be precise, an $n \times n$ filter will map to a $k \times k$ filter through our rank modifications, where $k \ge n$. We follow the method in Sedghi et al. (2018), where these filters are pruned back down by only using $n \times n$ entries in the output. When naturally training ResNet-18 and Skipless ResNet-18 models, we train with a batch size of 128 for 200 epochs with the learning rate initialized to 0.01 and decreasing by a factor of 10 at epochs 100, 150, 175, and 190 (for both CIFAR-10 and CIFAR-100). When adversarially training these two models on CIFAR-10 data, we use the same hyperparameters. However, in order to adversarially train on CIFAR-100, we train ResNet-18 with a batch size of 256 for 300 epochs with an initial learning rate of 0.1 and a decrease by a factor of 10 at epochs 200 and 250. For adversarially training Skipless ResNet-18 on CIFAR-100, we use a batch size of 256 for 350 epochs with an initial learning rate of 0.1 and a decrease by a factor of 10 at epochs 200, 250, and 300. Adversarial training is done with an $\ell_\infty$ 7-step PGD attack with a step size of 2/255 and $\epsilon = 8/255$. For all of the training described above, we augment the data with random crops and horizontal flips.
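For completeness, a minimal sketch of the $\ell_\infty$ PGD attack used for adversarial training and evaluation is given below; it follows the standard formulation of Madry et al. (2017), with our own variable names and the assumption that inputs lie in $[0, 1]$.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, steps=7):
    """l_inf projected gradient descent attack with a random start."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach()
    return torch.clamp(x + delta, 0, 1)
```

Adversarial training then simply replaces each minibatch `x` with `pgd_attack(model, x, y)` before the usual gradient step.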
During 15 additional epochs of training we manipulate the rank as follows. RankMin and RankMax protocols are employed periodically in the last 15 epochs, taking care to make sure that the loss remains small. For these last epochs, the learning rate starts at 0.001 and decreases by a factor of 10 after the third and fifth epochs of the final 15 epochs. As shown in Table 10, we test the accuracy of each model on clean test data from the corresponding dataset, as well as on adversarial examples generated with 20-step PGD with $\epsilon = 8/255$ (with step size equal to 2/255) and $\epsilon = 1/255$ (with step size equal to 0.25/255).
When training multi-layer perceptrons on CIFAR-10, we train for 100 epochs with the learning rate initialized to 0.01 and decreasing by a factor of 10 at epochs 60, 80, and 90. Then, we train the network for 8 additional epochs, during which RankMin and RankMax networks undergo rank manipulation.

1. What are the main assumptions investigated in the paper regarding deep neural networks?
2. What are the weaknesses of the paper's empirical study, particularly in terms of the datasets and architectures used?
3. How does the reviewer assess the significance and timeliness of the work?
4. What additional experiments or analyses would strengthen the paper's conclusions?

Review
In this paper, the authors seek to examine carefully some assumptions investigated in the theory of deep neural networks. The paper examines the following theoretical assumptions: the existence of suboptimal local minima in loss landscapes, the relevance of weight decay and small L2-norm solutions, the connection between deep neural networks and kernel-based learning theory, and the generalization ability of networks with low-rank layers.
We think that this work is timely and of significant interest, since theoretical work on deep learning has made significant progress in recent years.
Since this paper seeks to provide an empirical study on the assumptions in deep learning theory, we think that the results are somewhat weak, as the paper is missing extensive analysis using several well-known datasets and several deep architectures and settings. For example, only the CIFAR-10 dataset is considered in the paper, and it is not clear whether the obtained results will generalize to other datasets. The same applies to the neural network architectures, as only an MLP is considered to answer the assumption about the existence of suboptimal minima, while only ResNet is considered to study the generalization abilities with low-rank layers. We think that this is not enough for a paper that aims to provide an empirical study.
-------
Reply to rebuttal
We thank the authors for taking into consideration our previous comments and suggestions, including going beyond MLP and adding experiments on other datasets. For this reason, we have increased the rating from "Weak Accept" to "Accept". |
ICLR

Title
Truth or backpropaganda? An empirical investigation of deep learning theory
Abstract
We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike. In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization plays a role; (4) find that rank does not correlate with generalization or robustness in a practical setting.
1 INTRODUCTION
Modern deep learning methods are descended from such long-studied fields as statistical learning, optimization, and signal processing, all of which were built on mathematically rigorous foundations. In statistical learning, principled kernel methods have vastly improved the performance of SVMs and PCA (Suykens & Vandewalle, 1999; Schölkopf et al., 1997), and boosting theory has enabled weak learners to generate strong classifiers (Schapire, 1990). Optimizers in deep learning are borrowed from the field of convex optimization, where momentum optimizers (Nesterov, 1983) and conjugate gradient methods provably solve ill-conditioned problems with high efficiency (Hestenes & Stiefel, 1952). Deep learning harnesses foundational tools from these mature parent fields.
Despite its rigorous roots, deep learning has driven a wedge between theory and practice. Recent theoretical work has certainly made impressive strides towards understanding optimization and generalization in neural networks. But doing so has required researchers to make strong assumptions and study restricted model classes.
In this paper, we seek to understand whether deep learning theories accurately capture the behaviors and network properties that make realistic deep networks work. Following a line of previous work, such as Swirszcz et al. (2016), Zhang et al. (2016), Balduzzi et al. (2017) and Santurkar et al. (2018), we put the assumptions and conclusions of deep learning theory to the test using experiments with both toy networks and realistic ones. We focus on the following important theoretical issues:
∗Authors contributed equally.
• Local minima: Numerous theoretical works argue that all local minima of neural loss functions are globally optimal or that all local minima are nearly optimal. In practice, we find highly suboptimal local minima in realistic neural loss functions, and we discuss reasons why suboptimal local minima exist in the loss surfaces of deep neural networks in general.
• Weight decay and parameter norms: Research inspired by Tikhonov regularization suggests that low-norm minima generalize better, and for many, this is an intuitive justification for simple regularizers like weight decay. Yet for neural networks, it is not at all clear which form of $\ell_2$ regularization is optimal. We show this by constructing a simple alternative: biasing solutions toward a non-zero norm still works and can even measurably improve performance for modern architectures.
• Neural tangent kernels and the wide-network limit: We investigate theoretical results concerning neural tangent kernels of realistic architectures. While stochastic sampling of the tangent kernels suggests that theoretical results on tangent kernels of multi-layer networks may apply to some multi-layer networks and basic convolutional architectures, the predictions from theory do not hold for practical networks, and the trend even reverses for ResNet architectures. We show that the combination of skip connections and batch normalization is critical for this trend in ResNets.
• Rank: Generalization theory has provided guarantees for the performance of low-rank networks. However, we find that regularization which encourages high-rank weight matrices often outperforms that which promotes low-rank matrices. This indicates that low-rank structure is not a significant force behind generalization in practical networks. We further investigate the adversarial robustness of low-rank networks, which are thought to be more resilient to attack, and we find empirically that their robustness is often lower than the baseline or even a purposefully constructed high-rank network.
2 LOCAL MINIMA IN LOSS LANDSCAPES: DO SUBOPTIMAL MINIMA EXIST?
It is generally accepted that “in practice, poor local minima are rarely a problem with large networks.” (LeCun et al., 2015). However, exact theoretical guarantees for this statement are elusive. Various theoretical studies of local minima have investigated spin-glass models (Choromanska et al., 2014), deep linear models (Laurent & Brecht, 2018; Kawaguchi, 2016), parallel subnetworks (Haeffele & Vidal, 2017), and dense fully connected models (Nguyen et al., 2018) and have shown that either all local minima are global or all have a small optimality gap. The apparent scarcity of poor local minima has led practitioners to develop the intuition that bad local minima (“bad” meaning high loss value and suboptimal training performance) are practically non-existent.
To further muddy the waters, some theoretical works prove the existence of local minima. Such results exist for simple fully connected architectures (Swirszcz et al., 2016), single-layer networks (Liang et al., 2018; Yun et al., 2018), and two-layer ReLU networks (Safran & Shamir, 2017). For example, (Yun et al., 2019) show that local minima exist in single-layer networks with univariate output and unique datapoints. The crucial idea here is that all neurons are activated for all datapoints at the suboptimal local minima. Unfortunately, these existing analyses of neural loss landscapes require strong assumptions (e.g. random training data, linear activation functions, fully connected layers, or extremely wide network widths) — so strong, in fact, that it is reasonable to question whether these results have any bearing on practical neural networks or describe the underlying cause of good optimization performance in real-world settings.
In this section, we investigate the existence of suboptimal local minima from a theoretical perspective and an empirical one. If suboptimal local minima exist, they are certainly hard to find by standard methods (otherwise training would not work). Thus, we present simple theoretical results that inform us on how to construct non-trivial suboptimal local minima, concretely generalizing previous constructions, such as those by (Yun et al., 2019). Using experimental methods inspired by theory, we easily find suboptimal local minima in the loss landscapes of a range of classifiers.
Trivial local minima are easy to find in ReLU networks – consider the case where bias values are sufficiently low so that the ReLUs are “dead” (i.e. inputs to ReLUs are strictly negative). Such a point is trivially a local minimum. Below, we make a more subtle observation that multilayer perceptrons (MLPs) must have non-trivial local minima, provided there exists a linear classifier that
performs worse than the neural network (an assumption that holds for virtually any standard benchmark problem). Specifically, we show that MLP loss functions contain local minima where they behave identically to a linear classifier on the same data.
We now define a family of low-rank linear functions which an MLP can represent. Let “rank-$s$ affine function” denote an operator of the form $G(x) = Ax + b$ with $\operatorname{rank}(A) = s$.

Definition 2.1. Consider a family of functions, $\{F_\phi : \mathbb{R}^m \to \mathbb{R}^n\}_{\phi \in \mathbb{R}^P}$, parameterized by $\phi$. We say this family has rank-$s$ affine expression if for all rank-$s$ affine functions $G : \mathbb{R}^m \to \mathbb{R}^n$ and finite subsets $\Omega \subset \mathbb{R}^m$, there exists $\phi$ with $F_\phi(x) = G(x)$ for all $x \in \Omega$. If $s = \min(n, m)$, we say that this family has full affine expression.
We investigate a family of $L$-layer MLPs with ReLU activation functions, $\{F_\phi : \mathbb{R}^m \to \mathbb{R}^n\}_{\phi \in \Phi}$, and parameter vectors $\phi$, i.e., $\phi = (A_1, b_1, A_2, b_2, \ldots, A_L, b_L)$ and $F_\phi(x) = H_L(f(H_{L-1}(\ldots f(H_1(x))))),$ where $f$ denotes the ReLU activation function and $H_i(z) = A_i z + b_i$. Let $A_i \in \mathbb{R}^{n_i \times n_{i-1}}$, $b_i \in \mathbb{R}^{n_i}$ with $n_0 = m$ and $n_L = n$.

Lemma 1. Consider a family of $L$-layer multilayer perceptrons with ReLU activations $\{F_\phi : \mathbb{R}^m \to \mathbb{R}^n\}_{\phi \in \Phi}$, and let $s = \min_i n_i$ be the minimum layer width. Such a family has rank-$s$ affine expression.
Proof. The idea of the proof is to use the singular value decomposition of any rank-s affine function to construct the MLP layers and pick a bias large enough for all activations to remain positive. See Appendix A.1.
The ability of MLPs to represent linear networks allows us to derive a theorem which implies that arbitrarily deep MLPs have local minima at which the performance of the underlying model on the training data is equal to that of a (potentially low-rank) linear model. In other words, neural networks inherit the local minima of elementary linear models.

Theorem 1. Consider a training set, $\{(x_i, y_i)\}_{i=1}^N$, and a family $\{F_\phi\}_\phi$ of MLPs with $s = \min_i n_i$ being the smallest width. Consider a parameterized affine function $G_{A,b}$ solving
$$\min_{A,b}\; L(G_{A,b}; \{(x_i, y_i)\}_{i=1}^N), \quad \text{subject to } \operatorname{rank}(A) \le s, \qquad (1)$$
for a continuous loss function $L$. Then, for each local minimum, $(A', b')$, of the above training problem, there exists a local minimum, $\phi'$, of the MLP loss $L(F_\phi; \{(x_i, y_i)\}_{i=1}^N)$ with the property that $F_{\phi'}(x_i) = G_{A',b'}(x_i)$ for $i = 1, 2, \ldots, N$.
Proof. See appendix A.2.
The proof of the above theorem constructs a network in which all activations of all training examples are positive, generalizing previous constructions of this type such as Yun et al. (2019) to more realistic architectures and settings. Another paper has employed a similar construction concurrently to our own work (He et al., 2020). We do expect that the general problem in expressivity occurs every time the support of the activations coincides for all training examples, as the latter reduces the deep network to an affine linear function (on the training set), which relates to the discussion in Balduzzi et al. (2017). We test this hypothesis below by initializing deep networks with biases of high variance. Remark 2.1 (CNN and more expressive local minima). Note that the above constructions of Lemma 1 and Theorem 1 are not limited to MLPs and could be extended to convolutional neural networks with suitably restricted linear mappings Gφ by using the convolution filters to represent identities and using the bias to avoid any negative activations on the training examples. Moreover, shallower MLPs can similarly be embedded into deeper MLPs recursively by replicating the behavior of each linear layer of the shallow MLP with several layers of the deep MLP. Linear classifiers, or even shallow MLPs, often have higher training loss than more expressive networks. Thus, we can use the idea of Theorem 1 to find various suboptimal local minima in the loss landscapes of neural networks. We confirm this with subsequent experiments.
We find that initializing a network at a point that approximately conforms to Theorem 1 is enough to get trapped in a bad local minimum. We verify this by training a linear classifier on CIFAR-10 with
weight decay (which has a test accuracy of 40.53%, a loss of 1.57, and a gradient norm of 0.00375 w.r.t. the logistic regression objective). We then initialize a multilayer network as described in Lemma 1 to approximate this linear classifier and recompute these statistics on the full network (see Table 1). When training with this initialization, the gradient norm drops further, moving parameters even closer to the linear minimizer. The final training result still yields positive activations for the entire training dataset.
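A minimal sketch of this initialization is shown below: starting from a trained linear classifier with weights `W` and bias `b`, a small MLP with one hidden layer is set up as in Lemma 1 so that every hidden pre-activation stays positive on the training data. Layer sizes, variable names, and the offset computation are our own illustrative choices rather than the exact experimental code.

```python
import torch
import torch.nn as nn

def linear_to_mlp_init(W, b, X_train, hidden=512):
    """Initialize an MLP so that it matches the linear classifier x -> Wx + b on X_train."""
    n_out, n_in = W.shape  # assumes hidden >= n_out
    mlp = nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(),
                        nn.Linear(hidden, n_out))
    with torch.no_grad():
        # First layer: copy W into the top rows, zero elsewhere, shift by a large bias c.
        mlp[0].weight.zero_()
        mlp[0].weight[:n_out] = W
        c = (X_train @ W.t()).abs().max() + 1.0
        mlp[0].bias.fill_(c)
        # Last layer: identity on the first n_out hidden units, then undo the shift.
        mlp[2].weight.zero_()
        mlp[2].weight[:, :n_out] = torch.eye(n_out)
        mlp[2].bias.copy_(b - c * mlp[2].weight.sum(dim=1))
    return mlp
```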
Moreover, any isolated local minimum of a linear network results in many local minima of an MLP $F_{\phi'}$, as the weights $\phi'$ constructed in the proof of Theorem 1 can undergo transformations such as scaling, permutation, or even rotation without changing $F_{\phi'}$ as a function during inference, i.e. $F_{\phi'}(x) = F_\phi(x)$ for all $x$ for an infinite set of parameters $\phi$, as soon as $F$ has at least one hidden layer.
While our first experiment initializes a deep MLP at a local minimum it inherited from a linear one to empirically illustrate our findings of Theorem 1, Table 1 also illustrates that similarly bad local minima are obtained when choosing large biases (third row) and choosing biases with large variance (fourth row) as conjectured above. To significantly reduce the bias, however, and still obtain a subpar optimum, we need to rerun the experiment with SGD without momentum, as shown in the last row, reflecting common intuition that momentum is helpful to move away from bad local optima.
Remark 2.2 (Sharpness of sub-optimal local optima). An interesting additional property of minima found using the previously discussed initializations is that they are “sharp”. Proponents of the sharp-flat hypothesis for generalization have found that minimizers with poor generalization live in sharp attracting basins with low volume and thus low probability in parameter space (Keskar et al., 2016; Huang et al., 2019), although care has to be taken to correctly measure sharpness (Dinh et al., 2017). Accordingly, we find that the maximum eigenvalue of the Hessian at each suboptimal local minimum is significantly higher than those at near-global minima. For example, the maximum eigenvalue of the initialization by Lemma 1 in Table 1 is estimated as 113,598.85 after training, whereas that of the default initialization is only around 24.01. While our analysis has focused on sub-par local optima in training instead of global minima with sub-par generalization, both the scarcity of local optima during normal training and the favorable generalization properties of neural networks seem to correlate with their sharpness.
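The maximum Hessian eigenvalues reported above can be estimated without forming the Hessian, e.g., by power iteration on Hessian-vector products; the sketch below is a minimal illustration of that idea (our own simplification, assuming the full loss fits in one forward pass) rather than the exact procedure used for the numbers in the text.

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=50):
    """Estimate the largest-magnitude Hessian eigenvalue via power iteration on HVPs."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    vnorm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
    v = [vi / vnorm for vi in v]
    eig = 0.0
    for _ in range(iters):
        # Hessian-vector product: differentiate <grad, v> with respect to the parameters.
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        eig = sum((h * vi).sum() for h, vi in zip(hv, v)).item()  # Rayleigh quotient
        hnorm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / (hnorm + 1e-12) for h in hv]
    return eig
```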
In light of our finding that neural networks trained with unconventional initialization reach suboptimal local minima, we conclude that poor local minima can readily be found with a poor choice of hyperparameters. Suboptimal minima are less scarce than previously believed, and neural networks avoid these because good initializations and stochastic optimizers have been fine-tuned over time. Fortunately, promising theoretical directions may explain good optimization performance while remaining compatible with empirical observations. The approach followed by Du et al. (2019) analyzes the loss trajectory of SGD, showing that it avoids bad minima. While this work assumes (unrealistically) large network widths, this theoretical direction is compatible with empirical studies, such as Goodfellow et al. (2014), showing that the training trajectory of realistic deep networks does not encounter significant local minima.
3 WEIGHT DECAY: ARE SMALL $\ell_2$-NORM SOLUTIONS BETTER?
Classical learning theory advocates regularization for linear models, such as SVM and linear regression. For SVM, `2 regularization endows linear classifiers with a wide-margin property (Cortes & Vapnik, 1995), and recent work on neural networks has shown that minimum norm neural network interpolators benefit from over-parametrization (Hastie et al., 2019) . Following the long history of explicit parameter norm regularization for linear models, weight decay is used for training nearly all high performance neural networks (He et al., 2015a; Chollet, 2016; Huang et al., 2017; Sandler et al., 2018).
In combination with weight decay, all of these cutting-edge architectures also employ batch normalization after convolutional layers (Ioffe & Szegedy, 2015). With that in mind, van Laarhoven (2017) shows that the regularizing effect of weight decay is counteracted by batch normalization, which removes the effect of shrinking weight matrices. Zhang et al. (2018) argue that the synergistic interaction between weight decay and batch norm arises because weight decay plays a large role in regulating the effective learning rate of networks, since scaling down the weights of convolutional layers amplifies the effect of each optimization step, effectively increasing the learning rate. Thus, weight decay increases the effective learning rate as the regularizer drags the parameters closer and closer towards the origin. The authors also suggest that data augmentation and carefully chosen learning rate schedules are more powerful than explicit regularizers like weight decay.
Other work echos this sentiment and claims that weight decay and dropout have little effect on performance, especially when using data augmentation (Hernández-Garcı́a & König, 2018). Hoffer et al. (2018) further study the relationship between weight decay and batch normalization, and they develop normalization with respect to other norms. Shah et al. (2018) instead suggest that minimum norm solutions may not generalize well in the over-parametrized setting.
We find that the difference between performance of standard network architectures with and without weight decay is often statistically significant, even with a high level of data augmentation, for example, horizontal flips and random crops on CIFAR-10 (see Tables 2 and 3). But is weight decay the most effective form of `2 regularization? Furthermore, is the positive effect of weight decay because the regularizer promotes small norm solutions? We generalize weight decay by biasing the `2 norm of the weight vector towards other values using the following regularizer, which we call norm-bias:
Rµ(φ) = ∣∣∣∣∣ ( P∑ i=1 φ2i ) − µ2 ∣∣∣∣∣ . (2) R0 is equivalent to weight decay, but we find that we can further improve performance by biasing the weights towards higher norms (see Tables 2 and 3). In our experiments on CIFAR-10 and CIFAR-100, networks are trained using weight decay coefficients from their respective original papers. ResNet-18 and DenseNet are trained with µ2 = 2500 and norm-bias coefficient 0.005, and MobileNetV2 is trained with µ2 = 5000 and norm-bias coefficient 0.001. µ is chosen heuristically by first training a model with weight decay, recording the norm of the resulting parameter vector, and setting µ to be slightly higher than that norm in order to avoid norm-bias leading to a lower parameter norm than weight decay. While we find that weight decay improves results over a nonregularized baseline for all three models, we also find that models trained with large norm bias (i.e., large µ) outperform models trained with weight decay.
These results lend weight to the argument that explicit parameter norm regularization is in fact useful for training networks, even deep CNNs with batch normalization and data augmentation. However, the fact that norm-biased networks can outperform networks trained with weight decay suggests that any benefits of weight decay are unlikely to originate from the superiority of small-norm solutions.
To further investigate the effect of weight decay and parameter norm on generalization, we also consider models without batch norm. In this case, weight decay directly penalizes the norm of the linear operators inside a network, since there are no batch norm coefficients to compensate for the effect of shrinking weights. Our goal is to determine whether small-norm solutions are superior in this setting where the norm of the parameter vector is more meaningful.
In our first experiment without batch norm, we experience improved performance training an MLP with norm-bias (see Table 3). In a state-of-the-art setting, we consider ResNet-20 with Fixup initialization, a ResNet variant that removes batch norm and instead uses a sophisticated initialization
that solves the exploding gradient problem (Zhang et al., 2019). We observe that weight decay substantially improves training over SGD with no explicit regularization — in fact, ResNets with this initialization scheme train quite poorly without explicit regularization and data normalization. Still, we find that norm-bias with µ2 = 1000 and norm-bias coefficient 0.0005 achieves better results than weight decay (see Table 3). This once again refutes the theory that small-norm parameters generalize better and brings into doubt any relationship between classical Tikhonov regularization and weight decay in neural networks. See Appendix A.5 for a discussion concerning the final parameter norms of Fixup networks as well as additional experiments on CIFAR-100, a harder image classification dataset.
4 KERNEL THEORY AND THE INFINITE-WIDTH LIMIT
In light of the recent surge of works discussing the properties of neural networks in the infinitewidth limit, in particular, connections between infinite-width deep neural networks and Gaussian processes, see Lee et al. (2017), several interesting theoretical works have appeared. The wide network limit and Gaussian process interpretations have inspired work on the neural tangent kernel (Jacot et al., 2018), while Lee et al. (2019) and Bietti et al. (2018) have used wide network assumptions to analyze the training dynamics of deep networks. The connection of deep neural networks to kernel-based learning theory seems promising, but how closely do current architectures match the predictions made for simple networks in the large-width limit?
We focus on the Neural Tangent Kernel (NTK), developed in Jacot et al. (2018). Theory dictates that, in the wide-network limit, the neural tangent kernel remains nearly constant as a network trains. Furthermore, neural network training dynamics can be described as gradient descent on a convex functional, provided the NTK remains nearly constant during training (Lee et al., 2019). In this section, we experimentally test the validity of these theoretical assumptions.
Fixing a network architecture, we use F to denote the function space parametrized by φ ∈ Rp. For the mapping F : RP → F , the NTK is defined by
Φ(φ) = P∑ p=1 ∂φpF (φ)⊗ ∂φpF (φ), (3)
where the derivatives ∂φpF (φ) are evaluated at a particular choice of φ describing a neural network. The NTK can be thought of as a similarity measure between images; given any two images as input, the NTK returns an n × n matrix, where n is the dimensionality of the feature embedding of the neural network. We sample entries from the NTK by drawing a set ofN images {xi} from a dataset,
and computing the entries in the NTK corresponding to all pairs of images in our image set. We do this for a random neural network f : Rm → Rn and computing the tensor Φ(φ) ∈ RN×N×n×n of all pairwise realizations, restricted to the given data:
Φ(φ)ijkl = P∑ p=1 ∂φpf(xi, φ)k · ∂φpf(xj , φ)l (4)
By evaluating Equation 4 using automatic differentiation, we compute slices from the NTK before and after training for a large range of architectures and network widths. We consider image classification on CIFAR-10 and compare a two-layer MLP, a four-layer MLP, a simple 5-layer ConvNet, and a ResNet. We draw 25 random images from CIFAR-10 to sample the NTK before and after training. We measure the change in the NTK by computing the correlation coefficient of the (vectorized) NTK before and after training. We do this for many network widths, and see what happens in the wide network limit. For MLPs we increase the width of the hidden layers, for the ConvNet (6-Layer, Convolutions, ReLU, MaxPooling), we increase the number of convolutional filters, for the ResNet we consider the WideResnet (Zagoruyko & Komodakis, 2016) architecture, where we increase its width parameter. We initialize all models with uniform He initialization as discussed in He et al. (2015b), departing from specific Gaussian initializations in theoretical works to analyze the effects for modern architectures and methodologies.
The results are visualized in Figure 1, where we plot parameters of the NTK for these different architectures, showing how the number of parameters impacts the relative change in the NTK (||Φ1 − Φ0||/||Φ0||, where Φ0/Φ1 denotes the sub-sampled NTK before/after training) and correlation coefficient (Cov(Φ1,Φ0)/σ(Φ1)/σ(Φ0)). Jacot et al. (2018) predicts that the NTK should change very little during training in the infinite-width limit.
At first glance, it might seem that these expectations are hardly met for our (non-infinite) experiments. Figure 1a and Figure 1c show that the relative change in the NTK during training (and also
the magnitude of the NTK) is rapidly increasing with width and remains large in magnitude for a whole range of widths of convolutional architectures. The MLP architectures do show a trend toward small changes in the NTK, yet convergence to zero is slower in the 4-Layer case than in the 2-Layer case.
However, a closer look shows that almost all of the relative change in the NTK seen in Figure 1c is explained by a simple linear re-scaling of the NTK. It should be noted that the scaling of the NTK is strongly effected by the magnitude of parameters at initialization. Within the NTK theory of Lee et al. (2017), a linear rescaling of the NTK during training corresponds simply to a change in learning rate, and so it makes more sense to measure similarity using a scale-invariant metric.
Measuring similarity between sub-sampled NTKs using the scale-invariant correlation coefficient, as in Figure 1b, is more promising. Surprisingly, we find that, as predicted in Jacot et al. (2018), the NTK changes very little (beyond a linear rescaling) for the wide ConvNet architectures. For the dense networks, the predicted trend toward small changes in the NTK also holds for most of the evaluated widths, although there is a dropoff at the end which may be an artifact of the difficulty of training these wide networks on CIFAR-10. For the Wide Residual Neural Networks, however, the general trend toward higher correlation in the wide network limit is completely reversed. The correlation coefficient decreases as network width increases, suggesting that the neural tangent kernel at initialization and after training becomes qualitatively more different as network width increases. The reversal of the correlation trend seems to be a property which emerges from the interaction of batch normalization and skip connections. Removing either of these features from the architecture leads to networks which have an almost constant correlation coefficient for a wide range of network widths, see Figure 6 in the appendix, calling for the consideration of both properties in new formulations of the NTK.
In conclusion, we see that although the NTK trends towards stability as the width of simple architectures increases, the opposite holds for the highly performant Wide ResNet architecture. Even further, neither the removal of batch normalization or the removal of skip connections fully recover the positive NTK trend. While we have hope that kernel-based theories of neural networks may yield guarantees for realistic (albeit wide) models in the future, current results do not sufficiently describe state-of-the-art architectures. Moreover, the already good behavior of models with unstable NTKs is an indicator that good optimization and generalization behaviors do not fundamentally hinge on the stability of the NTK.
5 RANK: DO NETWORKS WITH LOW-RANK LAYERS GENERALIZE BETTER?
State-of-the-art neural networks are highly over-parameterized, and their large number of parameters is a problem both for learning theory and for practical use. In the theoretical setting, rank has been used to tighten bounds on the generalization gap of neural networks. Generalization bounds from Harvey et al. (2017) are improved under conditions of low rank and high sparsity (Neyshabur et al., 2017) of parameter matrices, and the compressibility of low-rank matrices (and other lowdimensional structure) can be directly exploited to provide even stronger bounds (Arora et al., 2018). Further studies show a tendency of stochastic gradient methods to find low-rank solutions (Ji & Telgarsky, 2018). The tendency of SGD to find low-rank operators, in conjunction with results showing generalization bounds for low-rank operators, might suggest that the low-rank nature of these operators is important for generalization.
Langenberg et al. (2019) claim that low-rank networks, in addition to generalizing well to test data, are more robust to adversarial attacks. Theoretical and empirical results from the aforementioned paper lead the authors to make two major claims. First, the authors claim that networks which undergo adversarial training have low-rank and sparse matrices. Second, they claim that networks with low-rank and sparse parameter matrices are more robust to adversarial attacks. We find in our experiments that neither claim holds up in practical settings, including ResNet-18 models trained on CIFAR-10.
We test the generalization and robustness properties of neural networks with low-rank and highrank operators by promoting low-rank or high-rank parameter matrices in late epochs. We employ the regularizer introduced in Sedghi et al. (2018) to create the protocols RankMin, to find low-rank parameters, and RankMax, to find high-rank parameters. RankMin involves fine-tuning a pre-trained
model by replacing linear operators with their low-rank approximations, retraining, and repeating this process. Similarly, RankMax involves fine-tuning a pre-trained model by clipping singular values from the SVD of parameter matrices in order to find high-rank approximations. We are able to manipulate the rank of matrices without strongly affecting the performance of the network. We use both natural training and 7-step projected gradient descent (PGD) adversarial training routines (Madry et al., 2017). The goal of the experiment is to observe how the rank of weight matrices impacts generalization and robustness. We start by attacking naturally trained models with the standard PGD adversarial attack with = 8/255. Then, we move to the adversarial training setting and test the effect of manipulating rank on generalization and on robustness.
In order to compare our results with Langenberg et al. (2019), we borrow the notion of effective rank, denoted by r(W ) for some matrixW . This continuous relaxation of rank is defined as follows. r(W ) = ‖W‖∗‖W‖F where ‖ · ‖∗, ‖ · ‖1, and ‖ · ‖F are the nuclear norm, the 1-norm, and the Frobenius norm, respectively. Note that the singular values of convolution operators can be found quickly with a method from Sedghi et al. (2018), and that method is used here.
In our experiments we investigate two architectures, ResNet-18 and ResNet-18 without skip connections. We train on CIFAR-10 and CIFAR-100, both naturally and adversarially. Table 4 shows that RankMin and RankMax achieve similar generalization on CIFAR-10. More importantly, when adversarially training, a setting when robustness is undeniably the goal, we see the RankMax outperforms both RankMin and standard adversarial training in robust accuracy. Figure 2 confirms that
these two training routines do, in fact, control effective rank. Experiments with CIFAR-100 yield similar results and are presented in Appendix A.7. It is clear that increasing rank using an analogue of rank-minimizing algorithms does not harm performance. Moreover, we observe that adversarial robustness does not imply low-rank operators, nor do low-rank operators imply robustness. The findings in Ji & Telgarsky (2018) are corroborated here, as the black dots in Figure 2 show that initializations are higher in rank than the trained models. Overall, our investigation indicates that, contrary to the intuition suggested by theoretical work on the rank of CNNs and by the adversarial robustness claims above, rank plays little to no role in the performance of CNNs in the practical setting of image classification.
6 CONCLUSION
This work highlights the gap between deep learning theory and observations in the real-world setting. We underscore the need to carefully examine the assumptions of theory and to move past the study of toy models, such as deep linear networks or single-layer MLPs, whose traits do not describe those of the practical realm. First, we show that realistic neural networks on realistic learning problems contain suboptimal local minima. Second, we show that low-norm parameters may not be optimal for neural networks, and in fact, biasing parameters to a non-zero norm during training improves performance on several popular datasets and a wide range of networks. Third, we show that the wide-network trends in the neural tangent kernel do not hold for ResNets and that the interaction between skip connections and batch normalization plays a large role. Finally, we show that low-rank linear operators and robustness are not correlated, especially for adversarially trained models.
ACKNOWLEDGMENTS
This work was supported by the AFOSR MURI Program, the National Science Foundation DMS directorate, and also the DARPA YFA and L2M programs. Additional funding was provided by the Sloan Foundation.
A APPENDIX
A.1 PROOF OF LEMMA 1
Lemma 1. Consider a family of L-layer multilayer perceptrons with ReLU activations {Fφ : Rm → Rn} and let s = mini ni be the minimum layer width. Then this family has rank-s affine expression.
Proof. Let G be a rank-s affine function, and Ω ⊂ Rm be a finite set. Let G(x) = Ax + b with A = UΣV being the singular value decomposition of A with U ∈ Rn×s and V ∈ Rs×m. We define
A1 = [ΣV ; 0], where 0 is a (possibly void) (n1 − s) × m matrix of all zeros, and b1 = c·1 for c = max{xi∈Ω, 1≤j≤n1} |(A1xi)j| + 1 and 1 ∈ Rn1 being a vector of all ones. We further choose Al ∈ Rnl×nl−1 to have an s × s identity matrix in the upper left, and fill all other entries with zeros. This choice is possible since nl ≥ s for all l. We define bl = [0 c·1]^T ∈ Rnl, where 0 ∈ R1×s is a vector of all zeros and 1 ∈ R1×(nl−s) is a (possibly void) vector of all ones. Finally, we choose AL = [U 0], where now 0 is a (possibly void) n × (nL−1 − s) matrix of all zeros, and bL = −cAL1 + b for 1 ∈ RnL−1 being a vector of all ones. Then one readily checks that Fφ(x) = G(x) holds for all x ∈ Ω. Note that all entries of all activations are greater than or equal to c > 0, such that no ReLU ever maps an entry to zero.
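To make this construction concrete, the NumPy sketch below builds the layer matrices explicitly for a small example and checks that the resulting ReLU MLP reproduces G on a finite sample Ω. The depth, widths, and variable names are choices made for this illustration, not part of the paper.

```python
import numpy as np

def mlp_from_affine(A, b, X, widths):
    """Build ReLU-MLP layers (A_l, b_l), following the proof above, so that the
    network equals G(x) = A x + b on every row of X. Each width must be at least
    s = min(A.shape), enough to carry the SVD factors of A."""
    n, m = A.shape
    U, sing, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(sing) Vt
    s = sing.size
    assert all(w >= s for w in widths)

    A1 = np.zeros((widths[0], m))
    A1[:s] = np.diag(sing) @ Vt                            # A_1 = [Sigma V ; 0]
    c = np.abs(X @ A1.T).max() + 1.0                       # keeps all pre-activations positive
    layers = [(A1, c * np.ones(widths[0]))]                # b_1 = c * 1

    prev = widths[0]
    for w in widths[1:]:                                   # middle layers: identity block up left
        Al = np.zeros((w, prev))
        Al[:s, :s] = np.eye(s)
        layers.append((Al, np.concatenate([np.zeros(s), c * np.ones(w - s)])))
        prev = w

    AL = np.zeros((n, prev))                               # A_L = [U 0]
    AL[:, :s] = U
    layers.append((AL, -c * AL @ np.ones(prev) + b))       # b_L = -c A_L 1 + b
    return layers

def mlp_forward(layers, x):
    for Al, bl in layers[:-1]:
        x = np.maximum(Al @ x + bl, 0.0)                   # ReLU; by construction it never zeroes an entry on X
    AL, bL = layers[-1]
    return AL @ x + bL

rng = np.random.default_rng(0)
A, b = rng.standard_normal((3, 5)), rng.standard_normal(3)
X = rng.standard_normal((10, 5))                           # the finite set Omega
net = mlp_from_affine(A, b, X, widths=[8, 8])
print(max(np.linalg.norm(mlp_forward(net, x) - (A @ x + b)) for x in X))   # ~0 up to float error
```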
A.2 PROOF OF THEOREM 1
Theorem 1. Consider a training set, {(xi, yi)}Ni=1, a family {Fφ} of MLPs with s = mini ni being the smallest width. Consider the training of a rank-s linear classifier GA,b, i.e.,
min A,b L(GA,b; {(xi, yi)}Ni=1), subject to rank(A) ≤ s, (5)
for any continuous loss function L. Then for each local minimum, (A′,b′), of the above training problem, there exists a local minimum, φ′, of L(Fφ; {(xi, yi)}Ni=1) with the property that Fφ′(xi) = GA′,b′(xi) for i = 1, 2, ..., N .
Proof. Based on the definition of a local minimum, there exists an open ball D around (A′,b′) such that
L(GA′,b′ ; {(xi, yi)}Ni=1) ≤ L(GA,b; {(xi, yi)}Ni=1) ∀(A,b) ∈ D with rank(A) ≤ s. (6)
First, we use the same construction as in the proof of Lemma 1 to find a function Fφ′ with Fφ′(xi) = GA′,b′(xi) for all training examples xi. Because the mapping φ 7→ Fφ(xi) is continuous (not only for the entire network F but also for all subnetworks), and because all activations of Fφ′ are greater than or equal to c > 0, there exists an open ball B(φ′, δ1) around φ′ such that the activations of Fφ remain positive for all xi and all φ ∈ B(φ′, δ1). Consequently, the restriction of Fφ to the training set remains affine linear for φ ∈ B(φ′, δ1). In other words, for any φ ∈ B(φ′, δ1) we can write
Fφ(xi) = A(φ)xi + b(φ) ∀xi, by defining A(φ) = ALAL−1 . . . A1 and b(φ) = ∑L l=1ALAL−1 . . . Al+1bl. Note that due to s = mini ni, the resulting A(φ) satisfies rank(A(φ)) ≤ s. After restricting φ to an open ball B(φ′, δ2), for δ2 ≤ δ1 sufficiently small, the above (A(φ),b(φ)) satisfy (A(φ),b(φ)) ∈ D for all φ ∈ B(φ′, δ2). On this set, we, however, already know that the loss can only be greater or equal to L(Fφ′ ; {(xi, yi)}Ni=1) due to equation 6. Thus, φ′ is a local minimum of the underlying loss function.
A.3 ADDITIONAL COMMENTS REGARDING THEOREM 1
Note that our theoretical and experimental results do not contradict theoretical guarantees for deep linear networks (Kawaguchi, 2016; Laurent & Brecht, 2018), which show that all local minima are global. A deep linear network with s = min(n,m) is equivalent to a linear classifier, and in this case, the local minima constructed by Theorem 1 are global. However, this observation shows that Theorem 1 characterizes the gap between deep linear and deep nonlinear networks; the global minima predicted by linear network theories are inherited as (usually suboptimal) local minima when ReLUs are added. Thus, linear networks do not accurately describe the distribution of minima in non-linear networks.
A.4 ADDITIONAL RESULTS FOR SUBOPTIMAL LOCAL OPTIMA
Table 5 shows additional experiments. As in the previous experiment, we use gradient descent to train a full ResNet-18 architecture on CIFAR-10 until convergence from different initializations. We find that essentially the same results appear for the deeper architecture: initializing with very high bias leads to highly non-optimal solutions, in this case even solutions that are as bad as those obtained from a zero-norm initialization.
Further results on CIFAR-100 are shown in Tables 6 and 7. These experiments with MLP and ResNet-18 show the same trends as explained above, thus confirming that the results are not specific to the CIFAR-10 dataset.
A.5 DETAILS CONCERNING LOW-NORM REGULARIZATION EXPERIMENTS
Our experiments comparing regularizers all run for 300 epochs with an initial learning rate of 0.1, which decreases by a factor of 10 at epochs 100, 175, 225, and 275. We use the SGD optimizer with momentum 0.9.
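In PyTorch terms, this schedule corresponds to roughly the following setup. The placeholder model and the weight-decay value are illustrative; only the learning-rate schedule, momentum, and milestones are taken from the description above.

```python
import torch
import torch.nn as nn

model = nn.Linear(3 * 32 * 32, 10)  # placeholder; any of the architectures above would go here
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4)  # or 0 when the norm-bias term is used instead
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100, 175, 225, 275], gamma=0.1)

for epoch in range(300):
    # ... one epoch of training (forward, loss, backward, optimizer.step()) goes here ...
    scheduler.step()
```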
We also tried negative weight decay coefficients, which leads to ResNet-18 CIFAR-10 performance above 90% while blowing up parameter norm, but this performance is still suboptimal and is not informative concerning the optimality of minimum norm solutions. One might wonder if high norm-bias coefficients lead to even lower parameter norm than low weight decay coefficients. This
question may not be meaningful in the case of networks with batch normalization. In the case of ResNet-20 with Fixup, which does not contain running mean and standard deviation, the average parameter `2 norm after training with weight decay is 24.51, while that of models trained with norm-bias is 31.62. Below, we perform the same tests on CIFAR-100, a substantially more difficult dataset. Weight decay coefficients are chosen to be the ones used in the original paper for the corresponding architecture. The norm-bias µ2/coefficient is chosen to be 8100/0.005, 7500/0.001, and 2000/0.0005 for ResNet-18, DenseNet-40, and ResNet-20 with Fixup, respectively, using the same heuristic as described in the main body.
A.6 DETAILS ON THE NEURAL TANGENT KERNEL EXPERIMENT
For further reference, we include details on the NTK sampling during training epochs in Figure 3. We see that the parameter norm (Right) behaves normally (all of these experiments are trained with a standard weight decay parameter of 0.0005), yet the NTK norm (Left) rapidly increases. Most of this increase, however, is a rescaling of the kernel, as the correlation plot (Middle) is much less drastic. We do see that most of the change happens in the very first epochs of training, whereas the kernel only changes slowly later on.
A.7 DETAILS ON RANKMIN AND RANKMAX
We employ routines to promote both low-rank and high-rank parameter matrices. We do this by computing approximations to the linear operators at each layer. Since convolutional layers are linear operations, we know that there is a matrix whose dimensions are the number of parameters in the input to the convolution and the number of parameters in the output of the convolution. In order to compute low-rank approximations of these operators, one could write down the matrix corresponding to the convolution, and then compute a low-rank approximation using a singular value decomposition (SVD). In order to make this problem computationally tractable, we used the method for computing singular values of convolution operators derived in Sedghi et al. (2018). We were then able to do low-rank approximation in the classical sense, by setting each singular value below some threshold to zero. In order to compute high-rank operators, we clipped the singular values so that when multiplying the SVD factors, we set each singular value to be equal to the minimum of some chosen constant and the true singular value. It is important to note here that these approximations to the convolutional layers, when done naively, can return convolutions with larger filters. To be precise, an n × n filter will map to a k × k filter through our rank modifications, where k ≥ n. We follow the method in Sedghi et al. (2018), where these filters are pruned back down by only using n × n entries in the output. When naturally training ResNet-18 and Skipless ResNet-18 models, we train with a batch size of 128 for 200 epochs with the learning rate initialized to 0.01 and decreasing by a factor of 10 at epochs 100, 150, 175, and 190 (for both CIFAR-10 and CIFAR-100). When adversarially training these two models on CIFAR-10 data, we use the same hyperparameters. However, in order to adversarially train on CIFAR-100, we train ResNet-18 with a batch size of 256 for 300 epochs with an initial learning rate of 0.1 and a decrease by a factor of 10 at epochs 200 and 250. For adversarially training Skipless ResNet-18 on CIFAR-100, we use a batch size of 256 for 350 epochs with an
initial learning rate of 0.1 and a decrease by a factor of 10 at epochs 200, 250, and 300. Adversarial training is done with an `∞ 7-step PGD attack with a step size of 2/255 and ε = 8/255. For all of the training described above we augment the data with random crops and horizontal flips.
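For reference, a generic `∞ PGD attack of the kind used here can be sketched as follows. This is a standard textbook implementation rather than the authors' code, with ε, step size, and iteration count set to the values quoted above.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, step=2/255, iters=7):
    """l-infinity PGD: ascend the loss, projecting back into the eps-ball after each step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()  # random start
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project to ball and valid pixels
    return x_adv.detach()

# Adversarial training replaces each clean batch with pgd_linf(model, x, y) before the
# usual forward/backward pass; the 20-step evaluation attack described below uses the
# same routine with iters=20.
```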
During 15 additional epochs of training we manipulate the rank as follows. RankMin and RankMax protocols are employed periodically in the last 15 epochs taking care to make sure that the loss remains small. For these last epochs, the learning rate starts at 0.001 and decreases by a factor of 10 after the third and fifth epochs of the final 15 epochs. As shown in Table 10, we test the accuracy of each model on clean test data from the corresponding dataset, as well as on adversarial examples generated with 20-step PGD with ε = 8/255 (with step size equal to 2/255) and ε = 1/255 (with step size equal to .25/255).
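The rank manipulation itself is simple surgery on the singular values. A minimal dense-matrix version is sketched below; the thresholds are illustrative, and convolutional layers additionally require the Sedghi et al. (2018) construction and the filter pruning described above.

```python
import numpy as np

def rank_min(w: np.ndarray, thresh: float) -> np.ndarray:
    """Low-rank approximation: zero out every singular value below `thresh`."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return (u * np.where(s >= thresh, s, 0.0)) @ vt

def rank_max(w: np.ndarray, cap: float) -> np.ndarray:
    """High-rank surrogate: clip singular values at `cap`, flattening the spectrum
    and thereby raising the effective rank."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return (u * np.minimum(s, cap)) @ vt

w = np.random.default_rng(0).standard_normal((128, 128))
w_low, w_high = rank_min(w, thresh=5.0), rank_max(w, cap=5.0)
```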
When training multi-layer perceptrons on CIFAR-10, we train for 100 epochs with learning rate initialized to 0.01 and decreasing by a factor of 10 at epochs 60, 80 and 90. Then, we train the network for 8 additional epochs, during which RankMin and RankMax networks undergo rank manipulation. | 1. What are the main contributions of the paper to the field of deep learning?
2. What are the strengths of the paper, particularly in terms of its analytical and experimental insights?
3. Are there any concerns or limitations regarding the paper's findings or methodology?
4. How does the reviewer assess the significance and impact of the paper's discoveries on the development of deep neural networks?
5. What additional research or experiments would the reviewer like to see conducted to further validate or expand upon the paper's conclusions? | Review | Review
The authors seek to challenge some presumptions about training deep neural networks, such as the robustness of low rank linear layers and the existence of suboptimal local minima. They provide analytical insight as well as a few experiments.
I give this paper an accept. They analytically explore four relevant topics of deep learning, and provide experimental insight. In particular, they provide solid analytical reasoning behind their claims that suboptimal local minima exist and that their lack of prevalence is due to improvements in other aspects of deep networks, such as initialization and optimizers. In addition, they present a norm-bias regularizer generalization that consistently increases accuracy. I am especially pleased with this, as the results are averaged over several runs (a practice that seems to be not so widespread these days).
If I were to have one thing on my wish list for this paper, it would be the small issue of having a multi-run version of the local minima experiments (I understand why it is not all that necessary for the rank and stability experiments).
Nevertheless, I think this paper gives useful insight as to the behavior of deep neural networks that can help advance the field on a foundational level. |
ICLR | Title
Truth or backpropaganda? An empirical investigation of deep learning theory
Abstract
We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike. In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization plays a role; (4) find that rank does not correlate with generalization or robustness in a practical setting.
N/A
1 INTRODUCTION
Modern deep learning methods are descendent from such long-studied fields as statistical learning, optimization, and signal processing, all of which were built on mathematically rigorous foundations. In statistical learning, principled kernel methods have vastly improved the performance of SVMs and PCA (Suykens & Vandewalle, 1999; Schölkopf et al., 1997), and boosting theory has enabled weak learners to generate strong classifiers (Schapire, 1990). Optimizers in deep learning are borrowed from the field of convex optimization , where momentum optimizers (Nesterov, 1983) and conjugate gradient methods provably solve ill-conditioned problems with high efficiency (Hestenes & Stiefel, 1952). Deep learning harnesses foundational tools from these mature parent fields.
Despite its rigorous roots, deep learning has driven a wedge between theory and practice. Recent theoretical work has certainly made impressive strides towards understanding optimization and generalization in neural networks. But doing so has required researchers to make strong assumptions and study restricted model classes.
In this paper, we seek to understand whether deep learning theories accurately capture the behaviors and network properties that make realistic deep networks work. Following a line of previous work, such as Swirszcz et al. (2016), Zhang et al. (2016), Balduzzi et al. (2017) and Santurkar et al. (2018), we put the assumptions and conclusions of deep learning theory to the test using experiments with both toy networks and realistic ones. We focus on the following important theoretical issues:
∗Authors contributed equally.
• Local minima: Numerous theoretical works argue that all local minima of neural loss functions are globally optimal or that all local minima are nearly optimal. In practice, we find highly suboptimal local minima in realistic neural loss functions, and we discuss reasons why suboptimal local minima exist in the loss surfaces of deep neural networks in general.
• Weight decay and parameter norms: Research inspired by Tikhonov regularization suggests that low-norm minima generalize better, and for many, this is an intuitive justification for simple regularizers like weight decay. Yet for neural networks, it is not at all clear which form of `2-regularization is optimal. We show this by constructing a simple alternative: biasing solutions toward a non-zero norm still works and can even measurably improve performance for modern architectures.
• Neural tangent kernels and the wide-network limit: We investigate theoretical results concerning neural tangent kernels of realistic architectures. While stochastic sampling of the tangent kernels suggests that theoretical results on tangent kernels of multi-layer networks may apply to some multi-layer networks and basic convolutional architectures, the predictions from theory do not hold for practical networks, and the trend even reverses for ResNet architectures. We show that the combination of skip connections and batch normalization is critical for this trend in ResNets.
• Rank: Generalization theory has provided guarantees for the performance of low-rank networks. However, we find that regularization which encourages high-rank weight matrices often outperforms that which promotes low-rank matrices. This indicates that low-rank structure is not a significant force behind generalization in practical networks. We further investigate the adversarial robustness of low-rank networks, which are thought to be more resilient to attack, and we find empirically that their robustness is often lower than the baseline or even a purposefully constructed high-rank network.
2 LOCAL MINIMA IN LOSS LANDSCAPES: DO SUBOPTIMAL MINIMA EXIST?
It is generally accepted that “in practice, poor local minima are rarely a problem with large networks” (LeCun et al., 2015). However, exact theoretical guarantees for this statement are elusive. Various theoretical studies of local minima have investigated spin-glass models (Choromanska et al., 2014), deep linear models (Laurent & Brecht, 2018; Kawaguchi, 2016), parallel subnetworks (Haeffele & Vidal, 2017), and dense fully connected models (Nguyen et al., 2018) and have shown that either all local minima are global or all have a small optimality gap. The apparent scarcity of poor local minima has led practitioners to develop the intuition that bad local minima (“bad” meaning high loss value and suboptimal training performance) are practically non-existent.
To further muddy the waters, some theoretical works prove the existence of local minima. Such results exist for simple fully connected architectures (Swirszcz et al., 2016), single-layer networks (Liang et al., 2018; Yun et al., 2018), and two-layer ReLU networks (Safran & Shamir, 2017). For example, (Yun et al., 2019) show that local minima exist in single-layer networks with univariate output and unique datapoints. The crucial idea here is that all neurons are activated for all datapoints at the suboptimal local minima. Unfortunately, these existing analyses of neural loss landscapes require strong assumptions (e.g. random training data, linear activation functions, fully connected layers, or extremely wide network widths) — so strong, in fact, that it is reasonable to question whether these results have any bearing on practical neural networks or describe the underlying cause of good optimization performance in real-world settings.
In this section, we investigate the existence of suboptimal local minima from a theoretical perspective and an empirical one. If suboptimal local minima exist, they are certainly hard to find by standard methods (otherwise training would not work). Thus, we present simple theoretical results that inform us on how to construct non-trivial suboptimal local minima, concretely generalizing previous constructions, such as those by (Yun et al., 2019). Using experimental methods inspired by theory, we easily find suboptimal local minima in the loss landscapes of a range of classifiers.
Trivial local minima are easy to find in ReLU networks – consider the case where bias values are sufficiently low so that the ReLUs are “dead” (i.e. inputs to ReLUs are strictly negative). Such a point is trivially a local minimum. Below, we make a more subtle observation that multilayer perceptrons (MLPs) must have non-trivial local minima, provided there exists a linear classifer that
performs worse than the neural network (an assumption that holds for virtually any standard benchmark problem). Specifically, we show that MLP loss functions contain local minima where they behave identically to a linear classifier on the same data.
We now define a family of low-rank linear functions which represent an MLP. Let “rank-s affine function” denote an operator of the form G(x) = Ax + b with rank(A) = s. Definition 2.1. Consider a family of functions, {Fφ : Rm → Rn}φ∈RP parameterized by φ. We say this family has rank-s affine expression if for all rank-s affine functions G : Rm → Rn and finite subsets Ω ⊂ Rm, there exists φ with Fφ(x) = G(x), ∀x ∈ Ω. If s = min(n,m) we say that this family has full affine expression.
We investigate a family of L-layer MLPs with ReLU activation functions, {Fφ : Rm → Rn}φ∈Φ, and parameter vectors φ, i.e., φ = (A1,b1, A2,b2, . . . , AL,bL), Fφ(x) = HL(f(HL−1...f(H1(x)))), where f denotes the ReLU activation function and Hi(z) = Aiz + bi. Let Ai ∈ Rni×ni−1 , bi ∈ Rni with n0 = m and nL = n. Lemma 1. Consider a family of L-layer multilayer perceptrons with ReLU activations {Fφ : Rm → Rn}φ∈Φ, and let s = mini ni be the minimum layer width. Such a family has rank-s affine expression.
Proof. The idea of the proof is to use the singular value decomposition of any rank-s affine function to construct the MLP layers and pick a bias large enough for all activations to remain positive. See Appendix A.1.
The ability of MLPs to represent linear networks allows us to derive a theorem which implies that arbitrarily deep MLPs have local minima at which the performance of the underlying model on the training data is equal to that of a (potentially low-rank) linear model. In other words, neural networks inherit the local minima of elementary linear models. Theorem 1. Consider a training set, {(xi, yi)}Ni=1, a family {Fφ}φ of MLPs with s = mini ni being the smallest width. Consider a parameterized affine function GA,b solving
min A,b L(GA,b; {(xi, yi)}Ni=1), subject to rank(A) ≤ s, (1)
for a continuous loss function L. Then, for each local minimum, (A′,b′), of the above training problem, there exists a local minimum, φ′, of the MLP loss L(Fφ; {(xi, yi)}Ni=1) with the property that Fφ′(xi) = GA′,b′(xi) for i = 1, 2, ..., N .
Proof. See appendix A.2.
The proof of the above theorem constructs a network in which all activations of all training examples are positive, generalizing previous constructions of this type such as Yun et al. (2019) to more realistic architectures and settings. Another paper has employed a similar construction concurrently to our own work (He et al., 2020). We do expect that the general problem in expressivity occurs every time the support of the activations coincides for all training examples, as the latter reduces the deep network to an affine linear function (on the training set), which relates to the discussion in Balduzzi et al. (2017). We test this hypothesis below by initializing deep networks with biases of high variance. Remark 2.1 (CNN and more expressive local minima). Note that the above constructions of Lemma 1 and Theorem 1 are not limited to MLPs and could be extended to convolutional neural networks with suitably restricted linear mappings Gφ by using the convolution filters to represent identities and using the bias to avoid any negative activations on the training examples. Moreover, shallower MLPs can similarly be embedded into deeper MLPs recursively by replicating the behavior of each linear layer of the shallow MLP with several layers of the deep MLP. Linear classifiers, or even shallow MLPs, often have higher training loss than more expressive networks. Thus, we can use the idea of Theorem 1 to find various suboptimal local minima in the loss landscapes of neural networks. We confirm this with subsequent experiments.
We find that initializing a network at a point that approximately conforms to Theorem 1 is enough to get trapped in a bad local minimum. We verify this by training a linear classifier on CIFAR-10 with
weight decay (which has a test accuracy of 40.53%, loss of 1.57, and gradient norm of 0.00375 w.r.t. the logistic regression objective). We then initialize a multilayer network as described in Lemma 1 to approximate this linear classifier and recompute these statistics on the full network (see Table 1). When training with this initialization, the gradient norm drops further, moving parameters even closer to the linear minimizer. The final training result still yields positive activations for the entire training dataset.
Moreover, any isolated local minimum of a linear network results in many local minima of an MLP Fφ′ , as the weights φ′ constructed in the proof of Theorem 1 can undergo transformations such as scaling, permutation, or even rotation without changing Fφ′ as a function during inference, i.e. Fφ′(x) = Fφ(x) for all x for an infinite set of parameters φ, as soon as F has at least one hidden layer.
While our first experiment initializes a deep MLP at a local minimum it inherited from a linear one to empirically illustrate our findings of Theorem 1, Table 1 also illustrates that similarly bad local minima are obtained when choosing large biases (third row) and choosing biases with large variance (fourth row) as conjectured above. To significantly reduce the bias, however, and still obtain a subpar optimum, we need to rerun the experiment with SGD without momentum, as shown in the last row, reflecting common intuition that momentum is helpful to move away from bad local optima.
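The "large bias" and "high-variance bias" initializations referred to here amount to re-drawing only the bias vectors. A small helper of the following form reproduces both variants; the constants are placeholders rather than the exact values used in the tables.

```python
import torch.nn as nn

def reinit_biases(model: nn.Module, mean: float = 0.0, std: float = 0.0) -> None:
    """Re-initialize only the bias vectors of linear/convolutional layers.
    A large `mean` gives uniformly large biases; a large `std` gives high-variance biases."""
    for m in model.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)) and m.bias is not None:
            nn.init.normal_(m.bias, mean=mean, std=std)

# e.g. reinit_biases(model, mean=10.0)  or  reinit_biases(model, std=10.0)
```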
Remark 2.2 (Sharpness of sub-optimal local optima). An interesting additional property of minima found using the previously discussed initializations is that they are “sharp”. Proponents of the sharp-flat hypothesis for generalization have found that minimizers with poor generalization live in sharp attracting basins with low volume and thus low probability in parameter space (Keskar et al., 2016; Huang et al., 2019), although care has to be taken to correctly measure sharpness (Dinh et al., 2017). Accordingly, we find that the maximum eigenvalue of the Hessian at each suboptimal local minimum is significantly higher than those at near-global minima. For example, the maximum eigenvalue of the initialization by Lemma 1 in Table 1 is estimated as 113,598.85 after training, whereas that of the default initialization is only around 24.01. While our analysis has focused on sub-par local optima in training instead of global minima with sub-par generalization, both the scarcity of local optima during normal training and the favorable generalization properties of neural networks seem to correlate with their sharpness.
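The maximum Hessian eigenvalues quoted in this remark can be estimated without ever forming the Hessian, via power iteration on Hessian-vector products. The sketch below is a standard version of that procedure, not the authors' code.

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=50):
    """Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params` by power iteration."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = 0.0
    for _ in range(iters):
        norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
        v = [vi / norm for vi in v]
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))        # <grad, v>
        hv = torch.autograd.grad(gv, params, retain_graph=True)    # Hessian-vector product H v
        eig = sum((h * vi).sum() for h, vi in zip(hv, v)).item()   # Rayleigh quotient v^T H v
        v = [h.detach() for h in hv]
    return eig
```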
In light of our finding that neural networks trained with unconventional initialization reach suboptimal local minima, we conclude that poor local minima can readily be found with a poor choice of hyperparameters. Suboptimal minima are less scarce than previously believed, and neural networks avoid these because good initializations and stochastic optimizers have been fine-tuned over time. Fortunately, promising theoretical directions may explain good optimization performance while remaining compatible with empirical observations. The approach followed by Du et al. (2019) analyzes the loss trajectory of SGD, showing that it avoids bad minima. While this work assumes (unrealistically) large network widths, this theoretical direction is compatible with empirical studies, such as Goodfellow et al. (2014), showing that the training trajectory of realistic deep networks does not encounter significant local minima.
3 WEIGHT DECAY: ARE SMALL `2-NORM SOLUTIONS BETTER?
Classical learning theory advocates regularization for linear models, such as SVM and linear regression. For SVM, `2 regularization endows linear classifiers with a wide-margin property (Cortes & Vapnik, 1995), and recent work on neural networks has shown that minimum norm neural network interpolators benefit from over-parametrization (Hastie et al., 2019) . Following the long history of explicit parameter norm regularization for linear models, weight decay is used for training nearly all high performance neural networks (He et al., 2015a; Chollet, 2016; Huang et al., 2017; Sandler et al., 2018).
In combination with weight decay, all of these cutting-edge architectures also employ batch normalization after convolutional layers (Ioffe & Szegedy, 2015). With that in mind, van Laarhoven (2017) shows that the regularizing effect of weight decay is counteracted by batch normalization, which removes the effect of shrinking weight matrices. Zhang et al. (2018) argue that the synergistic interaction between weight decay and batch norm arises because weight decay plays a large role in regulating the effective learning rate of networks, since scaling down the weights of convolutional layers amplifies the effect of each optimization step, effectively increasing the learning rate. Thus, weight decay increases the effective learning rate as the regularizer drags the parameters closer and closer towards the origin. The authors also suggest that data augmentation and carefully chosen learning rate schedules are more powerful than explicit regularizers like weight decay.
Other work echoes this sentiment and claims that weight decay and dropout have little effect on performance, especially when using data augmentation (Hernández-García & König, 2018). Hoffer et al. (2018) further study the relationship between weight decay and batch normalization, and they develop normalization with respect to other norms. Shah et al. (2018) instead suggest that minimum norm solutions may not generalize well in the over-parametrized setting.
We find that the difference between performance of standard network architectures with and without weight decay is often statistically significant, even with a high level of data augmentation, for example, horizontal flips and random crops on CIFAR-10 (see Tables 2 and 3). But is weight decay the most effective form of `2 regularization? Furthermore, is the positive effect of weight decay because the regularizer promotes small norm solutions? We generalize weight decay by biasing the `2 norm of the weight vector towards other values using the following regularizer, which we call norm-bias:
Rµ(φ) = |(∑P i=1 φ2i) − µ2|. (2) R0 is equivalent to weight decay, but we find that we can further improve performance by biasing the weights towards higher norms (see Tables 2 and 3). In our experiments on CIFAR-10 and CIFAR-100, networks are trained using weight decay coefficients from their respective original papers. ResNet-18 and DenseNet are trained with µ2 = 2500 and norm-bias coefficient 0.005, and MobileNetV2 is trained with µ2 = 5000 and norm-bias coefficient 0.001. µ is chosen heuristically by first training a model with weight decay, recording the norm of the resulting parameter vector, and setting µ to be slightly higher than that norm in order to avoid norm-bias leading to a lower parameter norm than weight decay. While we find that weight decay improves results over a nonregularized baseline for all three models, we also find that models trained with large norm bias (i.e., large µ) outperform models trained with weight decay.
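Implemented directly, norm-bias is one extra term in the training loss. The sketch below uses PyTorch with the ResNet-18 values quoted above; the function name and structure are our own, not the authors'.

```python
import torch

def norm_bias_penalty(model: torch.nn.Module, mu_sq: float) -> torch.Tensor:
    """R_mu(phi) = | sum_i phi_i^2 - mu^2 |, summed over all trainable parameters."""
    sq_norm = sum((p ** 2).sum() for p in model.parameters() if p.requires_grad)
    return (sq_norm - mu_sq).abs()

# Inside the usual training loop (coefficient 0.005 and mu^2 = 2500, as for ResNet-18 above):
#   loss = criterion(model(x), y) + 0.005 * norm_bias_penalty(model, mu_sq=2500.0)
#   loss.backward(); optimizer.step()
```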
These results lend weight to the argument that explicit parameter norm regularization is in fact useful for training networks, even deep CNNs with batch normalization and data augmentation. However, the fact that norm-biased networks can outperform networks trained with weight decay suggests that any benefits of weight decay are unlikely to originate from the superiority of small-norm solutions.
To further investigate the effect of weight decay and parameter norm on generalization, we also consider models without batch norm. In this case, weight decay directly penalizes the norm of the linear operators inside a network, since there are no batch norm coefficients to compensate for the effect of shrinking weights. Our goal is to determine whether small-norm solutions are superior in this setting where the norm of the parameter vector is more meaningful.
In our first experiment without batch norm, we experience improved performance training an MLP with norm-bias (see Table 3). In a state-of-the-art setting, we consider ResNet-20 with Fixup initialization, a ResNet variant that removes batch norm and instead uses a sophisticated initialization
that solves the exploding gradient problem (Zhang et al., 2019). We observe that weight decay substantially improves training over SGD with no explicit regularization — in fact, ResNets with this initialization scheme train quite poorly without explicit regularization and data normalization. Still, we find that norm-bias with µ2 = 1000 and norm-bias coefficient 0.0005 achieves better results than weight decay (see Table 3). This once again refutes the theory that small-norm parameters generalize better and brings into doubt any relationship between classical Tikhonov regularization and weight decay in neural networks. See Appendix A.5 for a discussion concerning the final parameter norms of Fixup networks as well as additional experiments on CIFAR-100, a harder image classification dataset.
4 KERNEL THEORY AND THE INFINITE-WIDTH LIMIT
In light of the recent surge of works discussing the properties of neural networks in the infinite-width limit, in particular connections between infinite-width deep neural networks and Gaussian processes (see Lee et al., 2017), several interesting theoretical works have appeared. The wide-network limit and Gaussian process interpretations have inspired work on the neural tangent kernel (Jacot et al., 2018), while Lee et al. (2019) and Bietti et al. (2018) have used wide-network assumptions to analyze the training dynamics of deep networks. The connection of deep neural networks to kernel-based learning theory seems promising, but how closely do current architectures match the predictions made for simple networks in the large-width limit?
We focus on the Neural Tangent Kernel (NTK), developed in Jacot et al. (2018). Theory dictates that, in the wide-network limit, the neural tangent kernel remains nearly constant as a network trains. Furthermore, neural network training dynamics can be described as gradient descent on a convex functional, provided the NTK remains nearly constant during training (Lee et al., 2019). In this section, we experimentally test the validity of these theoretical assumptions.
Fixing a network architecture, we use F to denote the function space parametrized by φ ∈ Rp. For the mapping F : RP → F , the NTK is defined by
Φ(φ) = ∑P p=1 ∂φpF(φ) ⊗ ∂φpF(φ), (3)
where the derivatives ∂φpF (φ) are evaluated at a particular choice of φ describing a neural network. The NTK can be thought of as a similarity measure between images; given any two images as input, the NTK returns an n × n matrix, where n is the dimensionality of the feature embedding of the neural network. We sample entries from the NTK by drawing a set of N images {xi} from a dataset,
and computing the entries in the NTK corresponding to all pairs of images in our image set. We do this for a random neural network f : Rm → Rn and compute the tensor Φ(φ) ∈ RN×N×n×n of all pairwise realizations, restricted to the given data:
Φ(φ)ijkl = ∑P p=1 ∂φpf(xi, φ)k · ∂φpf(xj, φ)l (4)
By evaluating Equation 4 using automatic differentiation, we compute slices from the NTK before and after training for a large range of architectures and network widths. We consider image classification on CIFAR-10 and compare a two-layer MLP, a four-layer MLP, a simple 5-layer ConvNet, and a ResNet. We draw 25 random images from CIFAR-10 to sample the NTK before and after training. We measure the change in the NTK by computing the correlation coefficient of the (vectorized) NTK before and after training. We do this for many network widths, and see what happens in the wide network limit. For MLPs we increase the width of the hidden layers, for the ConvNet (6-Layer, Convolutions, ReLU, MaxPooling), we increase the number of convolutional filters, for the ResNet we consider the WideResnet (Zagoruyko & Komodakis, 2016) architecture, where we increase its width parameter. We initialize all models with uniform He initialization as discussed in He et al. (2015b), departing from specific Gaussian initializations in theoretical works to analyze the effects for modern architectures and methodologies.
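One way to realize Equation 4 with automatic differentiation is sketched below: for every pair of sampled inputs and output coordinates, take the dot product of parameter gradients. This is a naive reference implementation written for this summary (not the authors' code) and is only practical for the small N and output dimension used here.

```python
import torch

def sample_ntk(model, xs):
    """Return Phi[i, j, k, l] = sum_p  d f(x_i)_k / d phi_p  *  d f(x_j)_l / d phi_p."""
    params = [p for p in model.parameters() if p.requires_grad]
    rows = []
    for x in xs:
        out = model(x.unsqueeze(0)).squeeze(0)              # model should be in eval() mode
        per_coord = []
        for k in range(out.numel()):
            g = torch.autograd.grad(out[k], params, retain_graph=True, allow_unused=True)
            per_coord.append(torch.cat([torch.zeros_like(p).flatten() if gi is None else gi.flatten()
                                        for p, gi in zip(params, g)]))
        rows.append(torch.stack(per_coord))                  # shape (n_out, P)
    G = torch.stack(rows)                                     # shape (N, n_out, P)
    return torch.einsum('ikp,jlp->ijkl', G, G)                # pairwise dot products over parameters

def ntk_correlation(phi0, phi1):
    """Scale-invariant similarity between two sampled NTKs (Pearson correlation of entries)."""
    a, b = phi0.flatten(), phi1.flatten()
    a, b = a - a.mean(), b - b.mean()
    return (a @ b / (a.norm() * b.norm())).item()
```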
The results are visualized in Figure 1, where we plot parameters of the NTK for these different architectures, showing how the number of parameters impacts the relative change in the NTK (||Φ1 − Φ0||/||Φ0||, where Φ0/Φ1 denotes the sub-sampled NTK before/after training) and correlation coefficient (Cov(Φ1,Φ0)/σ(Φ1)/σ(Φ0)). Jacot et al. (2018) predicts that the NTK should change very little during training in the infinite-width limit.
At first glance, it might seem that these expectations are hardly met for our (non-infinite) experiments. Figure 1a and Figure 1c show that the relative change in the NTK during training (and also
the magnitude of the NTK) is rapidly increasing with width and remains large in magnitude for a whole range of widths of convolutional architectures. The MLP architectures do show a trend toward small changes in the NTK, yet convergence to zero is slower in the 4-Layer case than in the 2-Layer case.
However, a closer look shows that almost all of the relative change in the NTK seen in Figure 1c is explained by a simple linear re-scaling of the NTK. It should be noted that the scaling of the NTK is strongly affected by the magnitude of parameters at initialization. Within the NTK theory of Lee et al. (2017), a linear rescaling of the NTK during training corresponds simply to a change in learning rate, and so it makes more sense to measure similarity using a scale-invariant metric.
Measuring similarity between sub-sampled NTKs using the scale-invariant correlation coefficient, as in Figure 1b, is more promising. Surprisingly, we find that, as predicted in Jacot et al. (2018), the NTK changes very little (beyond a linear rescaling) for the wide ConvNet architectures. For the dense networks, the predicted trend toward small changes in the NTK also holds for most of the evaluated widths, although there is a dropoff at the end which may be an artifact of the difficulty of training these wide networks on CIFAR-10. For the Wide Residual Neural Networks, however, the general trend toward higher correlation in the wide network limit is completely reversed. The correlation coefficient decreases as network width increases, suggesting that the neural tangent kernel at initialization and after training becomes qualitatively more different as network width increases. The reversal of the correlation trend seems to be a property which emerges from the interaction of batch normalization and skip connections. Removing either of these features from the architecture leads to networks which have an almost constant correlation coefficient for a wide range of network widths, see Figure 6 in the appendix, calling for the consideration of both properties in new formulations of the NTK.
In conclusion, we see that although the NTK trends towards stability as the width of simple architectures increases, the opposite holds for the highly performant Wide ResNet architecture. Even further, neither the removal of batch normalization nor the removal of skip connections fully recovers the positive NTK trend. While we have hope that kernel-based theories of neural networks may yield guarantees for realistic (albeit wide) models in the future, current results do not sufficiently describe state-of-the-art architectures. Moreover, the already good behavior of models with unstable NTKs is an indicator that good optimization and generalization behaviors do not fundamentally hinge on the stability of the NTK.
5 RANK: DO NETWORKS WITH LOW-RANK LAYERS GENERALIZE BETTER?
State-of-the-art neural networks are highly over-parameterized, and their large number of parameters is a problem both for learning theory and for practical use. In the theoretical setting, rank has been used to tighten bounds on the generalization gap of neural networks. Generalization bounds from Harvey et al. (2017) are improved under conditions of low rank and high sparsity (Neyshabur et al., 2017) of parameter matrices, and the compressibility of low-rank matrices (and other lowdimensional structure) can be directly exploited to provide even stronger bounds (Arora et al., 2018). Further studies show a tendency of stochastic gradient methods to find low-rank solutions (Ji & Telgarsky, 2018). The tendency of SGD to find low-rank operators, in conjunction with results showing generalization bounds for low-rank operators, might suggest that the low-rank nature of these operators is important for generalization.
Langenberg et al. (2019) claim that low-rank networks, in addition to generalizing well to test data, are more robust to adversarial attacks. Theoretical and empirical results from the aforementioned paper lead the authors to make two major claims. First, the authors claim that networks which undergo adversarial training have low-rank and sparse matrices. Second, they claim that networks with low-rank and sparse parameter matrices are more robust to adversarial attacks. We find in our experiments that neither claim holds up in practical settings, including ResNet-18 models trained on CIFAR-10.
We test the generalization and robustness properties of neural networks with low-rank and high-rank operators by promoting low-rank or high-rank parameter matrices in late epochs. We employ the regularizer introduced in Sedghi et al. (2018) to create the protocols RankMin, to find low-rank parameters, and RankMax, to find high-rank parameters. RankMin involves fine-tuning a pre-trained
model by replacing linear operators with their low-rank approximations, retraining, and repeating this process. Similarly, RankMax involves fine-tuning a pre-trained model by clipping singular values from the SVD of parameter matrices in order to find high-rank approximations. We are able to manipulate the rank of matrices without strongly affecting the performance of the network. We use both natural training and 7-step projected gradient descent (PGD) adversarial training routines (Madry et al., 2017). The goal of the experiment is to observe how the rank of weight matrices impacts generalization and robustness. We start by attacking naturally trained models with the standard PGD adversarial attack with ε = 8/255. Then, we move to the adversarial training setting and test the effect of manipulating rank on generalization and on robustness.
In order to compare our results with Langenberg et al. (2019), we borrow the notion of effective rank, denoted by r(W) for a matrix W. This continuous relaxation of rank is defined as r(W) = ‖W‖∗/‖W‖F, where ‖ · ‖∗ and ‖ · ‖F are the nuclear norm and the Frobenius norm, respectively. Note that the singular values of convolution operators can be found quickly with a method from Sedghi et al. (2018), and that method is used here.
In our experiments we investigate two architectures, ResNet-18 and ResNet-18 without skip connections. We train on CIFAR-10 and CIFAR-100, both naturally and adversarially. Table 4 shows that RankMin and RankMax achieve similar generalization on CIFAR-10. More importantly, when adversarially training, a setting in which robustness is undeniably the goal, we see that RankMax outperforms both RankMin and standard adversarial training in robust accuracy. Figure 2 confirms that
these two training routines do, in fact, control effective rank. Experiments with CIFAR-100 yield similar results and are presented in Appendix A.7. It is clear that increasing rank using an analogue of rank-minimizing algorithms does not harm performance. Moreover, we observe that adversarial robustness does not imply low-rank operators, nor do low-rank operators imply robustness. The findings in Ji & Telgarsky (2018) are corroborated here, as the black dots in Figure 2 show that initializations are higher in rank than the trained models. Overall, our investigation indicates that, contrary to the intuition suggested by theoretical work on the rank of CNNs and by the adversarial robustness claims above, rank plays little to no role in the performance of CNNs in the practical setting of image classification.
6 CONCLUSION
This work highlights the gap between deep learning theory and observations in the real-world setting. We underscore the need to carefully examine the assumptions of theory and to move past the study of toy models, such as deep linear networks or single-layer MLPs, whose traits do not describe those of the practical realm. First, we show that realistic neural networks on realistic learning problems contain suboptimal local minima. Second, we show that low-norm parameters may not be optimal for neural networks, and in fact, biasing parameters to a non-zero norm during training improves performance on several popular datasets and a wide range of networks. Third, we show that the wide-network trends in the neural tangent kernel do not hold for ResNets and that the interaction between skip connections and batch normalization plays a large role. Finally, we show that low-rank linear operators and robustness are not correlated, especially for adversarially trained models.
ACKNOWLEDGMENTS
This work was supported by the AFOSR MURI Program, the National Science Foundation DMS directorate, and also the DARPA YFA and L2M programs. Additional funding was provided by the Sloan Foundation.
A APPENDIX
A.1 PROOF OF LEMMA 1
Lemma 1. Consider a family of L-layer multilayer perceptrons with ReLU activations {Fφ : Rm → Rn} and let s = mini ni be the minimum layer width. Then this family has rank-s affine expression.
Proof. Let G be a rank-s affine function, and Ω ⊂ Rm be a finite set. Let G(x) = Ax + b with A = UΣV being the singular value decomposition of A with U ∈ Rn×s and V ∈ Rs×m. We define
A1 = [ΣV ; 0], where 0 is a (possibly void) (n1 − s) × m matrix of all zeros, and b1 = c·1 for c = max{xi∈Ω, 1≤j≤n1} |(A1xi)j| + 1 and 1 ∈ Rn1 being a vector of all ones. We further choose Al ∈ Rnl×nl−1 to have an s × s identity matrix in the upper left, and fill all other entries with zeros. This choice is possible since nl ≥ s for all l. We define bl = [0 c·1]^T ∈ Rnl, where 0 ∈ R1×s is a vector of all zeros and 1 ∈ R1×(nl−s) is a (possibly void) vector of all ones. Finally, we choose AL = [U 0], where now 0 is a (possibly void) n × (nL−1 − s) matrix of all zeros, and bL = −cAL1 + b for 1 ∈ RnL−1 being a vector of all ones. Then one readily checks that Fφ(x) = G(x) holds for all x ∈ Ω. Note that all entries of all activations are greater than or equal to c > 0, such that no ReLU ever maps an entry to zero.
A.2 PROOF OF THEOREM 1
Theorem 1. Consider a training set, {(xi, yi)}Ni=1, a family {Fφ} of MLPs with s = mini ni being the smallest width. Consider the training of a rank-s linear classifier GA,b, i.e.,
min A,b L(GA,b; {(xi, yi)}Ni=1), subject to rank(A) ≤ s, (5)
for any continuous loss function L. Then for each local minimum, (A′,b′), of the above training problem, there exists a local minimum, φ′, of L(Fφ; {(xi, yi)}Ni=1) with the property that Fφ′(xi) = GA′,b′(xi) for i = 1, 2, ..., N .
Proof. Based on the definition of a local minimum, there exists an open ball D around (A′,b′) such that
L(GA′,b′ ; {(xi, yi)}Ni=1) ≤ L(GA,b; {(xi, yi)}Ni=1) ∀(A,b) ∈ D with rank(A) ≤ s. (6)
First, we use the same construction as in the proof of Lemma 1 to find a function Fφ′ with Fφ′(xi) = GA′,b′(xi) for all training examples xi. Because the mapping φ 7→ Fφ(xi) is continuous (not only for the entire network F but also for all subnetworks), and because all activations of Fφ′ are greater than or equal to c > 0, there exists an open ball B(φ′, δ1) around φ′ such that the activations of Fφ remain positive for all xi and all φ ∈ B(φ′, δ1). Consequently, the restriction of Fφ to the training set remains affine linear for φ ∈ B(φ′, δ1). In other words, for any φ ∈ B(φ′, δ1) we can write
Fφ(xi) = A(φ)xi + b(φ) ∀xi, by defining A(φ) = ALAL−1 . . . A1 and b(φ) = ∑L l=1ALAL−1 . . . Al+1bl. Note that due to s = mini ni, the resulting A(φ) satisfies rank(A(φ)) ≤ s. After restricting φ to an open ball B(φ′, δ2), for δ2 ≤ δ1 sufficiently small, the above (A(φ),b(φ)) satisfy (A(φ),b(φ)) ∈ D for all φ ∈ B(φ′, δ2). On this set, we, however, already know that the loss can only be greater or equal to L(Fφ′ ; {(xi, yi)}Ni=1) due to equation 6. Thus, φ′ is a local minimum of the underlying loss function.
A.3 ADDITIONAL COMMENTS REGARDING THEOREM 1
Note that our theoretical and experimental results do not contradict theoretical guarantees for deep linear networks (Kawaguchi, 2016; Laurent & Brecht, 2018), which show that all local minima are global. A deep linear network with s = min(n,m) is equivalent to a linear classifier, and in this case, the local minima constructed by Theorem 1 are global. However, this observation shows that Theorem 1 characterizes the gap between deep linear and deep nonlinear networks; the global minima predicted by linear network theories are inherited as (usually suboptimal) local minima when ReLUs are added. Thus, linear networks do not accurately describe the distribution of minima in non-linear networks.
A.4 ADDITIONAL RESULTS FOR SUBOPTIMAL LOCAL OPTIMA
Table 5 shows additional experiments. As in the previous experiment, we use gradient descent to train a full ResNet-18 architecture on CIFAR-10 until convergence from different initializations. We find that essentially the same results appear for the deeper architecture: initializing with very high bias leads to highly non-optimal solutions, in this case even solutions that are as bad as those obtained from a zero-norm initialization.
Further results on CIFAR-100 are shown in Tables 6 and 7. These experiments with MLP and ResNet-18 show the same trends as explained above, thus confirming that the results are not specific to the CIFAR-10 dataset.
A.5 DETAILS CONCERNING LOW-NORM REGULARIZATION EXPERIMENTS
Our experiments comparing regularizers all run for 300 epochs with an initial learning rate of 0.1, which decreases by a factor of 10 at epochs 100, 175, 225, and 275. We use the SGD optimizer with momentum 0.9.
We also tried negative weight decay coefficients, which leads to ResNet-18 CIFAR-10 performance above 90% while blowing up parameter norm, but this performance is still suboptimal and is not informative concerning the optimality of minimum norm solutions. One might wonder if high norm-bias coefficients lead to even lower parameter norm than low weight decay coefficients. This
question may not be meaningful in the case of networks with batch normalization. In the case of ResNet-20 with Fixup, which does not contain running mean and standard deviation, the average parameter `2 norm after training with weight decay is 24.51, while that of models trained with norm-bias is 31.62. Below, we perform the same tests on CIFAR-100, a substantially more difficult dataset. Weight decay coefficients are chosen to be the ones used in the original paper for the corresponding architecture. The norm-bias µ2/coefficient is chosen to be 8100/0.005, 7500/0.001, and 2000/0.0005 for ResNet-18, DenseNet-40, and ResNet-20 with Fixup, respectively, using the same heuristic as described in the main body.
A.6 DETAILS ON THE NEURAL TANGENT KERNEL EXPERIMENT
For further reference, we include details on the NTK sampling during training epochs in Figure 3. We see that the parameter norm (Right) behaves normally (all of these experiments are trained with a standard weight decay parameter of 0.0005), yet the NTK norm (Left) rapidly increases. Most of this increase, however, is a rescaling of the kernel, as the correlation plot (Middle) is much less drastic. We do see that most of the change happens in the very first epochs of training, whereas the kernel only changes slowly later on.
A.7 DETAILS ON RANKMIN AND RANKMAX
We employ routines to promote both low-rank and high-rank parameter matrices. We do this by computing approximations to the linear operators at each layer. Since convolutional layers are linear operations, we know that there is a matrix whose dimensions are the number of parameters in the input to the convolution and the number of parameters in the output of the convolution. In order to compute low-rank approximations of these operators, one could write down the matrix corresponding to the convolution, and then compute a low-rank approximation using a singular value decomposition (SVD). In order to make this problem computationally tractable, we used the method for computing singular values of convolution operators derived in Sedghi et al. (2018). We were then able to do low-rank approximation in the classical sense, by setting each singular value below some threshold to zero. In order to compute high-rank operators, we clipped the singular values so that when multiplying the SVD factors, we set each singular value to be equal to the minimum of some chosen constant and the true singular value. It is important to note here that these approximations to the convolutional layers, when done naively, can return convolutions with larger filters. To be precise, an n × n filter will map to a k × k filter through our rank modifications, where k ≥ n. We follow the method in Sedghi et al. (2018), where these filters are pruned back down by only using n × n entries in the output. When naturally training ResNet-18 and Skipless ResNet-18 models, we train with a batch size of 128 for 200 epochs with the learning rate initialized to 0.01 and decreasing by a factor of 10 at epochs 100, 150, 175, and 190 (for both CIFAR-10 and CIFAR-100). When adversarially training these two models on CIFAR-10 data, we use the same hyperparameters. However, in order to adversarially train on CIFAR-100, we train ResNet-18 with a batch size of 256 for 300 epochs with an initial learning rate of 0.1 and a decrease by a factor of 10 at epochs 200 and 250. For adversarially training Skipless ResNet-18 on CIFAR-100, we use a batch size of 256 for 350 epochs with an
initial learning rate of 0.1 and a decrease by a factor of 10 at epochs 200, 250, and 300. Adversarial training is done with an `∞ 7-step PGD attack with a step size of 2/255 and ε = 8/255. For all of the training described above we augment the data with random crops and horizontal flips.
During 15 additional epochs of training we manipulate the rank as follows. RankMin and RankMax protocols are employed periodically in the last 15 epochs taking care to make sure that the loss remains small. For these last epochs, the learning rate starts at 0.001 and decreases by a factor of 10 after the third and fifth epochs of the final 15 epochs. As shown in Table 10, we test the accuracy of each model on clean test data from the corresponding dataset, as well as on adversarial examples generated with 20-step PGD with ε = 8/255 (with step size equal to 2/255) and ε = 1/255 (with step size equal to .25/255).
When training multi-layer perceptrons on CIFAR-10, we train for 100 epochs with learning rate initialized to 0.01 and decreasing by a factor of 10 at epochs 60, 80 and 90. Then, we train the network for 8 additional epochs, during which RankMin and RankMax networks undergo rank manipulation. | 1. What are the main contributions of the paper regarding the properties of deep neural networks?
2. How does the paper contribute to the understanding of local minima in deep learning, and what are the practical implications of this study?
3. What are the findings of the paper regarding weight decay, and how do they challenge conventional beliefs about weight decay in deep learning?
4. What are the weaknesses of the paper, particularly in its empirical results related to kernel theory?
5. How does the paper challenge the common belief about low rank and its relationship with generalization and robustness against adversarial attacks?
6. Are there any questions or concerns regarding the methodology or presentation of the paper, such as the choice of the constant \mu in the norm-bias? | Review | Review
The authors look at empirical properties of deep neural networks and discuss their connection to past theoretical work on the following issues:
* Local minima: they give an example of a setting where bad local minima (far from the global minimum) are obtained. More specifically, they show such minima can be obtained by initializing with large random biases for MLPs with ReLU activation. They also provide a theoretical result that can be used to find a small set of such minima. I believe this is a useful incremental step towards a better understanding of local minima in deep learning, although it is not clear how many practical implications this has. One question that would ideally be answered is: in practical settings, to what degree does bad initialization cause bad performance specifically due to bad minima? (as opposed to, say, slow convergence or bad generalization performance).
* Weight decay: the authors penalize the size of the norm of the weights as it diverges from a constant, as opposed to when it diverges from 0 as is normally done for weight decay. They show that this works as well as or better than normal weight decay in a number of settings. This seems to put into question the belief sometimes held that solutions with smaller norms will generalize better.
* Kernel theory: the authors try to reproduce some of the empirical properties predicted in the Neural Tangent Kernel paper (Jacot et al., 2018) in particular by using more realistic architectures. The results, however, do not appear very conclusive. This might be the weakest part of the paper, as it is hard to draw anything conclusive from their empirical results.
* Rank: The authors challenge the common belief that low rank provides better generalization and more robustness towards adversarial attacks. When enforcing low- or high-rank weight matrices during training of ResNet-18 on CIFAR-10, the two settings have similar performance and are similarly robust to adversarial attacks, showing at least one counterexample.
I think overall this is a useful although somewhat incremental paper, that makes progress in the understanding of the behavior of neural networks in practice, and can help guide further theoretical work and the development of new and improved training techniques and initialization regimes for deep learning.
Other comments/notes:
* minor: the order of the last 2 sub topics covered (rank and NTK) is flipped in the introduction, compared to the abstract and the order of the chapters
* In the tables, confidence intervals are given; it would be nice to have more details on how they are computed (e.g., ±1.96 × standard error)
* how is the constant \mu in the norm-bias chosen? |
ICLR | Title
Byte-Level Recursive Convolutional Auto-Encoder for Text
Abstract
This article proposes to auto-encode text at byte-level using convolutional networks with a recursive architecture. The motivation is to explore whether it is possible to have scalable and homogeneous text generation at byte-level in a nonsequential fashion through the simple task of auto-encoding. We show that nonsequential text generation from a fixed-length representation is not only possible, but also achieved much better auto-encoding results than recurrent networks. The proposed model is a multi-stage deep convolutional encoder-decoder framework using residual connections (He et al., 2016), containing up to 160 parameterized layers. Each encoder or decoder contains a shared group of modules that consists of either pooling or upsampling layers, making the network recursive in terms of abstraction levels in representation. Results for 6 large-scale paragraph datasets are reported, in 3 languages including Arabic, Chinese and English. Analyses are conducted to study several properties of the proposed model.
1 INTRODUCTION
Recently, generating text using convolutional networks (ConvNets) has started to become an alternative to recurrent networks for sequence-to-sequence learning (Gehring et al., 2017). The dominant assumption for both of these approaches is that texts are generated one word at a time. Such a sequential generation process bears the risk of vanishing or exploding outputs or gradients (Bengio et al., 1994), which limits the length of its generated results. This limitation in scalability prompts us to explore whether non-sequential text generation is possible.
Meanwhile, text processing at levels lower than words – such as characters (Zhang et al., 2015) (Kim et al., 2016) and bytes (Gillick et al., 2016) (Zhang & LeCun, 2017) – is also being explored due to its promise in handling distinct languages in the same fashion. In particular, the work by Zhang & LeCun (2017) shows that simple one-hot encoding of bytes could give the best results for text classification in a variety of languages. The reason is that it achieved the best balance between computational performance and classification accuracy. Inspired by these results, this article explores auto-encoding for text using byte-level convolutional networks that have a recursive structure, as a first step towards low-level and non-sequential text generation.
For the task of text auto-encoding, we should avoid the use of common attention mechanisms like those used in machine translation (Bahdanau et al., 2015), because they always provide a direct information path that enables the auto-encoder to directly copy from the input. This diminishes the
purpose of studying the representational ability of different models. Therefore, all models considered in this article would encode to and decode from a fixed-length vector representation.
The paper by Zhang et al. (2017) is an earlier result on using word-level convolutional networks for text auto-encoding. This article differs from it in several key ways of using convolutional networks. First of all, our models work at the level of bytes instead of words, which arguably makes the problem more challenging. Secondly, our network is dynamic with a recursive structure that scales with the length of the input text, which by design avoids trivial solutions for auto-encoding such as the identity function. Thirdly, by using the latest design heuristics such as residual connections (He et al., 2016), our network can scale up to several hundred layers deep, compared to a static network that contains a few layers.
In this article, several properties of the auto-encoding model are studied. The following is a list.
1. Applying the model to 3 languages – Arabic, Chinese and English – shows that the model can handle all different languages in the same fashion with equally good accuracy.
2. Comparisons with long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) show a significant advantage of using convolutional networks for text auto-encoding.
3. We determined that a recursive convolutional decoder like ours can accurately produce the end-of-string byte, even though the decoding process is non-sequential.
4. By studying the auto-encoding error when the samples contain randomized noisy bytes, we show that the model does not degenerate to the identity function. However, it also cannot denoise the input very well.
5. The recursive structure requires a pooling layer. We compared average pooling, L2 pooling and max-pooling, and determined that max-pooling is the best choice.
6. The advantage of recursion is established by comparison against a static model that does not have shared module groups. This shows that linguistic heuristics such as recursion are useful for designing models for language processing.
7. We also explored models of different sizes by varying the maximum network depth from 40 to 320. The results show that deeper models give better results.
2 BYTE-LEVEL RECURSIVE CONVOLUTIONAL AUTO-ENCODER
In this section, we introduce the design of the convolutional auto-encoder model with a recursive structure. The model consists of 6 groups of modules, with 3 for the encoder and 3 for the decoder. The model first encodes a variable-length input into a fixed-length vector of size 1024, then decodes back to the same input length. The decoder architecture is a reverse mirror of the encoder. All convolutional layers in this article have zero-padding added to ensure that each convolutional layer outputs the same length as the input. They also all have feature size 256 and kernel size 3. All parameterized layers in our model use ReLU (Nair & Hinton, 2010) as the non-linearity.
In the encoder, the first group of modules consists of n temporal (1-D) convolutional layers. It accepts a one-hot encoded sequence of bytes as input, where each byte is encoded as a 256-dimension vector. This first group of modules transforms the input into an internal representation. We call this group of modules the prefix group. The second group of modules consists of n temporal convolutional layers plus one max-pooling layer of size 2. This group reduces the length of the input by a factor
of 2, and it can be applied again and again to recursively reduce the representation length. Therefore, we name this second group the recursion group. The recursion group is applied until the size of representation becomes 1024, which is actually a feature of dimension 256 and length 4. Then, following the final recursion group is a postfix group of n linear layers for feature transformation.
The decoder is a symmetric reverse mirror of the encoder. The decoder prefix group consists of n linear layers, followed by a decoder recursion group that expands the length of the representation by a factor of 2. This expansion is done at the first convolutional layer of this group, where it outputs 512 features that are then reshaped into 256 features. The reshaping process we use ensures that feature values correspond to nearby fields of view in the input, which is similar to the idea of sub-pixel convolution (or pixel shuffling) (Shi et al., 2016). Figure 3 depicts this reshaping process for transforming a representation of feature size 4 and length 8 to feature size 2 and length 16. After this recursion group is applied several times (the same number as for the encoder recursion group), a decoder postfix group of n convolutional layers is applied to decode the recursive features into a byte sequence.
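A minimal sketch of the length-doubling reshape described above, assuming activations laid out as (batch, channels, length) as in a typical 1-D convolutional framework; the function name is ours, and the exact interleaving order is an assumption.

```python
import torch

def upsample_reshape(x: torch.Tensor) -> torch.Tensor:
    # 1-D analogue of sub-pixel convolution: a (batch, 2C, L) tensor is
    # interleaved into (batch, C, 2L) so that each pair of adjacent output
    # positions comes from the two sub-channels at the same input position,
    # keeping output values tied to a nearby field of view in the input.
    batch, channels2, length = x.shape
    channels = channels2 // 2
    y = x.view(batch, channels, 2, length)   # split the channel dim into (C, 2)
    y = y.permute(0, 1, 3, 2)                # (batch, C, L, 2)
    return y.reshape(batch, channels, 2 * length)
```

In the decoder recursion group this would take the 512-feature output of the first convolution back to 256 features at twice the length.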
The final output of the decoder is interpreted as probabilities of bytes after passing through a softmax function. Therefore, the loss we use is simply the negative log-likelihood on the individual softmax outputs. It is worth noting that this does not imply that the output bytes are unconditionally independent of each other. For our non-sequential text decoder, the independence between output bytes is conditioned on the representation from the encoder, meaning that their mutual dependence is modeled by the decoder itself. Figure 2 illustrates the difference between sequential and non-sequential text generation using graphical models.
Depending on the length of input and size of the encoded representation, our model can be extremely deep. For example, with n = 8 and encoding dimension 1024 (reduced to a length 4 with 256 features), for a sample length of 1024 bytes, the entire model has 160 parameterized layers. Training such a deep dynamic model can be very challenging using stochastic gradient descent (SGD) due to the gradient vanishing problem (Bengio et al., 1994). Therefore, we use the recently proposed idea of residual connections (He et al., 2016) to make optimization easier. For every pair of adjacent parameterized layers, the input feature representation is passed through to the output by addition. We were unable to train a model designed in this fashion without such residual connections.
For all of our models, we use an encoded representation of dimension 1024 (recursed to length 4 with 256 features). For an input sample of arbitrary length l, we first append the end-of-sequence null byte to it, and then pad it to length 2^⌈log2(l+1)⌉ with all-zero vectors. This makes the input length a power of 2, since the recursion groups in both the encoder and decoder either reduce or expand the length of the representation by a factor of 2. If l < 4, the input is padded to a size of 4 and does not pass through the recursion groups. It is easy to see that the depth of this dynamic network for a sample of length l is on the order of log2 l, potentially making the hidden representations more efficient and easier to learn than those of recurrent networks, whose depth is linear in l.
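The padding rule above amounts to the following bookkeeping (a minimal sketch; how samples whose length is already an exact power of two are handled is our assumption):

```python
import math

def padded_length(l: int) -> int:
    # Append the end-of-sequence null byte, then pad with all-zero vectors up
    # to the next power of two, with a minimum length of 4.
    return max(4, 2 ** math.ceil(math.log2(l + 1)))

def recursion_count(l: int) -> int:
    # Number of times the recursion group halves the length before the
    # representation reaches length 4 (256 features, i.e. dimension 1024).
    return int(math.log2(padded_length(l))) - 2

# Example: a 1000-byte paragraph is padded to 1024 bytes and halved 8 times.
```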
3 RESULT FOR MULTI-LINGUAL AUTO-ENCODING
In this section, we show the results of our byte-level recursive convolutional auto-encoder.
3.1 DATASET
All of our datasets are at the level of paragraphs. Minimal pre-processing is applied to them since our model can be applied to all languages in the same fashion. We also constructed a dataset with samples mixed in all three languages to test the model’s ability to handle multi-lingual data.
enwiki. This dataset contains paragraphs from the English Wikipedia 1, constructed from the dump on June 1st, 2016. We were able to obtain 8,484,895 articles, which we split into 7,634,438 for training and 850,457 for testing. The numbers of paragraphs for training and testing are therefore 41,256,261 and 4,583,893 respectively.
Table 2: Training and testing byte-level errors
DATASET LANGUAGE TRAIN TEST
enwiki English 3.34% 3.34%
hudong Chinese 3.21% 3.16%
argiga Arabic 3.08% 3.09%
engiga English 2.09% 2.08%
zhgiga Chinese 5.11% 5.24%
allgiga Multi-lingual 2.48% 2.50%
hudong. This dataset contains paragraphs from the Chinese encyclopedia website baike.com 2. We crawled 1,799,095 article entries from it and used 1,618,817 for training and 180,278 for testing. The numbers of paragraphs for training and testing are 53,675,117 and 5,999,920.
argiga. This dataset contains paragraphs from the Arabic Gigaword Fifth Edition release (Parker et al., 2011a), which is a collection of Arabic newswire articles. In total there are 3,346,167 articles, and we use 3,011,403 for training and 334,764 for testing. As a result, we have 27,989,646 paragraphs for training and 3,116,719 for testing.
engiga. This dataset contains paragraphs from the English Gigaword Fifth Edition release (Parker et al., 2011c), which is a collection of English newswire articles. In total there are 9,876,096 articles, and we use 8,887,583 for training and 988,513 for testing. As a result, we have 116,456,520 paragraphs for training and 12,969,170 for testing.
zhgiga. This dataset contains paragraphs from the Chinese Gigaword Fifth Edition release (Parker et al., 2011b), which is a collection of Chinese newswire articles. In total there are 5,664,377 articles, and we use 5,097,198 for training and 567,179 for testing. As a result, we have 38,094,390 paragraphs for training and 4,237,643 for testing.
allgiga. Since the three Gigaword datasets are very similar to each other, we combined them to form a multi-lingual dataset of newswire article paragraphs. In this dataset, there are 18,886,640 articles with 16,996,184 for training and 1,890,456 for testing. The number of paragraphs for training and testing are 182,540,556 and 20,323,532 respectively.
Table 1 is a summary of these datasets. For such large datasets, testing time could be unacceptably long. Therefore, we report all the results based on 1,000,000 samples randomly sampled from either training or testing subsets depending on the scenario. Very little overfitting was observed even for our largest model.
3.2 RESULT
1https://en.wikipedia.org 2http://www.baike.com/
Regardless of dataset, all of our text autoencoders are trained with the same hyperparameters using stochastic gradient descent (SGD) with momentum (Polyak, 1964) (Sutskever et al., 2013). The model we used has n = 8 – that is, there are 8 parameterized layers in each of prefix, recursion and postfix module groups, for both the encoder and decoder. Each training epoch contains 1,000,000 steps, and each step is trained on a randomly selected sample with length up to 1024 bytes. Therefore, the maximum model depth is 160.
We only back-propagate through valid bytes in the output. Note that each sample contains an end-of-sequence byte (“null” byte) by design.
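A sketch of what back-propagating only through valid bytes could look like in a framework such as PyTorch; the variable names are ours, and the padded positions beyond the null byte are simply masked out of the loss.

```python
import torch
import torch.nn.functional as F

def masked_byte_loss(logits, targets, valid_mask):
    # logits: (batch, length, 256) decoder outputs before softmax,
    # targets: (batch, length) byte ids, valid_mask: (batch, length) booleans
    # marking real bytes (including the terminating null byte), not padding.
    per_byte = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    mask = valid_mask.float()
    return (per_byte * mask).sum() / mask.sum()
```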
We set the initial learning rate to 0.001 and halve it every 10 epochs. A momentum of 0.9 is applied to speed up training. A small weight decay of 0.00001 is used to stabilize training. Depending on the length of each sample, the encoder and decoder recursion groups are applied a certain number of times. We find that dividing the gradients of these recursion groups by the number of shared clones can speed up training. The training process stops at the 100th epoch.
Note that because the engiga and allgiga datasets have more than 100,000,000 training samples, when training stops the model has not seen the entirety of the training data. However, further training does not achieve any observable improvement. Table 2 details the byte-level errors for our model on all of the aforementioned datasets. These results indicate that our models can achieve very good error rates for auto-encoding in different languages. The result for the allgiga dataset also indicates that the model has no trouble learning from multi-lingual datasets that contain samples of very different languages.
4 DISCUSSION
This section offers comparisons with recurrent networks, and studies on a set of different properties of our proposed auto-encoding model. Most of these results are performed using the enwiki dataset.
4.1 COMPARISON WITH RECURRENT NETWORKS
We constructed a simple baseline recurrent network using “vanilla” long short-term memory units (Hochreiter & Schmidhuber, 1997). In this model, both input and output bytes are embedded into vectors of dimension 1024 so that we can use a hidden representation of dimension 1024. The encoder reads the text in reverse order, since Sutskever et al. (2014) observed that reversing the input sequence can improve the quality of outputs. The 1024-dimension hidden output of the last cell is used as the input to the decoder.
The decoder also has its input and output bytes embedded into vectors of dimension 1024 and uses a hidden representation of dimension 1024. During decoding, the most recently generated byte is fed to the next time step. This is called “teacher forcing”, which is observed to improve the auto-encoding result in our case. The decoding process uses a beam search algorithm of size 2. During learning, we only back-propagate through the most likely sequence after beam search.
Table 3 details the results for the LSTM. The byte-level errors are so large that the results of our models in Table 2 are better by at least an order of magnitude. The fundamental limitation of recurrent networks is that, regardless of the level of entity (word, character or byte), they can only remember up to around 50 of them accurately, and then fail to predict them accurately afterwards. By construction, our recursive non-sequential text generation process could hopefully be an alternative solution for this, as is already evident in the results here.
4.2 END OF SEQUENCE
One thing that makes a difference between sequential and non-sequential text generation is how to decide when to end the generated string of bytes. For a sequential generative process such as a recurrent decoder, we can stop when some end-of-sequence symbol is generated. For a non-sequential generative process, we can regard the first encountered end-of-sequence symbol as the mark for the end, even though the model will inevitably generate some extra symbols after it. Then, a natural question to ask is: is this simple way of determining end-of-sequence effective?
To answer this question, we computed the difference in end-of-sequence symbol positions between generated text and its groundtruth for 1,000,000 samples, for both the training and testing subsets of the enwiki dataset. What we discovered is that the distribution of length difference is highly concentrated at 0, at 99.63% for both training and testing. Figure 4 shows the full histogram, in which length differences other than 0 are barely visible. This suggests that our non-sequential text generation process can model the end-of-sequence position quite accurately. One reason for this is that every sample has an end-of-sequence symbol – the “null” byte – such that the network learned to model it quite early on during the training process.
4.3 RANDOM PERMUTATION OF SAMPLES
One potential problem specific to the task of auto-encoding is the risk of learning the degenerate solution – the identity function. One way to test this is to mutate the input bytes randomly and see whether the error rates match the mutation probability. We experimented with mutation probabilities from 0 to 1 with an interval of 0.1, and for each case we tested the byte-level errors for 100,000 samples in both the training and testing subsets of the enwiki dataset.
Note that we can compute the byte-level errors in 2 ways. The first is to compute the errors with respect to the groundtruth samples. If the solution has degenerated to the identity function, then the byte-level errors should correlate with the probability of mutation. The second is to compute the errors with respect to the mutated samples. If the solution has degenerated to the identity function,
then the byte-level errors should be near 0 regardless of the mutation probability. Figure 5 shows the results for these 2 ways of computing errors, and the result strongly indicates that the model has not degenerated to learning the identity function.
It is worth noting that the errors with respect to the groundtruth samples in Figure 5 also demonstrate that our model lacks the ability to denoise mutated samples. This can be seen from the phenomenon that the errors for each mutation probability are higher than the reference diagonal value, instead of lower. This is due to the lack of a denoising criterion in our training process.
4.4 SAMPLE LENGTH
We also conducted experiments to show how the byte-level errors vary with respect to the sample length. Figure 7 shows the histogram of sample lengths for all datasets. It indicates that a majority of paragraph samples can be well modeled under 1024 bytes. Figure 6 shows the byte-level error of our models with respect to the length of samples. This figure is produced by testing 1,000,000 samples from each of the training and testing subsets of the enwiki dataset. Each bin in the histogram represents a range of 64 with the indicated upper limit. For example, the error at 512 indicates errors aggregated for samples of length 449 to 512.
One interesting phenomenon is that the errors are highly correlated with the number of recursion groups applied for both the encoder and the decoder. In the plot, bins 64, 128, 192-256, 320-512, and 576-1024 represent recursion levels of 4, 5, 6, 7 and 8 respectively. The errors for the same recursion level are almost the same as each other, despite huge length differences when the recursion levels get deep. The reason for this is also related to the fact that there tend to be more shorter texts than longer ones in the dataset, as evidenced in Figure 7.
4.5 POOLING LAYERS
This section details an experiment studying how the training and testing errors vary with the choice of pooling layers in the encoder network. The experiments are conducted on the aforementioned model with n = 8, replacing the max-pooling layer in the encoder with average-pooling or L2-pooling layers. Table 4 details the results. The numbers strongly indicate that max-pooling is the best choice. Max-pooling selects the largest values in its field of view, helping the network to achieve better optima (Boureau et al., 2010).
4.6 RECURSION
Table 6: Byte-level errors depending on model depth
n DEPTH TRAIN TEST
2 40 9.05% 9.07%
4 80 5.07% 5.11%
8 160 3.34% 3.34%
16 320 2.91% 2.92%
The use of recursion in the proposed model comes from a linguistic intuition that the structure may help the model learn better representations. However, there is no guarantee that such intuition helps the model unless a comparison is done with a static model that takes fixed-length inputs and passes through a network with the same architecture as the recursion groups but without weight sharing.
Figure 8 shows the training and testing errors when training a static model with the same hyper-parameters. The static model takes 1024 bytes, and zero vectors are padded if the sample length is smaller. The recursion group is therefore applied 8 times in both the encoder and decoder, albeit with weights that are not shared. The result indicates that a recursive model not only learns faster, but also achieves better results. Table 5 lists the byte-level errors.
4.7 MODEL DEPTH
This section explores whether varying the model size makes a difference in the results. Table 6 lists the training and testing errors for different model depths with n ∈ {2, 4, 8, 16}. The results indicate that the best error rates are achieved with the largest model, with very little overfitting. This is partly due to the fact that our datasets are quite large for the models in question.
5 CONCLUSION
In this article, we propose to auto-encode text using a recursive convolutional network. The model contains 6 parts – 3 for the encoder and 3 for the decoder. The encoder and decoder both contain a prefix module group and a postfix module group for feature transformation. A recursion module group is included between the prefix and postfix groups for each of the encoder and decoder, which recursively shrinks or expands the length of the representation. As a result, our model essentially generates text in a non-sequential fashion.
Experiments using this model are done on 6 large-scale datasets in Arabic, Chinese and English. A comparison with recurrent networks is offered to show that our model achieves strong results in text auto-encoding. Properties of the proposed model are studied, including its ability to produce the end-of-sequence symbol, whether the model degenerates to the identity function, and variations of pooling layers, recursion and model depth. In the future, we hope to extend our models to non-sequential generative models without inputs, and use them for more sequence-to-sequence tasks such as machine translation. | 1. What is the focus of the paper, and what contribution does it make to the field of natural language processing?
2. What are the strengths of the proposed architecture, particularly in comparison to LSTM networks?
3. Are there any concerns regarding the choice of text encoding length and the minimal pre-processing applied to the datasets?
4. How does the reviewer assess the fairness of the comparison between the proposed CNN architecture and a simple LSTM network?
5. What are some minor issues with the paper, such as linguistic and spelling mistakes, and how do they affect its readability?
6. What is the significance of the reference to a 1994 work on the vanishing gradient problem, and how does it relate to the current research? | Review | Review
This paper presents a convolutional auto-encoder architecture for text encoding and generation. It works on the character level and contains a recursive structure which scales with the length of the input text. Building on the recent state-of-the-art in terms of architectural components, the paper shows the feasibility of this architecture and compares it to an LSTM, showing the CNN's superiority for auto-encoding.
The authors have decided to encode the text into a length of 1024 - Why? Would different lengths result in a better performance?
You write "Minimal pre-processing is applied to them since our model can be applied to all languages in the same fashion." Please be more specific. Which pre-processing do you apply for each dataset?
I wonder if the comparison to a simple LSTM network is fair. It would be better to use a 2- or 3-layer network. Also, BLSTMs are used nowadays.
A strong part of this paper is the large amount of investigation and extra experiments.
Minor issues:
Please correct minor linguistic mistakes as well as spelling mistakes. In Fig. 3, for example, the t of Different is missing.
An issue making it hard to read the paper is that most of the figures appear on another page than where they are mentioned in the text.
The authors have chosen to cite a work from 1994 for the vanishing gradient problem. Note that many (also earlier) works have reported this problem in different ways. A good analysis of this research is given in Hochreiter, S., Bengio, Y., Frasconi, P., and Schmidhuber, J. (2001) "Gradient flow in recurrent nets: the difficulty of learning long-term dependencies".
ICLR | Title
Byte-Level Recursive Convolutional Auto-Encoder for Text
Abstract
This article proposes to auto-encode text at byte-level using convolutional networks with a recursive architecture. The motivation is to explore whether it is possible to have scalable and homogeneous text generation at byte-level in a nonsequential fashion through the simple task of auto-encoding. We show that nonsequential text generation from a fixed-length representation is not only possible, but also achieved much better auto-encoding results than recurrent networks. The proposed model is a multi-stage deep convolutional encoder-decoder framework using residual connections (He et al., 2016), containing up to 160 parameterized layers. Each encoder or decoder contains a shared group of modules that consists of either pooling or upsampling layers, making the network recursive in terms of abstraction levels in representation. Results for 6 large-scale paragraph datasets are reported, in 3 languages including Arabic, Chinese and English. Analyses are conducted to study several properties of the proposed model.
1 INTRODUCTION
Recently, generating text using convolutional networks (ConvNets) has started to become an alternative to recurrent networks for sequence-to-sequence learning (Gehring et al., 2017). The dominant assumption for both of these approaches is that texts are generated one word at a time. Such a sequential generation process bears the risk of vanishing or exploding outputs or gradients (Bengio et al., 1994), which limits the length of its generated results. This limitation in scalability prompts us to explore whether non-sequential text generation is possible.
Meanwhile, text processing at levels lower than words – such as characters (Zhang et al., 2015) (Kim et al., 2016) and bytes (Gillick et al., 2016) (Zhang & LeCun, 2017) – is also being explored due to its promise in handling distinct languages in the same fashion. In particular, the work by Zhang & LeCun (2017) shows that simple one-hot encoding of bytes could give the best results for text classification in a variety of languages. The reason is that it achieved the best balance between computational performance and classification accuracy. Inspired by these results, this article explores auto-encoding for text using byte-level convolutional networks that have a recursive structure, as a first step towards low-level and non-sequential text generation.
For the task of text auto-encoding, we should avoid the use of common attention mechanisms like those used in machine translation (Bahdanau et al., 2015), because they always provide a direct information path that enables the auto-encoder to directly copy from the input. This diminishes the
purpose of studying the representational ability of different models. Therefore, all models considered in this article would encode to and decode from a fixed-length vector representation.
The paper by Zhang et al. (2017) is an earlier result on using word-level convolutional networks for text auto-encoding. This article differs from it in several key ways of using convolutional networks. First of all, our models work at the level of bytes instead of words, which arguably makes the problem more challenging. Secondly, our network is dynamic with a recursive structure that scales with the length of the input text, which by design avoids trivial solutions for auto-encoding such as the identity function. Thirdly, by using the latest design heuristics such as residual connections (He et al., 2016), our network can scale up to several hundred layers deep, compared to a static network that contains a few layers.
In this article, several properties of the auto-encoding model are studied. The following is a list.
1. Applying the model to 3 languages – Arabic, Chinese and English – shows that the model can handle all different languages in the same fashion with equally good accuracy.
2. Comparisons with long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) show a significant advantage of using convolutional networks for text auto-encoding.
3. We determined that a recursive convolutional decoder like ours can accurately produce the end-of-string byte, even though the decoding process is non-sequential.
4. By studying the auto-encoding error when the samples contain randomized noisy bytes, we show that the model does not degenerate to the identity function. However, it also cannot denoise the input very well.
5. The recursive structure requires a pooling layer. We compared average pooling, L2 pooling and max-pooling, and determined that max-pooling is the best choice.
6. The advantage of recursion is established by comparison against a static model that does not have shared module groups. This shows that linguistic heuristics such as recursion are useful for designing models for language processing.
7. We also explored models of different sizes by varying the maximum network depth from 40 to 320. The results show that deeper models give better results.
2 BYTE-LEVEL RECURSIVE CONVOLUTIONAL AUTO-ENCODER
In this section, we introduce the design of the convolutional auto-encoder model with a recursive structure. The model consists of 6 groups of modules, with 3 for the encoder and 3 for the decoder. The model first encodes a variable-length input into a fixed-length vector of size 1024, then decodes back to the same input length. The decoder architecture is a reverse mirror of the encoder. All convolutional layers in this article have zero-padding added to ensure that each convolutional layer outputs the same length as the input. They also all have feature size 256 and kernel size 3. All parameterized layers in our model use ReLU (Nair & Hinton, 2010) as the non-linearity.
In the encoder, the first group of modules consists of n temporal (1-D) convolutional layers. It accepts a one-hot encoded sequence of bytes as input, where each byte is encoded as a 256-dimension vector. This first group of modules transforms the input into an internal representation. We call this group of modules the prefix group. The second group of modules consists of n temporal convolutional layers plus one max-pooling layer of size 2. This group reduces the length of the input by a factor
of 2, and it can be applied again and again to recursively reduce the representation length. Therefore, we name this second group the recursion group. The recursion group is applied until the size of representation becomes 1024, which is actually a feature of dimension 256 and length 4. Then, following the final recursion group is a postfix group of n linear layers for feature transformation.
The decoder is a symmetric reverse mirror of the encoder. The decoder prefix group consists of n linear layers, followed by a decoder recursion group that expands the length of the representation by a factor of 2. This expansion is done at the first convolutional layer of this group, where it outputs 512 features that are then reshaped into 256 features. The reshaping process we use ensures that feature values correspond to nearby fields of view in the input, which is similar to the idea of sub-pixel convolution (or pixel shuffling) (Shi et al., 2016). Figure 3 depicts this reshaping process for transforming a representation of feature size 4 and length 8 to feature size 2 and length 16. After this recursion group is applied several times (the same number as for the encoder recursion group), a decoder postfix group of n convolutional layers is applied to decode the recursive features into a byte sequence.
The final output of the decoder is interpreted as probabilities of bytes after passing through a softmax function. Therefore, the loss we use is simply the negative log-likelihood on the individual softmax outputs. It is worth noting that this does not imply that the output bytes are unconditionally independent of each other. For our non-sequential text decoder, the independence between output bytes is conditioned on the representation from the encoder, meaning that their mutual dependence is modeled by the decoder itself. Figure 2 illustrates the difference between sequential and non-sequential text generation using graphical models.
Depending on the length of input and size of the encoded representation, our model can be extremely deep. For example, with n = 8 and encoding dimension 1024 (reduced to a length 4 with 256 features), for a sample length of 1024 bytes, the entire model has 160 parameterized layers. Training such a deep dynamic model can be very challenging using stochastic gradient descent (SGD) due to the gradient vanishing problem (Bengio et al., 1994). Therefore, we use the recently proposed idea of residual connections (He et al., 2016) to make optimization easier. For every pair of adjacent parameterized layers, the input feature representation is passed through to the output by addition. We were unable to train a model designed in this fashion without such residual connections.
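A sketch of the residual wiring described here, with every pair of adjacent parameterized layers wrapped by a skip connection; the module name is ours, and where exactly the ReLU sits relative to the addition is an assumption.

```python
import torch.nn as nn

class ResidualPair(nn.Module):
    # Two temporal convolutions (256 features, kernel size 3, zero padding so
    # the length is preserved) whose input is added back to their output.
    def __init__(self, features: int = 256):
        super().__init__()
        self.conv1 = nn.Conv1d(features, features, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(features, features, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.conv1(x))
        y = self.relu(self.conv2(y))
        return x + y
```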
For all of our models, we use an encoded representation of dimension 1024 (recursed to length 4 with 256 features). For an input sample of arbitrary length l, we first append the end-of-sequence null byte to it, and then pad it to length 2^⌈log2(l+1)⌉ with all-zero vectors. This makes the input length a power of 2, since the recursion groups in both the encoder and decoder either reduce or expand the length of the representation by a factor of 2. If l < 4, the input is padded to a size of 4 and does not pass through the recursion groups. It is easy to see that the depth of this dynamic network for a sample of length l is on the order of log2 l, potentially making the hidden representations more efficient and easier to learn than those of recurrent networks, whose depth is linear in l.
3 RESULT FOR MULTI-LINGUAL AUTO-ENCODING
In this section, we show the results of our byte-level recursive convolutional auto-encoder.
3.1 DATASET
All of our datasets are at the level of paragraphs. Minimal pre-processing is applied to them since our model can be applied to all languages in the same fashion. We also constructed a dataset with samples mixed in all three languages to test the model’s ability to handle multi-lingual data.
enwiki. This dataset contains paragraphs from the English Wikipedia 1, constructed from the dump on June 1st, 2016. We were able to obtain 8,484,895 articles, which we split into 7,634,438 for training and 850,457 for testing. The numbers of paragraphs for training and testing are therefore 41,256,261 and 4,583,893 respectively.
Table 2: Training and testing byte-level errors
DATASET LANGUAGE TRAIN TEST
enwiki English 3.34% 3.34%
hudong Chinese 3.21% 3.16%
argiga Arabic 3.08% 3.09%
engiga English 2.09% 2.08%
zhgiga Chinese 5.11% 5.24%
allgiga Multi-lingual 2.48% 2.50%
hudong. This dataset contains paragraphs from the Chinese encyclopedia website baike.com 2. We crawled 1,799,095 article entries from it and used 1,618,817 for training and 180,278 for testing. The numbers of paragraphs for training and testing are 53,675,117 and 5,999,920.
argiga. This dataset contains paragraphs from the Arabic Gigaword Fifth Edition release (Parker et al., 2011a), which is a collection of Arabic newswire articles. In total there are 3,346,167 articles, and we use 3,011,403 for training and 334,764 for testing. As a result, we have 27,989,646 paragraphs for training and 3,116,719 for testing.
engiga. This dataset contains paragraphs from the English Gigaword Fifth Edition release (Parker et al., 2011c), which is a collection of English newswire articles. In total there are 9,876,096 articles, and we use 8,887,583 for training and 988,513 for testing. As a result, we have 116,456,520 paragraphs for training and 12,969,170 for testing.
zhgiga. This dataset contains paragraphs from the Chinese Gigaword Fifth Edition release (Parker et al., 2011b), which is a collection of Chinese newswire articles. In total there are 5,664,377 articles, and we use 5,097,198 for training and 567,179 for testing. As a result, we have 38,094,390 paragraphs for training and 4,237,643 for testing.
allgiga. Since the three Gigaword datasets are very similar to each other, we combined them to form a multi-lingual dataset of newswire article paragraphs. In this dataset, there are 18,886,640 articles with 16,996,184 for training and 1,890,456 for testing. The number of paragraphs for training and testing are 182,540,556 and 20,323,532 respectively.
Table 1 is a summary of these datasets. For such large datasets, testing time could be unacceptably long. Therefore, we report all the results based on 1,000,000 samples randomly sampled from either training or testing subsets depending on the scenario. Very little overfitting was observed even for our largest model.
3.2 RESULT
1https://en.wikipedia.org 2http://www.baike.com/
Regardless of dataset, all of our text autoencoders are trained with the same hyperparameters using stochastic gradient descent (SGD) with momentum (Polyak, 1964) (Sutskever et al., 2013). The model we used has n = 8 – that is, there are 8 parameterized layers in each of prefix, recursion and postfix module groups, for both the encoder and decoder. Each training epoch contains 1,000,000 steps, and each step is trained on a randomly selected sample with length up to 1024 bytes. Therefore, the maximum model depth is 160.
We only back-propagate through valid bytes in the output. Note that each sample contains an end-of-sequence byte (“null” byte) by design.
We set the initial learning rate to 0.001 and halve it every 10 epochs. A momentum of 0.9 is applied to speed up training. A small weight decay of 0.00001 is used to stabilize training. Depending on the length of each sample, the encoder and decoder recursion groups are applied a certain number of times. We find that dividing the gradients of these recursion groups by the number of shared clones can speed up training. The training process stops at the 100th epoch.
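The gradient trick for the shared recursion group might look like the following, applied after the backward pass and before the optimizer step; the function and argument names are ours.

```python
def rescale_shared_gradients(recursion_group, num_applications: int):
    # The shared recursion group accumulates one gradient contribution per
    # application; dividing by the number of applications keeps the effective
    # update comparable across samples of different lengths.
    for p in recursion_group.parameters():
        if p.grad is not None:
            p.grad /= num_applications
```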
Note that because the engiga and allgiga datasets have more than 100,000,000 training samples, when training stops the model has not seen the entirety of the training data. However, further training does not achieve any observable improvement. Table 2 details the byte-level errors for our model on all of the aforementioned datasets. These results indicate that our models can achieve very good error rates for auto-encoding in different languages. The result for the allgiga dataset also indicates that the model has no trouble learning from multi-lingual datasets that contain samples of very different languages.
4 DISCUSSION
This section offers comparisons with recurrent networks, and studies on a set of different properties of our proposed auto-encoding model. Most of these results are performed using the enwiki dataset.
4.1 COMPARISON WITH RECURRENT NETWORKS
We constructed a simple baseline recurrent network using “vanilla” long short-term memory units (Hochreiter & Schmidhuber, 1997). In this model, both input and output bytes are embedded into vectors of dimension 1024 so that we can use a hidden representation of dimension 1024. The encoder reads the text in reverse order, since Sutskever et al. (2014) observed that reversing the input sequence can improve the quality of outputs. The 1024-dimension hidden output of the last cell is used as the input to the decoder.
The decoder also has its input and output bytes embedded into vectors of dimension 1024 and uses a hidden representation of dimension 1024. During decoding, the most recently generated byte is fed to the next time step. This is called “teacher forcing”, which is observed to improve the auto-encoding result in our case. The decoding process uses a beam search algorithm of size 2. During learning, we only back-propagate through the most likely sequence after beam search.
Table 3 details the results for the LSTM. The byte-level errors are so large that the results of our models in Table 2 are better by at least an order of magnitude. The fundamental limitation of recurrent networks is that, regardless of the level of entity (word, character or byte), they can only remember up to around 50 of them accurately, and then fail to predict them accurately afterwards. By construction, our recursive non-sequential text generation process could hopefully be an alternative solution for this, as is already evident in the results here.
4.2 END OF SEQUENCE
One thing that makes a difference between sequential and non-sequential text generation is how to decide when to end the generated string of bytes. For a sequential generative process such as a recurrent decoder, we can stop when some end-of-sequence symbol is generated. For a non-sequential generative process, we can regard the first encountered end-of-sequence symbol as the mark for the end, even though the model will inevitably generate some extra symbols after it. Then, a natural question to ask is: is this simple way of determining end-of-sequence effective?
To answer this question, we computed the difference in end-of-sequence symbol positions between generated text and its groundtruth for 1,000,000 samples, for both the training and testing subsets of the enwiki dataset. What we discovered is that the distribution of length difference is highly concentrated at 0, at 99.63% for both training and testing. Figure 4 shows the full histogram, in which length differences other than 0 are barely visible. This suggests that our non-sequential text generation process can model the end-of-sequence position quite accurately. One reason for this is that every sample has an end-of-sequence symbol – the “null” byte – such that the network learned to model it quite early on during the training process.
4.3 RANDOM PERMUTATION OF SAMPLES
One potential problem specific to the task of auto-encoding is the risk of learning the degenerate solution – the identity function. One way to test this is to mutate the input bytes randomly and see whether the error rates match the mutation probability. We experimented with mutation probabilities from 0 to 1 with an interval of 0.1, and for each case we tested the byte-level errors for 100,000 samples in both the training and testing subsets of the enwiki dataset.
Note that we can compute the byte-level errors in 2 ways. The first is to compute the errors with respect to the groundtruth samples. If the solution has degenerated to the identity function, then the byte-level errors should correlate with the probability of mutation. The second is to compute the errors with respect to the mutated samples. If the solution has degenerated to the identity function,
then the byte-level errors should be near 0 regardless of the mutation probability. Figure 5 shows the results for these 2 ways of computing errors, and the result strongly indicates that the model has not degenerated to learning the identity function.
It is worth noting that the errors with respect to the groundtruth samples in Figure 5 also demonstrate that our model lacks the ability to denoise mutated samples. This can be seen from the phenomenon that the errors for each mutation probability are higher than the reference diagonal value, instead of lower. This is due to the lack of a denoising criterion in our training process.
4.4 SAMPLE LENGTH
We also conducted experiments to show how the byte-level errors vary with respect to the sample length. Figure 7 shows the histogram of sample lengths for all datasets. It indicates that a majority of paragraph samples can be well modeled under 1024 bytes. Figure 6 shows the byte-level error of our models with respect to the length of samples. This figure is produced by testing 1,000,000 samples from each of the training and testing subsets of the enwiki dataset. Each bin in the histogram represents a range of 64 with the indicated upper limit. For example, the error at 512 indicates errors aggregated for samples of length 449 to 512.
One interesting phenomenon is that the errors are highly correlated with the number of recursion groups applied for both the encoder and the decoder. In the plot, bins 64, 128, 192-256, 320-512, and 576-1024 represent recursion levels of 4, 5, 6, 7 and 8 respectively. The errors for the same recursion level are almost the same as each other, despite huge length differences when the recursion levels get deep. The reason for this is also related to the fact that there tend to be more shorter texts than longer ones in the dataset, as evidenced in Figure 7.
4.5 POOLING LAYERS
This section details an experiment studying how the training and testing errors vary with the choice of pooling layers in the encoder network. The experiments are conducted on the aforementioned model with n = 8, replacing the max-pooling layer in the encoder with average-pooling or L2-pooling layers. Table 4 details the results. The numbers strongly indicate that max-pooling is the best choice. Max-pooling selects the largest values in its field of view, helping the network to achieve better optima (Boureau et al., 2010).
4.6 RECURSION
Table 6: Byte-level errors depending on model depth
n DEPTH TRAIN TEST
2 40 9.05% 9.07%
4 80 5.07% 5.11%
8 160 3.34% 3.34%
16 320 2.91% 2.92%
The use of recursion in the proposed model comes from a linguistic intuition that the structure may help the model learn better representations. However, there is no guarantee that such intuition helps the model unless a comparison is done with a static model that takes fixed-length inputs and passes through a network with the same architecture as the recursion groups but without weight sharing.
Figure 8 shows the training and testing errors when training a static model with the same hyper-parameters. The static model takes 1024 bytes, and zero vectors are padded if the sample length is smaller. The recursion group is therefore applied 8 times in both the encoder and decoder, albeit with weights that are not shared. The result indicates that a recursive model not only learns faster, but also achieves better results. Table 5 lists the byte-level errors.
4.7 MODEL DEPTH
This section explores whether varying the model size makes a difference in the results. Table 6 lists the training and testing errors for different model depths with n ∈ {2, 4, 8, 16}. The results indicate that the best error rates are achieved with the largest model, with very little overfitting. This is partly due to the fact that our datasets are quite large for the models in question.
5 CONCLUSION
In this article, we propose to auto-encode text using a recursive convolutional network. The model contains 6 parts – 3 for the encoder and 3 for the decoder. The encoder and decoder both contain a prefix module group and a postfix module group for feature transformation. A recursion module group is included between the prefix and postfix groups for each of the encoder and decoder, which recursively shrinks or expands the length of the representation. As a result, our model essentially generates text in a non-sequential fashion.
Experiments using this model are done on 6 large-scale datasets in Arabic, Chinese and English. A comparison with recurrent networks is offered to show that our model achieves strong results in text auto-encoding. Properties of the proposed model are studied, including its ability to produce the end-of-sequence symbol, whether the model degenerates to the identity function, and variations of pooling layers, recursion and model depth. In the future, we hope to extend our models to non-sequential generative models without inputs, and use them for more sequence-to-sequence tasks such as machine translation. | 1. What is the main contribution of the paper regarding autoencoding text?
2. What are the strengths of the proposed approach, particularly in the use of a convolutional network and shared filters?
3. What are the weaknesses of the paper, especially in terms of experimental design and comparison with other works?
4. Do you have any concerns about the ability of the autoencoder to learn meaningful representations in the hidden layer?
5. How does the reviewer assess the clarity and quality of the paper's content? | Review | Review
The authors propose autoencoding text using a byte-level encoding and a convolutional network with shared filters such that the encoder and decoder should exhibit recursive structure. They show that the model can handle various languages and run various experiments testing the ability of the autoencoder to reconstruct the text with varying lengths, perturbations, depths, etc.
The writing is fairly clear, though many of the charts and tables are hard to decipher without labels (and in Figure 8, training errors are not visible -- maybe they overlap completely?).
Main concern would be the lack of experiments showing that the network learns meaningful representations in the hidden layer. E.g. through semi-supervised learning experiments or experiments on learning semantic relatedness of sentences. Obvious citations such as https://arxiv.org/pdf/1511.06349.pdf and https://arxiv.org/pdf/1503.00075.pdf are missing, along with associated baselines. Although the experiment with randomly permuting the samples is nice, would hesitate to draw any conclusions without results on downstream tasks and a clearer survey of the literature. |
ICLR | Title
Byte-Level Recursive Convolutional Auto-Encoder for Text
Abstract
This article proposes to auto-encode text at byte-level using convolutional networks with a recursive architecture. The motivation is to explore whether it is possible to have scalable and homogeneous text generation at byte-level in a nonsequential fashion through the simple task of auto-encoding. We show that nonsequential text generation from a fixed-length representation is not only possible, but also achieved much better auto-encoding results than recurrent networks. The proposed model is a multi-stage deep convolutional encoder-decoder framework using residual connections (He et al., 2016), containing up to 160 parameterized layers. Each encoder or decoder contains a shared group of modules that consists of either pooling or upsampling layers, making the network recursive in terms of abstraction levels in representation. Results for 6 large-scale paragraph datasets are reported, in 3 languages including Arabic, Chinese and English. Analyses are conducted to study several properties of the proposed model.
1 INTRODUCTION
Recently, generating text using convolutional networks (ConvNets) has started to become an alternative to recurrent networks for sequence-to-sequence learning (Gehring et al., 2017). The dominant assumption for both of these approaches is that text is generated one word at a time. Such a sequential generation process bears the risk of output or gradient vanishing or exploding (Bengio et al., 1994), which limits the length of its generated results. This limitation in scalability prompts us to explore whether non-sequential text generation is possible.
Meanwhile, text processing from levels lower than words – such as characters (Zhang et al., 2015) (Kim et al., 2016) and bytes (Gillick et al., 2016) (Zhang & LeCun, 2017) – is also being explored due to its promise in handling distinct languages in the same fashion. In particular, the work by Zhang & LeCun (2017) shows that simple one-hot encoding on bytes could give the best results for text classification in a variety of languages, because it achieves the best balance between computational performance and classification accuracy. Inspired by these results, this article explores auto-encoding for text using byte-level convolutional networks that have a recursive structure, as a first step towards low-level and non-sequential text generation.
For the task of text auto-encoding, we should avoid the use of common attention mechanisms like those used in machine translation Bahdanau et al. (2015), because they always provide a direct information path that enables the auto-encoder to directly copy from the input. This diminishes the
purpose of studying the representational ability of different models. Therefore, all models considered in this article would encode to and decode from a fixed-length vector representation.
The paper by Zhang et al. (2017) was a prior result on using word-level convolutional networks for text auto-encoding. This article differs from it in several key ways of using convolutional networks. First of all, our models work at the level of bytes instead of words, which arguably makes the problem more challenging. Secondly, our network is dynamic with a recursive structure that scales with the length of the input text, which by design could avoid trivial solutions for auto-encoding such as the identity function. Thirdly, by using the latest design heuristics such as residual connections (He et al., 2016), our network can scale up to several hundred layers deep, compared to a static network that contains only a few layers.
In this article, several properties of the auto-encoding model are studied. The following is a list.
1. Applying the model to 3 languages – Arabic, Chinese and English – shows that the model can handle all different languages in the same fashion with equally good accuracy.
2. Comparisons with long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) show a significant advantage of using convolutional networks for text auto-encoding.
3. We determined that a recursive convolutional decoder like ours can accurately produce the end-of-string byte, even though the decoding process is non-sequential.
4. By studying the auto-encoding error when the samples contain randomized noisy bytes, we show that the model does not degenerate to the identity function. However, it also cannot denoise the input very well.
5. The recursive structure requires a pooling layer. We compared average pooling, L2 pooling and max-pooling, and determined that max-pooling is the best choice.
6. The advantage of recursion is established by comparison against a static model that does not have shared module groups. This shows that linguistic heuristics such as recursion are useful for designing models for language processing.
7. We also explored models of different sizes by varying the maximum network depth from 40 to 320. The results show that deeper models give better results.
2 BYTE-LEVEL RECURSIVE CONVOLUTIONAL AUTO-ENCODER
In this section, we introduce the design of the convolutional auto-encoder model with a recursive structure. The model consists of 6 groups of modules, with 3 for the encoder and 3 for the decoder. The model first encodes a variable-length input into a fixed-length vector of size 1024, then decodes back to the same input length. The decoder architecture is a reverse mirror of the encoder. All convolutional layers in this article have zero-padding added to ensure that each convolutional layer outputs the same length as the input. They also all have feature size 256 and kernel size 3. All parameterized layers in our model use ReLU (Nair & Hinton, 2010) as the non-linearity.
In the encoder, the first group of modules consists of n temporal (1-D) convolutional layers. It accepts a one-hot encoded sequence of bytes as input, where each byte is encoded as a 256-dimension vector. This first group of modules transforms the input into an internal representation. We call this group of modules the prefix group. The second group of modules consists of n temporal convolutional layers plus one max-pooling layer of size 2. This group reduces the length of input by a factor
of 2, and it can be applied again and again to recursively reduce the representation length. Therefore, we name this second group the recursion group. The recursion group is applied until the size of representation becomes 1024, which is actually a feature of dimension 256 and length 4. Then, following the final recursion group is a postfix group of n linear layers for feature transformation.
The decoder is a symmetric reverse mirror of the encoder. The decoder prefix group consists of n linear layers, followed by a decoder recursion group that expands the length of representation by a factor of 2. This expansion is done at the first convolutional layer of this group, where it outputs 512 features that will be reshaped into 256 features. The reshaping process we use ensures that feature values correspond to nearby fields of view in the input, which is similar to the idea of sub-pixel convolution (or pixel shuffling) (Shi et al., 2016). Figure 3 depicts this reshaping process for transforming a representation of feature size 4 and length 8 to feature size 2 and length 16. After this recursion group is applied several times (the same number as in the encoder recursion group), a decoder postfix group of n convolutional layers is applied to decode the recursive features into a byte sequence.
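As a rough illustration of the length-expanding reshape described above, the following minimal PyTorch sketch interleaves pairs of channels along the length axis, in the spirit of 1-D sub-pixel convolution; the exact tensor layout the authors use is an assumption here.

```python
import torch

def length_expand(x):
    # x: (batch, 2*C, L) -> (batch, C, 2*L)
    # Interleave pairs of channels along the length axis so that adjacent
    # output positions come from features with nearby receptive fields,
    # similar in spirit to 1-D sub-pixel convolution / pixel shuffling.
    b, c2, l = x.shape
    assert c2 % 2 == 0
    c = c2 // 2
    x = x.view(b, c, 2, l)         # split channels into (feature, offset)
    x = x.permute(0, 1, 3, 2)      # (b, c, l, 2)
    return x.reshape(b, c, 2 * l)  # interleave along the length axis

x = torch.randn(4, 512, 8)         # e.g. 512 features of length 8
print(length_expand(x).shape)      # torch.Size([4, 256, 16])
```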
The final output of the decoder is interpreted as probabilities of bytes after passing through a softmax function. Therefore, the loss we use is simply negative-log likelihood on the individual softmax outputs. It is worth noting that this does not imply that the output bytes are unconditionally independent of each other. For our non-sequential text decoder, the independence between output bytes is conditioned on the representation from the encoder, meaning that their mutual dependence is modeled by the decoder itself. Figure 2 illustrates the difference between sequential and non-sequential text generation using graphical models.
Depending on the length of input and size of the encoded representation, our model can be extremely deep. For example, with n = 8 and encoding dimension 1024 (reduced to a length 4 with 256 features), for a sample length of 1024 bytes, the entire model has 160 parameterized layers. Training such a deep dynamic model can be very challenging using stochastic gradient descent (SGD) due to the gradient vanishing problem (Bengio et al., 1994). Therefore, we use the recently proposed idea of residual connections (He et al., 2016) to make optimization easier. For every pair of adjacent parameterized layers, the input feature representation is passed through to the output by addition. We were unable to train a model designed in this fashion without such residual connections.
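A minimal sketch of the kind of residual building block this describes is given below; the exact placement of the activations relative to the addition is our assumption, not taken from the paper.

```python
import torch.nn as nn

class ResidualConvPair(nn.Module):
    # Two temporal convolutions whose output is added back to the input,
    # the pattern repeated inside each prefix/recursion/postfix group.
    def __init__(self, features=256, kernel=3):
        super().__init__()
        self.conv1 = nn.Conv1d(features, features, kernel, padding=kernel // 2)
        self.conv2 = nn.Conv1d(features, features, kernel, padding=kernel // 2)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, features, length); residual connection passes the input
        # representation through to the output by addition.
        return x + self.conv2(self.relu(self.conv1(self.relu(x))))
```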
For all of our models, we use an encoded representation of dimension 1024 (recursed to a length of 4 with 256 features). For an input sample of arbitrary length l, we first append the end-of-sequence null byte to it, and then pad it to length 2^⌈log2(l+1)⌉ with all-zero vectors. This makes the input length a base-2 exponential of some integer, since the recursion groups in both encoder and decoder either reduce or expand the length of representation by a factor of 2. If l < 4, the sample is padded to size 4 and does not pass through the recursion groups. It is easy to see that the depth of this dynamic network for a sample of length l is on the order of log2 l, potentially making the hidden representations more efficient and easier to learn than those of recurrent networks, whose depth is linear in l.
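The padding and recursion-depth bookkeeping can be sketched as follows (helper names are ours, for illustration only):

```python
import math

def padded_length(l, min_len=4):
    # Append the end-of-sequence byte, then pad up to the next power of two,
    # never below min_len (the final representation length of 4).
    return max(min_len, 2 ** math.ceil(math.log2(l + 1)))

def num_recursions(l, min_len=4):
    # Number of times the recursion group (which halves the length) is applied.
    return int(math.log2(padded_length(l, min_len) // min_len))

for l in [3, 100, 1000]:
    print(l, padded_length(l), num_recursions(l))  # e.g. 1000 -> 1024, 8 recursions
```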
3 RESULT FOR MULTI-LINGUAL AUTO-ENCODING
In this section, we show the results of our byte-level recursive convolutional auto-encoder.
3.1 DATASET
All of our datasets are at the level of paragraphs. Minimal pre-processing is applied to them since our model can be applied to all languages in the same fashion. We also constructed a dataset with samples mixed in all three languages to test the model’s ability to handle multi-lingual data.
enwiki. This dataset contains paragraphs from the English Wikipedia 1, constructed from the dump on June 1st, 2016. We were able to obtain 8,484,895 articles, and then split them into 7,634,438 for training and 850,457 for testing. The number of paragraphs for training and testing are therefore 41,256,261 and 4,583,893 respectively.
Table 2: Training and testing byte-level errors

DATASET   LANGUAGE        TRAIN    TEST
enwiki    English         3.34%    3.34%
hudong    Chinese         3.21%    3.16%
argiga    Arabic          3.08%    3.09%
engiga    English         2.09%    2.08%
zhgiga    Chinese         5.11%    5.24%
allgiga   Multi-lingual   2.48%    2.50%
hudong. This dataset contains paragraphs from the Chinese encyclopedia website baike.com 2. We crawled 1,799,095 article entries from it and used 1,618,817 for training and 180,278 for testing. The number of paragraphs for training and testing are 53,675,117 and 5,999,920.
argiga. This dataset contains paragraphs from the Arabic Gigaword Fifth Edition release (Parker et al., 2011a), which is a collection of Arabic newswire articles. In total there are 3,346,167 articles, and we use 3,011,403 for training and 334,764 for testing. As a result, we have 27,989,646 paragraphs for training and 3,116,719 for testing.
engiga. This dataset contains paragraphs from the English Gigaword Fifth Edition release (Parker et al., 2011c), which is a collection of English newswire articles. In total there are 9,876,096 articles, and we use 8,887,583 for training and 988,513 for testing. As a result, we have 116,456,520 paragraphs for training and 12,969,170 for testing.
zhgiga. This dataset contains paragraphs from the Chinese Gigaword Fifth Edition release (Parker et al., 2011b), which is a collection of Chinese newswire articles. In total there are 5,664,377 articles, and we use 5,097,198 for training and 567,179 for testing. As a result, we have 38,094,390 paragraphs for training and 4,237,643 for testing.
allgiga. Since the three Gigaword datasets are very similar to each other, we combined them to form a multi-lingual dataset of newswire article paragraphs. In this dataset, there are 18,886,640 articles with 16,996,184 for training and 1,890,456 for testing. The number of paragraphs for training and testing are 182,540,556 and 20,323,532 respectively.
Table 1 is a summary of these datasets. For such large datasets, testing time could be unacceptably long. Therefore, we report all the results based on 1,000,000 samples randomly sampled from either training or testing subsets depending on the scenario. Very little overfitting was observed even for our largest model.
3.2 RESULT
Footnotes: 1 https://en.wikipedia.org  2 http://www.baike.com/
Regardless of dataset, all of our text autoencoders are trained with the same hyperparameters using stochastic gradient descent (SGD) with momentum (Polyak, 1964) (Sutskever et al., 2013). The model we used has n = 8 – that is, there are 8 parameterized layers in each of prefix, recursion and postfix module groups, for both the encoder and decoder. Each training epoch contains 1,000,000 steps, and each step is trained on a randomly selected sample with length up to 1024 bytes. Therefore, the maximum model depth is 160.
We only back-propagate through valid bytes in the output. Note that each sample contains an end-of-sequence byte (the "null" byte) by design.
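A minimal sketch of such a masked byte-level loss is shown below; the exact padding convention is our assumption.

```python
import torch
import torch.nn.functional as F

def masked_byte_loss(logits, targets, lengths):
    # logits:  (batch, max_len, 256) unnormalized byte scores from the decoder
    # targets: (batch, max_len) ground-truth byte ids (null byte included)
    # lengths: (batch,) number of valid bytes per sample, incl. the null byte
    max_len = targets.size(1)
    mask = torch.arange(max_len, device=targets.device)[None, :] < lengths[:, None]
    loss = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    return (loss * mask).sum() / mask.sum()  # back-propagate only through valid bytes
```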
We set the initial learning rate to 0.001, and halve it every 10 epochs. A momentum of 0.9 is applied to speed up training. A small weight decay of 0.00001 is used to stabilize training. Depending on the length of each sample, the encoder and decoder recursion groups are applied a certain number of times. We find that dividing the gradients of these recursion groups by the number of shared clones can speed up training. The training process stops at the 100th epoch.
Note that because the engiga and allgiga datasets have more than 100,000,000 training samples, when training stops the model has not seen the entirety of the training data. However, further training does not achieve any observable improvement. Table 2 details the byte-level errors for our model on all of the aforementioned datasets. These results indicate that our models can achieve very good error rates for auto-encoding in different languages. The result for the allgiga dataset also indicates that the model has no trouble learning from multi-lingual datasets that contain samples of very different languages.
4 DISCUSSION
This section offers comparisons with recurrent networks, and studies on a set of different properties of our proposed auto-encoding model. Most of these results are performed using the enwiki dataset.
4.1 COMPARISON WITH RECURRENT NETWORKS
We constructed a simple baseline recurrent network using "vanilla" long short-term memory units (Hochreiter & Schmidhuber, 1997). In this model, both input and output bytes are embedded into vectors of dimension 1024 so that we can use a hidden representation of dimension 1024. The encoder reads the text in reverse order, since Sutskever et al. (2014) observed that reversing the input sequence can improve the quality of outputs. The 1024-dimension hidden output of the last cell is used as the input for the decoder.
The decoder also has its input and output bytes embedded into vectors of dimension 1024 and uses a hidden representation of dimension 1024. During decoding, the most recently generated byte is fed to the next time step. This "teacher forcing" was observed to improve the auto-encoding result in our case. The decoding process uses a beam search algorithm of size 2. During learning, we only back-propagate through the most likely sequence after beam search.
Table 3 details the results for the LSTM. The byte-level errors are so large that the results of our models in Table 2 are better by at least an order of magnitude. The fundamental limitation of recurrent networks is that, regardless of the level of entity (word, character or byte), they can remember only around 50 of them accurately, and fail to predict accurately afterwards. By construction, our recursive non-sequential text generation process could be an alternative solution to this, as is already evident in the results here.
4.2 END OF SEQUENCE
One thing that distinguishes sequential from non-sequential text generation is how to decide when to end the generated string of bytes. For a sequential generative process such as a recurrent decoder, we can stop when some end-of-sequence symbol is generated. For a non-sequential generative process, we can regard the first encountered end-of-sequence symbol as the end marker, even though the model will inevitably generate some extra symbols after it. A natural question to ask is: is this simple way of determining the end of sequence effective?
To answer this question, we computed the difference in end-of-sequence position between generated text and its groundtruth for 1,000,000 samples, for both the training and testing subsets of the enwiki dataset. What we discovered is that the distribution of length differences is highly concentrated at 0, at 99.63% for both training and testing. Figure 4 shows the full histogram, in which length differences other than 0 are barely visible. This suggests that our non-sequential text generation process can model the end-of-sequence position quite accurately. One reason is that every sample has an end-of-sequence symbol – the "null" byte – so the network learns to model it early on during training.
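Decoding with this convention amounts to truncating at the first null byte, e.g.:

```python
def truncate_at_null(byte_ids, null_id=0):
    # Keep everything before the first end-of-sequence ("null") byte;
    # extra symbols generated after it are discarded.
    return byte_ids[:byte_ids.index(null_id)] if null_id in byte_ids else byte_ids

print(truncate_at_null([72, 105, 0, 42, 7]))  # [72, 105]
```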
4.3 RANDOM PERMUTATION OF SAMPLES
One potential problem specific to the task of auto-encoding is the risk of learning the degenerate solution – the identity function. One way to test this is to mutate the input bytes randomly and see whether the error rates match the mutation probability. We experimented with mutation probabilities from 0 to 1 with an interval of 0.1, and for each case we tested the byte-level errors for 100,000 samples in both the training and testing subsets of the enwiki dataset.
Note that we can compute the byte-level errors in 2 ways. The first is to compute the errors with respect to the groundtruth samples. If the solution has degenerated to the identity function, then the byte-level errors should correlate with the probability of mutation. The second is to compute the errors with respect to the mutated samples. If the solution has degenerated to the identity function, then the byte-level errors should be near 0 regardless of the mutation probability. Figure 5 shows the results for these 2 ways of computing errors, and the result strongly indicates that the model has not degenerated to the identity function.
It is worth noting that the errors with respect to the groundtruth samples in Figure 5 also demonstrate that our model lacks the ability to denoise mutated samples. This can be seen from the fact that the errors for each mutation probability are higher than the reference diagonal value, instead of lower. This is due to the lack of a denoising criterion in our training process.
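A sketch of the mutation procedure and the two error notions (the autoencode call below is a stand-in for the trained model, not an actual API):

```python
import random

def mutate(sample, p):
    # Replace each byte independently with a uniformly random byte with prob. p.
    return [random.randrange(256) if random.random() < p else b for b in sample]

def byte_error(pred, ref):
    return sum(a != b for a, b in zip(pred, ref)) / len(ref)

# noisy = mutate(x, p)
# out = autoencode(noisy)
# byte_error(out, x)      # vs. groundtruth: tracks p if the model is the identity
# byte_error(out, noisy)  # vs. mutated input: ~0 if the model is the identity
```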
4.4 SAMPLE LENGTH
We also conducted experiments to show how the byte-level errors vary with respect to the sample length. Figure 7 shows the histogram of sample lengths for all datasets. It indicates that a majority of paragraph samples can be well modeled under 1024 bytes. Figure 6 shows the byte-level error of our models with respect to the length of samples. This figure is produced by testing 1,000,000 samples from each of the training and testing subsets of the enwiki dataset. Each bin in the histogram represents a range of 64 with the indicated upper limit. For example, the error at 512 indicates errors aggregated for samples of length 449 to 512.
One interesting phenomenon is that the errors are highly correlated with the number of recursion groups applied for both the encoder and the decoder. In the plot, bins 64, 128, 192-256, 320-512, 576-1024 represent recursion levels of 4, 5, 6, 7, 8 respectively. The errors for the same recursion level are almost identical, despite large length differences when the recursion levels get deep. The reason for this is also related to the fact that there tend to be more shorter texts than longer ones in the dataset, as evidenced in Figure 7.
4.5 POOLING LAYERS
This section details an experiment studying how the training and testing errors vary with the choice of pooling layer in the encoder network. The experiments are conducted on the aforementioned model with n = 8, replacing the max-pooling layer in the encoder with average-pooling or L2-pooling layers. Table 4 details the result. The numbers strongly indicate that max-pooling is the best choice. Max-pooling selects the largest values in its field of view, helping the network to achieve better optima (Boureau et al., 2010).
4.6 RECURSION
Table 6: Byte-level errors depending on model depth

n    DEPTH   TRAIN   TEST
2    40      9.05%   9.07%
4    80      5.07%   5.11%
8    160     3.34%   3.34%
16   320     2.91%   2.92%
The use of recursion in the proposed model comes from a linguistic intuition that such structure may help the model learn better representations. However, there is no guarantee that this intuition helps the model unless a comparison is made with a static model that takes fixed-length inputs and passes them through a network with the same architecture as the recursion groups but without weight sharing.
Figure 8 shows the training and testing errors when training a static model with the same hyper-parameters. The static model takes 1024 bytes, and zero vectors are padded if the sample length is smaller. The recursion group is therefore applied 8 times in both the encoder and decoder, albeit with weights that are not shared. The result indicates that a recursive model not only learns faster, but can also achieve better results. Table 5 lists the byte-level errors.
4.7 MODEL DEPTH
This section explores whether varying the model size makes a difference in the result. Table 6 lists the training and testing errors of different model depths with n ∈ {2, 4, 8, 16}. The result indicates that the best error rates are achieved with the largest model, with very little overfitting. This is partly due to the fact that our datasets are quite large for the models in question.
5 CONCLUSION
In this article, we propose to auto-encode text using a recursive convolutional network. The model contains 6 parts – 3 for the encoder and 3 for the decoder. The encoder and decoder both contain a prefix module group and a postfix module group for feature transformation. A recursion module group is included between the prefix and postfix groups for each of the encoder and decoder, which recursively shrinks or expands the length of the representation. As a result, our model essentially generates text in a non-sequential fashion.
Experiments using this model are done on 6 large-scale datasets in Arabic, Chinese and English. Comparison with recurrent networks is offered to show that our model achieves strong results in text auto-encoding. Properties of the proposed model are studied, including its ability to produce the end-of-sequence symbol, whether the model degenerates to the identity function, and variations of pooling layers, recursion and depth of models. In the future, we hope to extend our models to non-sequential generative models without inputs, and use them for more sequence-to-sequence tasks such as machine translation. | 1. How does the paper aim to illustrate the representation learning ability of the convolutional autoencoder with residual connections?
2. What are the strengths and weaknesses of the proposed architecture compared to an LSTM?
3. What kind of minimal preprocessing is done on the text, and how is the space character encoded?
4. Why was the encoded dimension fixed at 1024, and what is the definition of a sample here?
5. Can the authors provide more clarification on the comparisons between Table 2 and Table 3, particularly regarding the performance difference between the LSTM and the proposed convolutional autoencoder?
6. How do the results change for different subset selections of training and test samples, and will Fig. 7 and Fig. 6 still hold?
7. What is the x-axis in Fig. 8, and can the authors label axes on all figures?
8. Would a final result on the complete dataset be useful to illustrate if the model learns well with lots of data, and can the authors provide a table showing generated sample text to clarify the power of the model?
9. With the results presented, how can we determine what exactly the model learns and why? | Review | Review
The paper proposes a convolutional autoencoder with residual connections to encode text at the byte level and aims to illustrate its representation learning ability. The authors apply the proposed architecture to 3 languages and run comparisons with an LSTM. Experimental results with different perturbations of samples, pooling layers, and sample lengths are presented.
The writing is fairly clear, however the presentation of tables and figures could be done better, for example, Fig. 2 is referred to in page 3, Table 2 which contains results is referred to on page 5, Fig 4 is referred to in page 6 and appears in page 5, etc.
What kind of minimal preprocessing is done on the text? Are punctuations removed? Is casing retained? How is the space character encoded?
Why was the encoded dimension always fixed at 1024? What is the definition of a sample here?
The description of the various data sets could be moved to a table/Appendix, particularly since most of the results are presented on the enwiki dataset, which would lead to better readability of the paper. Also results are presented only on a random 1M sample selected from these data sets, so the need for this whole page goes away.
Comparing Table 2 and Table 3, the LSTM is at 67% error on the test set while the proposed convolutional autoencoder is at 3.34%. Are these numbers on the same test set? While the argument that the LSTM does not generalize well due to the inherent memory learnt is reasonable, the differences in performance cannot be explained away with this. Can you please clarify this further?
It appears that the byte error shoots up for sequences of length 512+ (fig. 6 and fig. 7) and seems more correlated with the amount of data than with recursion levels.
How do you expect these results to change for a different subset selection of training and test samples? Will Fig. 7 and Fig. 6 still hold?
In Fig. 8, unless the static train and test errors are exactly on top of the recursive errors, they are not visible. What is the x-axis in Fig. 8? Please also label axes on all figures.
While the datasets are large and would take a lot of time to process for each case study, a final result on the complete data set, to illustrate if the model does learn well with lots of data would have been useful. A table showing generated sample text would also clarify the power of the model.
With the results presented, with a single parameter setting, it's hard to determine what exactly the model learns and why.
ICLR | Title
On Universal Equivariant Set Networks
Abstract
Using deep neural networks that are either invariant or equivariant to permutations in order to learn functions on unordered sets has become prevalent. The most popular, basic models are DeepSets (Zaheer et al., 2017) and PointNet (Qi et al., 2017). While known to be universal for approximating invariant functions, DeepSets and PointNet are not known to be universal when approximating equivariant set functions. On the other hand, several recent equivariant set architectures have been proven equivariant universal (Sannai et al., 2019; Keriven & Peyré, 2019), however these models either use layers that are not permutation equivariant (in the standard sense) and/or use higher order tensor variables which are less practical. There is, therefore, a gap in understanding the universality of popular equivariant set models versus theoretical ones. In this paper we close this gap by proving that: (i) PointNet is not equivariant universal; and (ii) adding a single linear transmission layer makes PointNet universal. We call this architecture PointNetST and argue it is the simplest permutation equivariant universal model known to date. Another consequence is that DeepSets is universal, and also PointNetSeg, a popular point cloud segmentation network (used e.g., in (Qi et al., 2017)) is universal. The key theoretical tool used to prove the above results is an explicit characterization of all permutation equivariant polynomial layers. Lastly, we provide numerical experiments validating the theoretical results and comparing different permutation equivariant models.
1 INTRODUCTION
Many interesting tasks in machine learning can be described by functions F that take as input a set, X = (x_1, . . . , x_n), and output some per-element features or values, F(X) = (F(X)_1, . . . , F(X)_n). Permutation equivariance is the property required of F so it is well-defined. Namely, it assures that reshuffling the elements in X and applying F results in the same output, reshuffled in the same manner. For example, if X̃ = (x_2, x_1, x_3, . . . , x_n) then F(X̃) = (F(X)_2, F(X)_1, F(X)_3, . . . , F(X)_n).
Building neural networks that are permutation equivariant by construction proved extremely useful in practice. Arguably the most popular models are DeepSets Zaheer et al. (2017) and PointNet Qi et al. (2017). These models enjoy small number of parameters, low memory footprint and computational efficiency along with high empirical expressiveness. Although both DeepSets and PointNet are known to be invariant universal (i.e., can approximate arbitrary invariant continuous functions) they are not known to be equivariant universal (i.e., can approximate arbitrary equivariant continuous functions).
On the other hand, several researchers have suggested theoretical permutation equivariant models and proved they are equivariant universal. Sannai et al. (2019) builds a universal equivariant network by taking n copies of (n− 1)-invariant networks and combines them with a layer that is not permutation invariant in the standard (above mentioned) sense. Keriven & Peyré (2019) solves a more general problem of building networks that are equivariant universal over arbitrary high order input tensors Rnd (including graphs); their construction, however, uses higher order tensors as hidden variables
which is of less practical value. Yarotsky (2018) proves that neural networks constructed using a finite set of invariant and equivariant polynomial layers are also equivariant universal, however his network is not explicit (i.e., the polynomials are not characterized for the equivariant case) and also of less practical interest due to the high degree polynomial layers.
In this paper we close the gap between the practical and theoretical permutation equivariant constructions and prove: Theorem 1.
(i) PointNet is not equivariant universal.
(ii) Adding a single linear transmission layer (i.e.,X 7→ 11TX) to PointNet makes it equivariant universal.
(iii) Using ReLU activations, the minimal width required for a universal permutation equivariant network satisfies ω ≤ k_out + k_in + (n+k_in choose k_in).
This theorem suggests that, arguably, PointNet with an addition of a single linear layer is the simplest universal equivariant network, able to learn arbitrary continuous equivariant functions of sets. An immediate corollary of this theorem is Corollary 1. DeepSets and PointNetSeg are universal.
PointNetSeg is a network used often for point cloud segmentation (e.g., in Qi et al. (2017)). One of the benefit of our result is that it provides a simple characterization of universal equivariant architectures that can be used in the network design process to guarantee universality.
The theoretical tool used for the proof of Theorem 1 is an explicit characterization of the permutation equivariant polynomials over sets of vectors in R^k using power-sum multi-symmetric polynomials. We prove:
Theorem 2. Let P : R^{n×k} → R^{n×l} be a permutation equivariant polynomial map. Then,
P(X) = ∑_{|α|≤n} b_α q_α^T,    (1)
where b_α = (x_1^α, . . . , x_n^α)^T and q_α = (q_{α,1}, . . . , q_{α,l})^T, with q_{α,j} = q_{α,j}(s_1, . . . , s_t), t = (n+k choose k), polynomials; s_j(X) = ∑_{i=1}^n x_i^{α_j} are the power-sum multi-symmetric polynomials. On the other hand, every polynomial map P satisfying Equation 1 is equivariant.
This theorem, which extends Proposition 2.27 in Golubitsky & Stewart (2002) to sets of vectors using multivariate polynomials, lends itself to expressing arbitrary equivariant polynomials as a composition of entry-wise continuous functions and a single linear transmission, which in turn facilitates the proof of Theorem 1.
We conclude the paper by numerical experiments validating the theoretical results and testing several permutation equivariant networks for the tasks of set classification and regression.
2 PRELIMINARIES
Equivariant maps. Vectors x ∈ R^k are by default column vectors; 0, 1 are the all-zero and all-one vectors/tensors; e_i is the i-th standard basis vector; I is the identity matrix; all dimensions are inferred from context or mentioned explicitly. We represent a set of n vectors in R^k as a matrix X ∈ R^{n×k} and denote X = (x_1, x_2, . . . , x_n)^T, where x_i ∈ R^k, i ∈ [n], are the rows of X. We denote by S_n the permutation group of [n]; its action on X is defined by σ · X = (x_{σ^{-1}(1)}, x_{σ^{-1}(2)}, . . . , x_{σ^{-1}(n)})^T, σ ∈ S_n. That is, σ reshuffles the rows of X. The natural class of maps assigning a value or feature vector to every element in an input set is permutation equivariant maps: Definition 1. A map F : R^{n×k} → R^{n×l} satisfying F(σ · X) = σ · F(X) for all σ ∈ S_n and X ∈ R^{n×k} is called permutation equivariant.
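As a quick numerical sanity check of Definition 1 (our own illustration, not from the paper), one can verify F(σ · X) = σ · F(X) on random inputs and permutations:

```python
import numpy as np

def is_equivariant(F, n=6, k=3, trials=20, tol=1e-6):
    # Numerically check F(sigma . X) == sigma . F(X) for random X and permutations.
    for _ in range(trials):
        X = np.random.rand(n, k)
        perm = np.random.permutation(n)
        if not np.allclose(F(X[perm]), F(X)[perm], atol=tol):
            return False
    return True

print(is_equivariant(lambda X: X + X.sum(axis=0, keepdims=True)))  # True
print(is_equivariant(lambda X: np.cumsum(X, axis=0)))              # False
```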
Power-sum multi-symmetric polynomials. Given a vector z = (z_1, . . . , z_n) ∈ R^n, the power-sum symmetric polynomials s_j(z) = ∑_{i=1}^n z_i^j, with j ∈ [n], uniquely characterize z up to permuting its entries. In other words, for z, y ∈ R^n we have y = σ · z for some σ ∈ S_n if and only if s_j(y) = s_j(z) for all j ∈ [n]. An equivalent property is that every S_n invariant polynomial p can be expressed as a polynomial in the power-sum symmetric polynomials, i.e., p(z) = q(s_1(z), . . . , s_n(z)); see Rydh (2007) Corollary 8.4, Briand (2004) Theorem 3. This fact was previously used in Zaheer et al. (2017) to prove that DeepSets is universal for invariant functions. We extend this result to equivariant functions and the multi-feature (sets of vectors) case.
For a vector x ∈ R^k and a multi-index vector α = (α_1, . . . , α_k) ∈ N^k we define x^α = x_1^{α_1} · · · x_k^{α_k}, and |α| = ∑_{i∈[k]} α_i. A generalization of the power-sum symmetric polynomials to matrices exists and is called the power-sum multi-symmetric polynomials, defined (with a slight abuse of notation) as s_α(X) = ∑_{i=1}^n x_i^α, where α ∈ N^k is a multi-index satisfying |α| ≤ n. Note that the number of power-sum multi-symmetric polynomials acting on X ∈ R^{n×k} is t = (n+k choose k). For notational simplicity let α_1, . . . , α_t be a list of all α ∈ N^k with |α| ≤ n. Then we index the collection of power-sum multi-symmetric polynomials as s_1, . . . , s_t.
Similarly to the vector case, the numbers s_j(X), j ∈ [t], characterize X up to permutation of its rows. That is, Y = σ · X for some σ ∈ S_n iff s_j(Y) = s_j(X) for all j ∈ [t]. Furthermore, every S_n invariant polynomial p : R^{n×k} → R can be expressed as a polynomial in the power-sum multi-symmetric polynomials (see (Rydh, 2007) Corollary 8.4), i.e.,
p(X) = q(s_1(X), . . . , s_t(X)).    (2)
These polynomials were recently used to encode multi-sets in Maron et al. (2019).
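For concreteness, a small NumPy sketch (ours) that enumerates the multi-indices with |α| ≤ n and evaluates the power-sum multi-symmetric polynomials, checking their invariance to row permutations:

```python
from itertools import product
import numpy as np

def multi_indices(k, n):
    # All alpha in N^k with |alpha| <= n; their count is (n+k choose k).
    return [a for a in product(range(n + 1), repeat=k) if sum(a) <= n]

def power_sums(X):
    # s_alpha(X) = sum_i prod_j X[i, j] ** alpha[j], for every multi-index alpha.
    n, k = X.shape
    return {a: float(np.sum(np.prod(X ** np.array(a), axis=1))) for a in multi_indices(k, n)}

X = np.random.rand(5, 2)
s, s_perm = power_sums(X), power_sums(X[np.random.permutation(5)])
print(all(np.isclose(s[a], s_perm[a]) for a in s))  # True: invariant to row permutations
```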
3 EQUIVARIANT MULTI-SYMMETRIC POLYNOMIAL LAYERS
In this section we develop the main theoretical tool of this paper, namely, a characterization of all permutation equivariant polynomial layers. As far as we know, these layers were not fully characterized before.
Theorem 2 provides an explicit representation of arbitrary permutation equivariant polynomial maps P : R^{n×k} → R^{n×l} using the basis of power-sum multi-symmetric polynomials, s_i(X). The particular use of the power-sum polynomials s_i(X) has the advantage that they can be encoded efficiently using a neural network: as we will show, s_i(X) can be approximated using a PointNet with a single linear transmission layer. This allows approximating an arbitrary equivariant polynomial map using a PointNet with a single linear transmission layer.
A version of this theorem for vectors instead of matrices (i.e., the case of k = 1) appears as Proposition 2.27 in Golubitsky & Stewart (2002); we extend their proof to matrices, which is the relevant scenario for ML applications as it allows working with sets of vectors. For k = 1, Theorem 2 reduces to the following form: p(x)_i = ∑_{a≤n} p_a(s_1(x), . . . , s_n(x)) x_i^a with s_j(x) = ∑_i x_i^j. For matrices, the monomial x_i^a is replaced by x_i^α for a multi-index α and the power-sum symmetric polynomials are replaced by the power-sum multi-symmetric polynomials.
First, note that it is enough to prove Theorem 2 for l = 1 and apply it to every column of P. Hence, we deal with a vector of polynomials p : R^{n×k} → R^n and need to prove it can be expressed as p = ∑_{|α|≤n} b_α q_α, for S_n invariant polynomials q_α.
Given a polynomial p(X) and the cyclic permutation σ^{-1} = (1 2 3 · · · n), the following operation, taking a polynomial to a vector of polynomials, is useful in characterizing equivariant polynomial maps:
⌈p⌉(X) = ( p(X), p(σ · X), p(σ^2 · X), . . . , p(σ^{n-1} · X) )^T.    (3)
Theorem 2 will be proved using the following two lemmas:
Lemma 1. Let p : R^{n×k} → R^n be an equivariant polynomial map. Then, there exists a polynomial p : R^{n×k} → R, invariant to S_{n-1} (permuting the last n−1 rows of X), so that p = ⌈p⌉.
Proof. Equivariance of p means that for all σ ∈ S_n it holds that
σ · p(X) = p(σ · X).    (4)
Choosing an arbitrary permutation σ ∈ stab(1) < S_n, namely a permutation satisfying σ(1) = 1, and observing the first row in Equation 4 we get p_1(X) = p_1(σ · X) = p_1(x_1, x_{σ^{-1}(2)}, . . . , x_{σ^{-1}(n)}). Since this is true for all σ ∈ stab(1), p_1 is S_{n-1} invariant. Next, applying σ = (1 i) to Equation 4 and observing the first row again we get p_i(X) = p_1(x_i, . . . , x_1, . . .). Using the invariance of p_1 to S_{n-1} we get p = ⌈p_1⌉.
Lemma 2. Let p : R^{n×k} → R be a polynomial invariant to S_{n-1} (permuting the last n−1 rows of X). Then
p(X) = ∑_{|α|≤n} x_1^α q_α(X),    (5)
where the q_α are S_n invariant.
Proof. Expanding p with respect to x_1 we get
p(X) = ∑_{|α|≤m} x_1^α p_α(x_2, . . . , x_n),    (6)
for some m ∈ N. We first claim the p_α are S_{n-1} invariant. Indeed, note that if p(X) = p(x_1, x_2, . . . , x_n) is S_{n-1} invariant, i.e., invariant to permutations of x_2, . . . , x_n, then so are its derivatives ∂^{|β|}/∂x_1^β p(X), for all β ∈ N^k. Taking the derivative ∂^{|β|}/∂x_1^β on both sides of Equation 6 we get that p_β is S_{n-1} invariant.
For brevity denote p = p_α. Since p is S_{n-1} invariant it can be expressed as a polynomial in the power-sum multi-symmetric polynomials, i.e., p(x_2, . . . , x_n) = r(s_1(x_2, . . . , x_n), . . . , s_t(x_2, . . . , x_n)). Note that s_i(x_2, . . . , x_n) = s_i(X) − x_1^{α_i} and therefore
p(x_2, . . . , x_n) = r(s_1(X) − x_1^{α_1}, . . . , s_t(X) − x_1^{α_t}).
Since r is a polynomial, expanding its monomials in s_i(X) and x_1^α shows p can be expressed as p = ∑_{|α|≤m'} x_1^α p̃_α, where m' ∈ N and the p̃_α are S_n invariant (as products of the S_n invariant polynomials s_i(X)). Plugging this into Equation 6 we get Equation 5, possibly with the sum over some n' > n. It remains to show that n' can be taken to be at most n; this is proved in Corollary 5 of Briand (2004).
Proof. (Theorem 2) Given an equivariant p as above, use Lemma 1 to write p = ⌈p⌉ where p(X) is invariant to permuting the last n−1 rows of X. Use Lemma 2 to write p(X) = ∑_{|α|≤n} x_1^α q_α(X), where the q_α are S_n invariant. We get
p = ⌈p⌉ = ∑_{|α|≤n} b_α q_α.
The converse direction is immediate after noting that the b_α are equivariant and the q_α are invariant.
4 UNIVERSALITY OF SET EQUIVARIANT NEURAL NETWORKS
We consider equivariant deep neural networks F : R^{n×k_in} → R^{n×k_out},
F(X) = L_m ∘ ν ∘ · · · ∘ ν ∘ L_1(X),    (7)
where Li : Rn×ki → Rn×ki+1 are affine equivariant transformations, and ν is an entry-wise nonlinearity (e.g., ReLU). We define the width of the network to be ω = maxi ki; note that this definition is different from the one used for standard MLP where the width would be nω, see e.g., Hanin & Sellke (2017). Zaheer et al. (2017) proved that affine equivariant Li are of the form
L_i(X) = X A + (1/n) 1 1^T X B + 1 c^T,    (8)
where A, B ∈ R^{k_i×k_{i+1}} and c ∈ R^{k_{i+1}} are the layer's trainable parameters; we call the linear transformation X ↦ (1/n) 1 1^T X B a linear transmission layer.
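A minimal PyTorch sketch of this layer (our illustration; the authors' implementation may differ in details such as how the bias and pooling are realized):

```python
import torch
import torch.nn as nn

class EquivariantLinear(nn.Module):
    # L(X) = X A + (1/n) 1 1^T X B + 1 c^T; transmission=False sets B = 0,
    # which recovers a plain PointNet layer.
    def __init__(self, k_in, k_out, transmission=True):
        super().__init__()
        self.A = nn.Linear(k_in, k_out, bias=True)                    # X A + 1 c^T
        self.B = nn.Linear(k_in, k_out, bias=False) if transmission else None

    def forward(self, X):                                             # X: (batch, n, k_in)
        out = self.A(X)
        if self.B is not None:
            out = out + self.B(X.mean(dim=1, keepdim=True))           # (1/n) 1 1^T X B
        return out
```

Stacking such layers with ReLU in between and enabling the transmission term in every layer gives DeepSets; enabling it in exactly one layer gives PointNetST.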
We now define the equivariant models considered in this paper: the DeepSets (Zaheer et al., 2017) architecture is Equation 7 with the choice of layers as in Equation 8. Taking B = 0 in all layers gives the PointNet architecture (Qi et al., 2017). PointNetST is an equivariant model of the form Equation 7 with layers as in Equation 8 where only a single layer L_i has a non-zero B. The PointNetSeg (Qi et al., 2017) architecture is PointNet composed with an invariant max layer, namely max(F(X))_j = max_{i∈[n]} F(X)_{i,j}, then concatenating it with the input X, i.e., [X, 1 max(F(X))], and feeding it as input to another PointNet G, that is, G([X, 1 max(F(X))]).
We will prove PointNetST is permutation equivariant universal and therefore arguably the simplest permutation equivariant universal model known to date.
Universality of equivariant deep networks is defined next.
Definition 2. Permutation equivariant universality (or just equivariant universality, in short) of a model F : R^{n×k_in} → R^{n×k_out} means that for every permutation equivariant continuous function H : R^{n×k_in} → R^{n×k_out} defined over the cube K = [0, 1]^{n×k_in} ⊂ R^{n×k_in}, and every ε > 0, there exists a choice of m (i.e., network depth), k_i (i.e., network width) and the trainable parameters of F so that ‖H(X) − F(X)‖_∞ < ε for all X ∈ K.
Proof. (Theorem 1) Fact (i), namely that PointNet is not equivariant universal is a consequence of the following simple lemma:
Lemma 3. Let h = (h_1, . . . , h_n)^T : R^n → R^n be the equivariant linear function defined by h(x) = 1 1^T x. There is no f : R → R so that |h_i(x) − f(x_i)| < 1/2 for all i ∈ [n] and x ∈ [0, 1]^n.
Proof. Assume such f exists. Let e1 = (1, 0, . . . , 0)T ∈ Rn. Then,
1 = |h2(e1)− h2(0)| ≤ |h2(e1)− f(0)|+ |f(0)− h2(0)| < 1
reaching a contradiction.
To prove (ii) we first reduce the problem from the class of all continuous equivariant functions to the class of equivariant polynomials. This is justified by the following lemma.
Lemma 4. Equivariant polynomials P : Rn×kin → Rn×kout are dense in the space of continuous equivariant functions F : Rn×kin → Rn×kout over the cube K.
Proof. Take an arbitrary ε > 0. Consider the function f_{ij} : R^{n×k_in} → R, which denotes the (i, j)-th output entry of F. By the Stone-Weierstrass Theorem there exists a polynomial p_{ij} : R^{n×k_in} → R such that ‖f_{ij}(X) − p_{ij}(X)‖_∞ ≤ ε for all X ∈ K. Consider the polynomial map P : R^{n×k_in} → R^{n×k_out} defined by (P)_{ij} = p_{ij}. P is in general not equivariant. To finish the proof we symmetrize P:
‖F(X) − (1/n!) ∑_{σ∈S_n} σ · P(σ^{-1} · X)‖_∞
  = ‖(1/n!) ∑_{σ∈S_n} σ · F(σ^{-1} · X) − (1/n!) ∑_{σ∈S_n} σ · P(σ^{-1} · X)‖_∞
  = ‖(1/n!) ∑_{σ∈S_n} σ · (F(σ^{-1} · X) − P(σ^{-1} · X))‖_∞
  ≤ (1/n!) ∑_{σ∈S_n} ε = ε,
where in the first equality we used the fact that F is equivariant. This concludes the proof since ∑_{σ∈S_n} σ · P(σ^{-1} · X) is an equivariant polynomial map.
Now, according to Theorem 2, an arbitrary equivariant polynomial P : R^{n×k_in} → R^{n×k_out} can be written as P = ∑_{|α|≤n} b_α(X) q_α(X)^T, where b_α(X) = ⌈x_1^α⌉ ∈ R^n and q_α = (q_{α,1}, . . . , q_{α,k_out}) ∈ R^{k_out} are invariant polynomials. Recall that every S_n invariant polynomial can be expressed as a polynomial in the t = (n+k_in choose k_in) power-sum multi-symmetric polynomials s_j(X) = (1/n) ∑_{i=1}^n x_i^{α_j}, j ∈ [t] (we use the 1/n-normalized version for a bit more simplicity later on). We can therefore write P as a composition of three maps:
P = Q ◦L ◦B, (9)
where B : R^{n×k_in} → R^{n×t} is defined by
B(X) = (b(x_1), . . . , b(x_n))^T,    b(x) = (x^{α_1}, . . . , x^{α_t});
L is defined as in Equation 8 with B = [0, I] and A = [e_1, . . . , e_{k_in}, 0], where I ∈ R^{t×t} is the identity matrix and e_i ∈ R^t are the standard basis vectors (as usual). We assume α_j = e_j ∈ R^{k_in} for j ∈ [k_in]. Note that the output of L is of the form
L(B(X)) = (X, 1 s_1(X), 1 s_2(X), . . . , 1 s_t(X)).
Finally, Q : R^{n×(k_in+t)} → R^{n×k_out} is defined by
Q(X, 1 s_1, . . . , 1 s_t) = (q(x_1, s_1, . . . , s_t), . . . , q(x_n, s_1, . . . , s_t))^T,  with  q(x, s_1, . . . , s_t) = ∑_{|α|≤n} x^α q_α(s_1, . . . , s_t)^T.
The decomposition in Equation 9 of P suggests that replacing Q,B with Multi-Layer Perceptrons (MLPs) would lead to a universal permutation equivariant network consisting of PointNet with a single linear transmission layer, namely PointNetST.
The F approximating P will be defined as
F = Ψ ◦L ◦Φ, (10)
where Φ : R^{n×k_in} → R^{n×t} and Ψ : R^{n×(t+k_in)} → R^{n×k_out} are both of PointNet architecture, namely there exist MLPs φ : R^{k_in} → R^t and ψ : R^{t+k_in} → R^{k_out} so that Φ(X) = (φ(x_1), . . . , φ(x_n))^T and Ψ(X) = (ψ(x_1), . . . , ψ(x_n))^T. See Figure 1 for an illustration of F.
To build the MLPs φ, ψ we first construct ψ to approximate q. That is, we use the universality of MLPs (see (Hornik, 1991; Sonoda & Murata, 2017; Hanin & Sellke, 2017)) to construct ψ so that ‖ψ(x, s_1, . . . , s_t) − q(x, s_1, . . . , s_t)‖_∞ < ε/2 for all (x, s_1, . . . , s_t) ∈ [0, 1]^{k_in+t}. Furthermore, as ψ over [0, 1]^{k_in+t} is uniformly continuous, let δ be such that if z, z' ∈ [0, 1]^{k_in+t} and ‖z − z'‖_∞ < δ then ‖ψ(z) − ψ(z')‖_∞ < ε/2. Now, we use universality again to construct φ approximating b, that is we take φ so that ‖φ(x) − b(x)‖_∞ < δ for all x ∈ [0, 1]^{k_in}.
‖F(X) − P(X)‖_∞ ≤ ‖Ψ(L(Φ(X))) − Ψ(L(B(X)))‖_∞ + ‖Ψ(L(B(X))) − Q(L(B(X)))‖_∞ = err_1 + err_2.
First, ‖L(Φ(X)) − L(B(X))‖_∞ < δ for all X ∈ K and therefore err_1 < ε/2. Second, note that if X ∈ K then B(X) ∈ [0, 1]^{n×t} and L(B(X)) ∈ [0, 1]^{n×(k_in+t)}. Therefore by construction of ψ we have err_2 < ε/2, so that ‖F(X) − P(X)‖_∞ < ε.
To prove (iii) we use the result in Hanin & Sellke (2017) (see Theorem 1 there) bounding the width of an MLP approximating a function f : [0, 1]^{d_in} → R^{d_out} by d_in + d_out. Therefore, the width of the MLP φ is bounded by k_in + t, while the width of the MLP ψ is bounded by t + k_in + k_out, proving the bound.
We can now prove Cororllary 1.
Proof. (Corollary 1)
The fact that the DeepSets model is equivariant universal is immediate. Indeed, the PointNetST model can be obtained from the DeepSets model by setting B = 0 in all but one layer, with B as in Equation 8.
For the PointNetSeg model, note that by Theorem 1 in Qi et al. (2017) every invariant function f : R^{n×k_in} → R^t can be approximated by a network of the form Ψ(max(F(X))), where F(X) is a PointNet model and Ψ is an MLP. In particular, for every ε > 0 there exist such F, Ψ for which ‖Ψ(max(F(X))) − (s_1(X), . . . , s_t(X))‖_∞ < ε for every X ∈ [0, 1]^{n×k_in}, where s_1(X), . . . , s_t(X) are the power-sum multi-symmetric polynomials. It follows that we can use PointNetSeg to approximate 1(s_1(X), . . . , s_t(X)). The rest of the proof closely resembles the proof of Theorem 1.
Graph neural networks with constructed adjacency. One approach sometimes applied to learning from sets of vectors is to define an adjacency matrix (e.g., by thresholding distances of node feature vectors) and apply a graph neural network to the resulting graph (e.g., Wang et al. (2019), Li et al. (2019)). Using the common message passing paradigm (Gilmer et al., 2017), a layer in this case boils down to the form L(X)_i = ψ(x_i, ∑_{j∈N_i} φ(x_i, x_j)) = ψ(x_i, ∑_{j∈[n]} N(x_i, x_j) φ(x_i, x_j)), where φ, ψ are MLPs, N_i is the index set of neighbors of node i, and N(x_i, x_j) is the indicator function of the edge (i, j). If N can be approximated by a continuous function, which is the case at least in the L2 sense for a finite set of vectors, then since L is also equivariant it follows from Theorem 1 that such a network can be approximated (again, at least in L2 norm) arbitrarily well by any universal equivariant network such as PointNetST or DeepSets.
We tested the ability of a DeepSets model with varying depth and width to approximate a single graph convolution layer. We found that a DeepSets model with a small number of layers can approximate a graph convolution layer reasonably well. For details see Appendix A.
5 EXPERIMENTS
We conducted experiments in order to validate our theoretical observations. We compared the results of several equivariant models, as well as baseline (full) MLP, on three equivariant learning tasks: a classification task (knapsack) and two regression tasks (squared norm and Fiedler eigen vector). For all tasks we compare results of 7 different models: DeepSets, PointNet, PointNetSeg, PointNetST, PointNetQT and GraphNet. PointNetQT is PointNet with a single quadratic equivariant transmission layer as defined in Appendix B. GraphNet is similar to the graph convolution network in Kipf & Welling (2016) and is defined explicitly in Appendix B. We generated the adjacency matrices for GraphNet by taking 10 nearest neighbors of each set element. In all experiments we used a network of the form Equation 7 with m = 6 depth and varying width, fixed across all layers. 2
Equivariant classification. For classification, we chose to learn the multidimensional knapsack problem, which is known to be NP-hard. We are given a set of 4-vectors, represented byX ∈ Rn×4, and our goal is to learn the equivariant classification function f : Rn×4 → {0, 1}n defined by the following optimization problem:
f(X) = argmax_z ∑_{i=1}^n x_{i1} z_i
s.t. ∑_{i=1}^n x_{ij} z_i ≤ w_j, j = 2, 3, 4
z_i ∈ {0, 1}, i ∈ [n]
Intuitively, given a set of vectorsX ∈ Rn×4, (X)ij = xij , where each row represents an element in a set, our goal is to find a subset maximizing the value while satisfying budget constraints. The first column ofX defines the value of each element, and the three other columns the costs.
To evaluate the success of a trained model we record the percentage of sets for which the predicted subset is such that all the budget constrains are satisfied and the value is within 10% of the optimal value. In Appendix C we detail how we generated this dataset.
Equivariant regression. The first equivariant function we considered for regression is the function f(X) = 1 ∑_{i=1}^n ∑_{j=1}^k (X_{i,j} − 1/2)^2. Hanin & Sellke (2017) showed this function cannot be approximated by MLPs of small width. We drew 10k training examples and 1k test examples i.i.d. from a N(1/2, 1) distribution (per entry of X).
Footnote 2: The code can be found at https://github.com/NimrodSegol/On-Universal-Equivariant-Set-Networks
[Figure 2 panels: Knapsack test; Fiedler test; ∑_{x∈X}(x − 1/2)^2 test]
The second equivariant function we considered is defined on point clouds X ∈ R^{n×3}. For each point cloud we computed a graph by connecting every point to its 10 nearest neighbors. We then computed the absolute value of the first non-trivial eigenvector of the graph Laplacian. We used the ModelNet dataset (Wu et al., 2015), which contains ∼9k training meshes and ∼2k test meshes. The point clouds are generated by randomly sampling 512 points from each mesh.
Result summary. Figure 2 summarizes train and test accuracy of the 6 models after training (training details in Appendix C) as a function of the network width ω. We tested 15 values of ω equidistant in [5, n k_in / 2].
As can be seen in the graphs, in all three datasets the equivariant universal models (PointNetST, PointNetQT , DeepSets, PointNetSeg) achieved comparable accuracy. PointNet, which is not equivariant universal, consistently achieved inferior performance compared to the universal models, as expected by the theory developed in this paper. The non-equivariant MLP, although universal, used the same width (i.e., same number of parameters) as the equivariant models and was able to over-fit only on one train set (the quadratic function); its performance on the test sets was inferior by a large margin to the equivariant models. We also note that in general the GraphNet model achieved comparable results to the equivariant universal models but was still outperformed by the DeepSets model.
An interesting point is that although the width used in the experiments is much smaller than the bound k_out + k_in + (n+k_in choose k_in) established by Theorem 1, the universal models are still able to learn the functions we tested on well. This raises the question of the tightness of this bound, which we leave to future work.
6 CONCLUSIONS
In this paper we analyze several set equivariant neural networks and compare their approximation power. We show that while vanilla PointNet (Qi et al., 2017) is not equivariant universal, adding a single linear transmission layer makes it equivariant universal. Our proof strategy is based on a characterization of polynomial equivariant functions. As a corollary we show that the DeepSets model
(Zaheer et al., 2017) and PointNetSeg (Qi et al., 2017) are equivariant universal. Experimentally, we tested the different models on several classification and regression tasks finding that adding a single linear transmitting layer to PointNet makes a significant positive impact on performance.
7 ACKNOWLEDGEMENTS
This research was supported in part by the European Research Council (ERC Consolidator Grant, LiftMatch 771136) and the Israel Science Foundation (Grant No. 1830/17).
A APPROXIMATING GRAPH CONVOLUTION LAYER WITH DEEPSETS
To test the ability of an equivariant universal model to approximate a graph convolution layer, we conducted an experiment where we applied a single graph convolution layer (see Appendix B for a full description of the graph convolution layers used in this paper) with 3 input features and 10 output features. We constructed a knn graph by taking 10 neighbors. We sampled 1000 examples in R^{100×3} i.i.d. from a N(1/2, 1) distribution (per entry of X). The results are summarized in Figure 3. We regressed to the output of a graph convolution layer using the smooth L1 loss.
B DESCRIPTION OF LAYERS
B.1 QUADRATIC LAYER
One potential application of Theorem 2 is augmenting an equivariant neural network (Equation 7) with equivariant polynomial layers P : R^{n×k} → R^{n×l} of some maximal degree d. This can be done in the following way: look for all solutions α, β_1, β_2, . . . ∈ N^k so that |α| + ∑_i |β_i| ≤ d. Any such solution gives a basis element of the form p(X) = ⌈x_1^α⌉ ∏_j (∑_{i=1}^n x_i^{β_j}).
In the paper we tested PointNetQT, an architecture that adds to PointNet a single quadratic equivariant layer. We opted to use only the quadratic transmission operators: For a matrixX ∈ Rn×k we define L(X) ∈ Rn×k as follows:
L(X) = X W_1 + 1 1^T X W_2 + ((1 1^T X) ⊙ (1 1^T X)) W_3 + (X ⊙ X) W_4 + ((1 1^T X) ⊙ X) W_5,
where ⊙ is point-wise multiplication and W_i ∈ R^{k×k}, i ∈ [5], are the learnable parameters.
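A rough PyTorch sketch of such a quadratic transmission layer (our own reading of the formula above; initialization and feature sizes are illustrative):

```python
import torch
import torch.nn as nn

class QuadraticTransmission(nn.Module):
    # L(X) = X W1 + S W2 + (S*S) W3 + (X*X) W4 + (S*X) W5, with S = 1 1^T X.
    def __init__(self, k):
        super().__init__()
        self.W = nn.ParameterList([nn.Parameter(torch.randn(k, k) * 0.01) for _ in range(5)])

    def forward(self, X):                               # X: (batch, n, k)
        S = X.sum(dim=1, keepdim=True).expand_as(X)     # 1 1^T X, broadcast over rows
        return (X @ self.W[0] + S @ self.W[1] + (S * S) @ self.W[2]
                + (X * X) @ self.W[3] + (S * X) @ self.W[4])
```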
B.2 GRAPH CONVOLUTION LAYER
We implement a graph convolution layer as follows:
L(X) = B X W_2 + X W_1 + 1 c^T,
with W_1, W_2, c learnable parameters. The matrix B is defined as in Kipf & Welling (2016) from the knn graph of the set: B = D^{-1/2} A D^{-1/2}, where D is the degree matrix of the graph and A is the adjacency matrix of the graph with added self-connections.
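For reference, a small NumPy sketch of the normalization step (assuming the self-connections are added to A before computing D, as in Kipf & Welling (2016)):

```python
import numpy as np

def normalized_adjacency(A):
    # A: (n, n) binary adjacency of the knn graph.
    A_hat = A + np.eye(A.shape[0])            # add self-connections
    d = A_hat.sum(axis=1)                     # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt    # D^{-1/2} (A + I) D^{-1/2}
```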
C IMPLEMENTATION DETAILS
Knapsack data generation. We constructed a dataset of 10k training examples and 1k test examples consisting of 50×4 matrices. We took w_1 = 100, w_2 = 80, w_3 = 50. To generate X ∈ R^{50×4}, we draw an integer uniformly at random between 1 and 100 and then randomly choose 50 integers in that range as the first column of X. We also randomly choose an integer between 1 and 25 and then randomly choose 150 integers in that range as the three last columns of X. The labels for each input X were computed by a standard dynamic programming approach, see Martello & Toth (1990).
Optimization. We implemented the experiments in Pytorch Paszke et al. (2017) with the Adam Kingma & Ba (2014) optimizer for learning. For the classification we used the cross entropy loss and trained for 150 epochs with learning rate 0.001, learning rate decay of 0.5 every 100 epochs and batch size 32. For the quadratic function regression we trained for 150 epochs with leaning rate of 0.001, learning rate decay 0.1 every 50 epochs and batch size 64; for the regression to the leading eigen vector we trained for 50 epochs with leaning rate of 0.001 and batch size 32. To regress to the output of a single graph convolution layer we trained for 200 epochs with leaning rate of 0.001 and batch size 32. | 1. What is the focus of the paper regarding set architecture and equivariance?
2. What are the strengths of the proposed approach, particularly in terms of theory and experiments?
3. What are the weaknesses of the paper regarding its clarity and reader friendliness?
4. How does the reviewer assess the significance of the results and their presentation in the paper?
5. Do you have any questions or notes for the authors regarding the content and organization of the paper? | Review | Review
*CAVEAT*
I must caveat that this paper is out of my comfort zone in terms of topic, so my review below should only be taken lightly. It also explains the brevity of my review. My apologies to the authors and other reviewers.
*Paper summary*
The authors design a set architecture which is equivariant to permutations of the input. They show the simplest such set architecture which preserves equivariance while being a universal approximator. Nicely, this architecture relies on a correction to PointNet (which they show is not equivariant universal), called PointNetST. Furthermore, they run experiments on a few toy examples demonstrating that their system performs well.
*Paper decision*
I have decided to give this paper a weak accept, since it contains both theory and nice experiments. To change to a firm accept, I think the paper mainly needs some changes in writing style, to make it friendlier to newcomers to the area; these can easily be implemented in the camera-ready stage of paper preparation. For instance, the omission of a results discussion section or a conclusion is clearly not reader friendly.
*Supporting arguments*
- The paper is written clearly. This said, it requires a great deal of effort to follow the maths if you are not already fluent in a lot of the ideas used in the paper (this includes myself).
- I think the structure of the paper is fine for this sort of work. Perhaps at the beginning it would be more useful to spend more time on a roadmap of the results presented in the paper and to explain the exact significance of why the reader should want to continue reading.
- I think the selection of experiments is nice, containing both regression and classification. What would have been nicer would be to perform some sort of ablation study, where the authors studied how the representational capacity of the network changed as a result of them introducing the universal linear transmission layer.
- A direct theoretical and experimental comparison between PointNet and PointNetST would have been useful for me to understand the impact of the change that the authors introduce.
*Questions/notes for the authors*
- Please answer my concerns in the supporting arguments
- Where is the conclusion section?
ICLR | Title
On Universal Equivariant Set Networks
Abstract
Using deep neural networks that are either invariant or equivariant to permutations in order to learn functions on unordered sets has become prevalent. The most popular, basic models are DeepSets (Zaheer et al., 2017) and PointNet (Qi et al., 2017). While known to be universal for approximating invariant functions, DeepSets and PointNet are not known to be universal when approximating equivariant set functions. On the other hand, several recent equivariant set architectures have been proven equivariant universal (Sannai et al., 2019; Keriven & Peyré, 2019), however these models either use layers that are not permutation equivariant (in the standard sense) and/or use higher order tensor variables which are less practical. There is, therefore, a gap in understanding the universality of popular equivariant set models versus theoretical ones. In this paper we close this gap by proving that: (i) PointNet is not equivariant universal; and (ii) adding a single linear transmission layer makes PointNet universal. We call this architecture PointNetST and argue it is the simplest permutation equivariant universal model known to date. Another consequence is that DeepSets is universal, and also PointNetSeg, a popular point cloud segmentation network (used e.g., in (Qi et al., 2017)) is universal. The key theoretical tool used to prove the above results is an explicit characterization of all permutation equivariant polynomial layers. Lastly, we provide numerical experiments validating the theoretical results and comparing different permutation equivariant models.
1 INTRODUCTION
Many interesting tasks in machine learning can be described by functions F that take as input a set, X = (x1, . . . ,xn), and output some per-element features or values, F (X) = (F (X)1, . . . ,F (X)n). Permutation equivariance is the property required of F so it is welldefined. Namely, it assures that reshuffling the elements in X and applying F results in the same output, reshuffled in the same manner. For example, if X̃ = (x2,x1,x3, . . . ,xn) then F (X̃) = (F (X)2,F (X)1,F (X)3, . . . ,F (X)n).
Building neural networks that are permutation equivariant by construction proved extremely useful in practice. Arguably the most popular models are DeepSets Zaheer et al. (2017) and PointNet Qi et al. (2017). These models enjoy small number of parameters, low memory footprint and computational efficiency along with high empirical expressiveness. Although both DeepSets and PointNet are known to be invariant universal (i.e., can approximate arbitrary invariant continuous functions) they are not known to be equivariant universal (i.e., can approximate arbitrary equivariant continuous functions).
On the other hand, several researchers have suggested theoretical permutation equivariant models and proved they are equivariant universal. Sannai et al. (2019) builds a universal equivariant network by taking n copies of (n− 1)-invariant networks and combines them with a layer that is not permutation invariant in the standard (above mentioned) sense. Keriven & Peyré (2019) solves a more general problem of building networks that are equivariant universal over arbitrary high order input tensors Rnd (including graphs); their construction, however, uses higher order tensors as hidden variables
which is of less practical value. Yarotsky (2018) proves that neural networks constructed using a finite set of invariant and equivariant polynomial layers are also equivariant universal, however his network is not explicit (i.e., the polynomials are not characterized for the equivariant case) and also of less practical interest due to the high degree polynomial layers.
In this paper we close the gap between the practical and theoretical permutation equivariant constructions and prove: Theorem 1.
(i) PointNet is not equivariant universal.
(ii) Adding a single linear transmission layer (i.e., $X \mapsto \mathbf{1}\mathbf{1}^T X$) to PointNet makes it equivariant universal.
(iii) Using ReLU activations, the minimal width required for a universal permutation equivariant network satisfies $\omega \leq k_{out} + k_{in} + \binom{n+k_{in}}{k_{in}}$.
This theorem suggests that, arguably, PointNet with the addition of a single linear layer is the simplest universal equivariant network, able to learn arbitrary continuous equivariant functions of sets. An immediate corollary of this theorem is: Corollary 1. DeepSets and PointNetSeg are universal.
PointNetSeg is a network used often for point cloud segmentation (e.g., in Qi et al. (2017)). One of the benefits of our result is that it provides a simple characterization of universal equivariant architectures that can be used in the network design process to guarantee universality.
The theoretical tool used for the proof of Theorem 1 is an explicit characterization of the permutation equivariant polynomials over sets of vectors in $\mathbb{R}^k$ using power-sum multi-symmetric polynomials. We prove:
Theorem 2. Let $P : \mathbb{R}^{n\times k} \to \mathbb{R}^{n\times l}$ be a permutation equivariant polynomial map. Then,
$$P(X) = \sum_{|\alpha| \leq n} b_\alpha q_\alpha^T, \qquad (1)$$
where $b_\alpha = (x_1^\alpha, \ldots, x_n^\alpha)^T$, $q_\alpha = (q_{\alpha,1}, \ldots, q_{\alpha,l})^T$, where $q_{\alpha,j} = q_{\alpha,j}(s_1, \ldots, s_t)$, $t = \binom{n+k}{k}$, are polynomials, and $s_j(X) = \sum_{i=1}^n x_i^{\alpha_j}$ are the power-sum multi-symmetric polynomials. On the other hand, every polynomial map $P$ satisfying Equation 1 is equivariant.
This theorem, which extends Proposition 2.27 in Golubitsky & Stewart (2002) to sets of vectors using multivariate polynomials, lends itself to expressing arbitrary equivariant polynomials as a composition of entry-wise continuous functions and a single linear transmission, which in turn facilitates the proof of Theorem 1.
We conclude the paper by numerical experiments validating the theoretical results and testing several permutation equivariant networks for the tasks of set classification and regression.
2 PRELIMINARIES
Equivariant maps. Vectors $x \in \mathbb{R}^k$ are by default column vectors; $0, \mathbf{1}$ are the all-zero and all-one vectors/tensors; $e_i$ is the $i$-th standard basis vector; $I$ is the identity matrix; all dimensions are inferred from context or mentioned explicitly. We represent a set of $n$ vectors in $\mathbb{R}^k$ as a matrix $X \in \mathbb{R}^{n\times k}$ and denote $X = (x_1, x_2, \ldots, x_n)^T$, where $x_i \in \mathbb{R}^k$, $i \in [n]$, are the rows of $X$. We denote by $S_n$ the permutation group of $[n]$; its action on $X$ is defined by $\sigma \cdot X = (x_{\sigma^{-1}(1)}, x_{\sigma^{-1}(2)}, \ldots, x_{\sigma^{-1}(n)})^T$, $\sigma \in S_n$. That is, $\sigma$ is reshuffling the rows of $X$. The natural class of maps assigning a value or feature vector to every element in an input set is permutation equivariant maps: Definition 1. A map $F : \mathbb{R}^{n\times k} \to \mathbb{R}^{n\times l}$ satisfying $F(\sigma \cdot X) = \sigma \cdot F(X)$ for all $\sigma \in S_n$ and $X \in \mathbb{R}^{n\times k}$ is called permutation equivariant.
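To make Definition 1 concrete, the following small NumPy check illustrates the equivariance property on a toy map; the particular map (each row transformed entry-wise and shifted by the column means) is ours, chosen only for illustration.

```python
import numpy as np

def F(X):
    # A simple permutation equivariant map: each output row depends on its own
    # input row and on the (permutation invariant) column means.
    return np.tanh(X) + X.mean(axis=0, keepdims=True)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
sigma = rng.permutation(5)                       # sigma . X  <->  reordering the rows
assert np.allclose(F(X[sigma]), F(X)[sigma])     # F(sigma . X) == sigma . F(X)
```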
Power-sum multi-symmetric polynomials. Given a vector $z = (z_1, \ldots, z_n) \in \mathbb{R}^n$, the power-sum symmetric polynomials $s_j(z) = \sum_{i=1}^n z_i^j$, with $j \in [n]$, uniquely characterize $z$ up to permuting its entries. In other words, for $z, y \in \mathbb{R}^n$ we have $y = \sigma \cdot z$ for some $\sigma \in S_n$ if and only if $s_j(y) = s_j(z)$ for all $j \in [n]$. An equivalent property is that every $S_n$ invariant polynomial $p$ can be expressed as a polynomial in the power-sum symmetric polynomials, i.e., $p(z) = q(s_1(z), \ldots, s_n(z))$; see Rydh (2007), Corollary 8.4, and Briand (2004), Theorem 3. This fact was previously used in Zaheer et al. (2017) to prove that DeepSets is universal for invariant functions. We extend this result to equivariant functions and the multi-feature (sets of vectors) case.
For a vector $x \in \mathbb{R}^k$ and a multi-index vector $\alpha = (\alpha_1, \ldots, \alpha_k) \in \mathbb{N}^k$ we define $x^\alpha = x_1^{\alpha_1} \cdots x_k^{\alpha_k}$, and $|\alpha| = \sum_{i\in[k]} \alpha_i$. A generalization of the power-sum symmetric polynomials to matrices exists and is called power-sum multi-symmetric polynomials, defined with a bit of notation abuse: $s_\alpha(X) = \sum_{i=1}^n x_i^\alpha$, where $\alpha \in \mathbb{N}^k$ is a multi-index satisfying $|\alpha| \leq n$. Note that the number of power-sum multi-symmetric polynomials acting on $X \in \mathbb{R}^{n\times k}$ is $t = \binom{n+k}{k}$. For notation simplicity let $\alpha_1, \ldots, \alpha_t$ be a list of all $\alpha \in \mathbb{N}^k$ with $|\alpha| \leq n$. Then we index the collection of power-sum multi-symmetric polynomials as $s_1, \ldots, s_t$.
Similarly to the vector case, the numbers $s_j(X)$, $j \in [t]$, characterize $X$ up to permutation of its rows. That is, $Y = \sigma \cdot X$ for some $\sigma \in S_n$ iff $s_j(Y) = s_j(X)$ for all $j \in [t]$. Furthermore, every $S_n$ invariant polynomial $p : \mathbb{R}^{n\times k} \to \mathbb{R}$ can be expressed as a polynomial in the power-sum multi-symmetric polynomials (see (Rydh, 2007), Corollary 8.4), i.e.,
$$p(X) = q(s_1(X), \ldots, s_t(X)). \qquad (2)$$
These polynomials were recently used to encode multi-sets in Maron et al. (2019).
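For illustration, the following NumPy sketch enumerates the multi-indices $\alpha$ with $|\alpha| \leq n$ and evaluates the corresponding power-sum multi-symmetric polynomials, then checks their invariance to reordering the set; the enumeration order of the $\alpha_j$ is arbitrary, as in the text.

```python
import numpy as np
from itertools import product

def multi_indices(k, n):
    """All alpha in N^k with |alpha| <= n; there are t = C(n+k, k) of them."""
    return [a for a in product(range(n + 1), repeat=k) if sum(a) <= n]

def power_sums(X):
    """s_alpha(X) = sum_i x_i^alpha for every multi-index alpha with |alpha| <= n."""
    n, k = X.shape
    return np.array([np.prod(X ** np.array(a), axis=1).sum() for a in multi_indices(k, n)])

rng = np.random.default_rng(2)
X = rng.uniform(size=(4, 2))                     # n = 4 set elements in R^2, t = C(6, 2) = 15
s = power_sums(X)
assert np.allclose(s, power_sums(X[rng.permutation(4)]))   # invariant to reordering the set
```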
3 EQUIVARIANT MULTI-SYMMETRIC POLYNOMIAL LAYERS
In this section we develop the main theoretical tool of this paper, namely, a characterization of all permutation equivariant polynomial layers. As far as we know, these layers were not fully characterized before.
Theorem 2 provides an explicit representation of arbitrary permutation equivariant polynomial maps P : Rn×k → Rn×l using the basis of power-sum multi-symmetric polynomials, si(X). The particular use of power-sum polynomials si(X) has the advantage it can be encoded efficiently using a neural network: as we will show si(X) can be approximated using a PointNet with a single linear transmission layer. This allows approximating an arbitrary equivariant polynomial map using PointNet with a single linear transmission layer.
A version of this theorem for vectors instead of matrices (i.e., the case of $k = 1$) appears as Proposition 2.27 in Golubitsky & Stewart (2002); we extend their proof to matrices, which is the relevant scenario for ML applications as it allows working with sets of vectors. For $k = 1$ Theorem 2 reduces to the following form: $p(x)_i = \sum_{a \leq n} p_a(s_1(x), \ldots, s_n(x))\, x_i^a$ with $s_j(x) = \sum_i x_i^j$. For matrices, the monomial $x_i^a$ is replaced by $x_i^\alpha$ for a multi-index $\alpha$, and the power-sum symmetric polynomials are replaced by the power-sum multi-symmetric polynomials.
First, note that it is enough to prove Theorem 2 for $l = 1$ and apply it to every column of $P$. Hence, we deal with a vector of polynomials $p : \mathbb{R}^{n\times k} \to \mathbb{R}^n$ and need to prove it can be expressed as $p = \sum_{|\alpha| \leq n} b_\alpha q_\alpha$, for $S_n$ invariant polynomials $q_\alpha$.
Given a polynomial $p(X)$ and the cyclic permutation $\sigma^{-1} = (1\,2\,3\cdots n)$, the following operation, taking a polynomial to a vector of polynomials, is useful in characterizing equivariant polynomial maps:
$$\lceil p \rceil(X) = \big(p(X),\; p(\sigma \cdot X),\; p(\sigma^2 \cdot X),\; \ldots,\; p(\sigma^{n-1} \cdot X)\big)^T. \qquad (3)$$
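A small numerical illustration of the $\lceil\cdot\rceil$ operation (Equation 3) follows, using the cyclic shift acting on the rows; the particular scalar polynomial is an arbitrary choice that also shows how the $b_\alpha$ of Theorem 2 arise.

```python
import numpy as np

def ceil_op(p, X):
    """Equation 3: the i-th entry is p applied to the i-th cyclic shift of the rows of X."""
    n = X.shape[0]
    return np.array([p(np.roll(X, -i, axis=0)) for i in range(n)])

# With a scalar polynomial depending only on the first row, e.g. p(X) = x_1^alpha,
# ceil_op produces the vector b_alpha = (x_1^alpha, ..., x_n^alpha)^T used in Theorem 2.
p = lambda X: X[0, 0] ** 2 * X[0, 1]          # x_1^(2,1) for k = 2
rng = np.random.default_rng(3)
X = rng.normal(size=(5, 2))
assert np.allclose(ceil_op(p, X), X[:, 0] ** 2 * X[:, 1])   # equals b_alpha component-wise
```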
Theorem 2 will be proved using the following two lemmas:
Lemma 1. Let $p : \mathbb{R}^{n\times k} \to \mathbb{R}^n$ be an equivariant polynomial map. Then there exists a polynomial $\bar{p} : \mathbb{R}^{n\times k} \to \mathbb{R}$, invariant to $S_{n-1}$ (permuting the last $n-1$ rows of $X$), so that $p = \lceil \bar{p} \rceil$.
Proof. Equivariance of $p$ means that for all $\sigma \in S_n$ it holds that
$$\sigma \cdot p(X) = p(\sigma \cdot X). \qquad (4)$$
Choosing an arbitrary permutation $\sigma \in \mathrm{stab}(1) < S_n$, namely a permutation satisfying $\sigma(1) = 1$, and observing the first row in Equation 4, we get $p_1(X) = p_1(\sigma \cdot X) = p_1(x_1, x_{\sigma^{-1}(2)}, \ldots, x_{\sigma^{-1}(n)})$. Since this is true for all $\sigma \in \mathrm{stab}(1)$, $p_1$ is $S_{n-1}$ invariant. Next, applying $\sigma = (1\,i)$ to Equation 4 and observing the first row again, we get $p_i(X) = p_1(x_i, \ldots, x_1, \ldots)$. Using the invariance of $p_1$ to $S_{n-1}$ we get $p = \lceil p_1 \rceil$, so $\bar{p} = p_1$ works.
Lemma 2. Let $p : \mathbb{R}^{n\times k} \to \mathbb{R}$ be a polynomial invariant to $S_{n-1}$ (permuting the last $n-1$ rows of $X$). Then
$$p(X) = \sum_{|\alpha| \leq n} x_1^{\alpha}\, q_\alpha(X), \qquad (5)$$
where the $q_\alpha$ are $S_n$ invariant.
Proof. Expanding $p$ with respect to $x_1$ we get
$$p(X) = \sum_{|\alpha| \leq m} x_1^{\alpha}\, p_\alpha(x_2, \ldots, x_n), \qquad (6)$$
for some $m \in \mathbb{N}$. We first claim the $p_\alpha$ are $S_{n-1}$ invariant. Indeed, note that if $p(X) = p(x_1, x_2, \ldots, x_n)$ is $S_{n-1}$ invariant, i.e., invariant to permutations of $x_2, \ldots, x_n$, then so are its derivatives $\frac{\partial^{|\beta|}}{\partial x_1^{\beta}} p(X)$, for all $\beta \in \mathbb{N}^k$. Taking the derivative $\partial^{|\beta|}/\partial x_1^{\beta}$ on both sides of Equation 6, we get that $p_\beta$ is $S_{n-1}$ invariant.
For brevity denote $p = p_\alpha$. Since $p$ is $S_{n-1}$ invariant it can be expressed as a polynomial in the power-sum multi-symmetric polynomials, i.e., $p(x_2, \ldots, x_n) = r(s_1(x_2, \ldots, x_n), \ldots, s_t(x_2, \ldots, x_n))$. Note that $s_i(x_2, \ldots, x_n) = s_i(X) - x_1^{\alpha_i}$ and therefore
$$p(x_2, \ldots, x_n) = r(s_1(X) - x_1^{\alpha_1}, \ldots, s_t(X) - x_1^{\alpha_t}).$$
Since $r$ is a polynomial, expanding its monomials in $s_i(X)$ and $x_1^{\alpha}$ shows $p$ can be expressed as $p = \sum_{|\alpha| \leq m'} x_1^{\alpha}\, \tilde{p}_\alpha$, where $m' \in \mathbb{N}$ and the $\tilde{p}_\alpha$ are $S_n$ invariant (as products of invariant $S_n$ polynomials $s_i(X)$). Plugging this into Equation 6 we get Equation 5, possibly with the sum over some $m' > n$. It remains to show the degree bound can be taken to be at most $n$; this is proved in Corollary 5 in Briand (2004).
Proof. (Theorem 2) Given an equivariant $p$ as above, use Lemma 1 to write $p = \lceil \bar{p} \rceil$ where $\bar{p}(X)$ is invariant to permuting the last $n-1$ rows of $X$. Use Lemma 2 to write $\bar{p}(X) = \sum_{|\alpha| \leq n} x_1^{\alpha} q_\alpha(X)$, where the $q_\alpha$ are $S_n$ invariant. We get
$$p = \lceil \bar{p} \rceil = \sum_{|\alpha| \leq n} b_\alpha q_\alpha.$$
The converse direction is immediate after noting that the $b_\alpha$ are equivariant and the $q_\alpha$ are invariant.
4 UNIVERSALITY OF SET EQUIVARIANT NEURAL NETWORKS
We consider equivariant deep neural networks $F : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}^{n\times k_{out}}$,
$$F(X) = L_m \circ \nu \circ \cdots \circ \nu \circ L_1(X), \qquad (7)$$
where $L_i : \mathbb{R}^{n\times k_i} \to \mathbb{R}^{n\times k_{i+1}}$ are affine equivariant transformations and $\nu$ is an entry-wise nonlinearity (e.g., ReLU). We define the width of the network to be $\omega = \max_i k_i$; note that this definition is different from the one used for a standard MLP, where the width would be $n\omega$, see e.g., Hanin & Sellke (2017). Zaheer et al. (2017) proved that affine equivariant $L_i$ are of the form
$$L_i(X) = XA + \tfrac{1}{n}\mathbf{1}\mathbf{1}^T X B + \mathbf{1}c^T, \qquad (8)$$
where $A, B \in \mathbb{R}^{k_i \times k_{i+1}}$ and $c \in \mathbb{R}^{k_{i+1}}$ are the layer's trainable parameters; we call the linear transformation $X \mapsto \tfrac{1}{n}\mathbf{1}\mathbf{1}^T X B$ a linear transmission layer.
We now define the equivariant models considered in this paper. The DeepSets (Zaheer et al., 2017) architecture is Equation 7 with the choice of layers as in Equation 8. Taking $B = 0$ in all layers gives the PointNet architecture (Qi et al., 2017). PointNetST is an equivariant model of the form Equation 7 with layers as in Equation 8 where only a single layer $L_i$ has a non-zero $B$. The PointNetSeg (Qi et al., 2017) architecture is PointNet composed with an invariant max layer, namely $\max(F(X))_j = \max_{i\in[n]} F(X)_{i,j}$, then concatenating it with the input $X$, i.e., $[X, \mathbf{1}\max(F(X))]$, and feeding it as input to another PointNet $G$, that is $G([X, \mathbf{1}\max(F(X))])$.
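The following NumPy sketch spells out the layer of Equation 8 and how DeepSets, PointNet and PointNetST differ only in where the transmission term is allowed to be non-zero; the activation, widths and random initialization are illustrative assumptions.

```python
import numpy as np

def equivariant_layer(X, A, B, c):
    """Equation 8: L(X) = X A + (1/n) 11^T X B + 1 c^T."""
    n = X.shape[0]
    mean_term = np.ones((n, 1)) @ X.mean(axis=0, keepdims=True)   # (1/n) 11^T X
    return X @ A + mean_term @ B + np.ones((n, 1)) @ c[None, :]

def forward(X, layers, relu=lambda Z: np.maximum(Z, 0)):
    """Equation 7: alternate equivariant affine layers with an entry-wise nonlinearity."""
    *hidden, last = layers
    for (A, B, c) in hidden:
        X = relu(equivariant_layer(X, A, B, c))
    return equivariant_layer(X, *last)

# DeepSets: every layer may have B != 0.  PointNet: B = 0 in all layers.
# PointNetST: B = 0 in all layers except one, e.g. only the middle layer transmits.
rng = np.random.default_rng(4)
dims = [3, 16, 16, 2]
layers = []
for i, (d_in, d_out) in enumerate(zip(dims[:-1], dims[1:])):
    A = rng.normal(size=(d_in, d_out))
    B = rng.normal(size=(d_in, d_out)) if i == 1 else np.zeros((d_in, d_out))  # PointNetST
    layers.append((A, B, rng.normal(size=d_out)))
X = rng.normal(size=(10, 3))
Y = forward(X, layers)                                  # (10, 2) equivariant output
perm = rng.permutation(10)
assert np.allclose(forward(X[perm], layers), Y[perm])   # permutation equivariance
```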
We will prove PointNetST is permutation equivariant universal and therefore arguably the simplest permutation equivariant universal model known to date.
Universality of equivariant deep networks is defined next.
Definition 2. Permutation equivariant universality (or just equivariant universality, in short) of a model $F : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}^{n\times k_{out}}$ means that for every permutation equivariant continuous function $H : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}^{n\times k_{out}}$ defined over the cube $K = [0,1]^{n\times k_{in}} \subset \mathbb{R}^{n\times k_{in}}$, and every $\epsilon > 0$, there exists a choice of $m$ (i.e., network depth), $k_i$ (i.e., network width) and the trainable parameters of $F$ so that $\|H(X) - F(X)\|_\infty < \epsilon$ for all $X \in K$.
Proof. (Theorem 1) Fact (i), namely that PointNet is not equivariant universal is a consequence of the following simple lemma:
Lemma 3. Let $h = (h_1, \ldots, h_n)^T : \mathbb{R}^n \to \mathbb{R}^n$ be the equivariant linear function defined by $h(x) = \mathbf{1}\mathbf{1}^T x$. There is no $f : \mathbb{R} \to \mathbb{R}$ so that $|h_i(x) - f(x_i)| < \tfrac{1}{2}$ for all $i \in [n]$ and $x \in [0,1]^n$.
Proof. Assume such an $f$ exists. Let $e_1 = (1, 0, \ldots, 0)^T \in \mathbb{R}^n$. Then
$$1 = |h_2(e_1) - h_2(0)| \leq |h_2(e_1) - f(0)| + |f(0) - h_2(0)| < \tfrac{1}{2} + \tfrac{1}{2} = 1,$$
reaching a contradiction.
To prove (ii) we first reduce the problem from the class of all continuous equivariant functions to the class of equivariant polynomials. This is justified by the following lemma.
Lemma 4. Equivariant polynomials $P : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}^{n\times k_{out}}$ are dense in the space of continuous equivariant functions $F : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}^{n\times k_{out}}$ over the cube $K$.
Proof. Take an arbitrary $\epsilon > 0$. Consider the function $f_{ij} : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}$, which denotes the $(i,j)$-th output entry of $F$. By the Stone-Weierstrass Theorem there exists a polynomial $p_{ij} : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}$ such that $\|f_{ij}(X) - p_{ij}(X)\|_\infty \leq \epsilon$ for all $X \in K$. Consider the polynomial map $P : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}^{n\times k_{out}}$ defined by $(P)_{ij} = p_{ij}$. $P$ is in general not equivariant. To finish the proof we will symmetrize $P$:
$$\left\| F(X) - \frac{1}{n!}\sum_{\sigma\in S_n} \sigma \cdot P(\sigma^{-1}\cdot X) \right\|_\infty = \left\| \frac{1}{n!}\sum_{\sigma\in S_n} \sigma \cdot F(\sigma^{-1}\cdot X) - \frac{1}{n!}\sum_{\sigma\in S_n} \sigma \cdot P(\sigma^{-1}\cdot X) \right\|_\infty = \left\| \frac{1}{n!}\sum_{\sigma\in S_n} \sigma \cdot \big( F(\sigma^{-1}\cdot X) - P(\sigma^{-1}\cdot X) \big) \right\|_\infty \leq \frac{1}{n!}\sum_{\sigma\in S_n} \epsilon = \epsilon,$$
where in the first equality we used the fact that $F$ is equivariant. This concludes the proof since $\sum_{\sigma\in S_n} \sigma \cdot P(\sigma^{-1}\cdot X)$ is an equivariant polynomial map.
Now, according to Theorem 2 an arbitrary equivariant polynomial $P : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}^{n\times k_{out}}$ can be written as $P = \sum_{|\alpha| \leq n} b_\alpha(X) q_\alpha(X)^T$, where $b_\alpha(X) = \lceil x_1^\alpha \rceil \in \mathbb{R}^n$ and $q_\alpha = (q_{\alpha,1}, \ldots, q_{\alpha,k_{out}}) \in \mathbb{R}^{k_{out}}$ are invariant polynomials. Remember that every $S_n$ invariant polynomial can be expressed as a polynomial in the $t = \binom{n+k_{in}}{k_{in}}$ power-sum multi-symmetric polynomials $s_j(X) = \frac{1}{n}\sum_{i=1}^n x_i^{\alpha_j}$, $j \in [t]$ (we use the $1/n$-normalized version for a bit more simplicity later on). We can therefore write $P$ as a composition of three maps:
$$P = Q \circ L \circ B, \qquad (9)$$
where $B : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}^{n\times t}$ is defined by
$$B(X) = (b(x_1), \ldots, b(x_n))^T, \qquad b(x) = (x^{\alpha_1}, \ldots, x^{\alpha_t});$$
$L$ is defined as in Equation 8 with $B = [0, I]$ and $A = [e_1, \ldots, e_{k_{in}}, 0]$, where $I \in \mathbb{R}^{t\times t}$ is the identity matrix and $e_i \in \mathbb{R}^t$ represents the standard basis (as usual). We assume $\alpha_j = e_j \in \mathbb{R}^{k_{in}}$, for $j \in [k_{in}]$. Note that the output of $L$ is of the form
$$L(B(X)) = (X, \mathbf{1}s_1(X), \mathbf{1}s_2(X), \ldots, \mathbf{1}s_t(X)).$$
Finally, $Q : \mathbb{R}^{n\times(k_{in}+t)} \to \mathbb{R}^{n\times k_{out}}$ is defined by
$$Q(X, \mathbf{1}s_1, \ldots, \mathbf{1}s_t) = (q(x_1, s_1, \ldots, s_t), \ldots, q(x_n, s_1, \ldots, s_t))^T, \qquad q(x, s_1, \ldots, s_t) = \sum_{|\alpha| \leq n} x^{\alpha} q_\alpha(s_1, \ldots, s_t)^T.$$
The decomposition in Equation 9 of P suggests that replacing Q,B with Multi-Layer Perceptrons (MLPs) would lead to a universal permutation equivariant network consisting of PointNet with a single linear transmission layer, namely PointNetST.
The $F$ approximating $P$ will be defined as
$$F = \Psi \circ L \circ \Phi, \qquad (10)$$
where $\Phi : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}^{n\times t}$ and $\Psi : \mathbb{R}^{n\times(t+k_{in})} \to \mathbb{R}^{n\times k_{out}}$ are both of PointNet architecture, namely there exist MLPs $\phi : \mathbb{R}^{k_{in}} \to \mathbb{R}^t$ and $\psi : \mathbb{R}^{t+k_{in}} \to \mathbb{R}^{k_{out}}$ so that $\Phi(X) = (\phi(x_1), \ldots, \phi(x_n))^T$ and $\Psi(X) = (\psi(x_1), \ldots, \psi(x_n))^T$. See Figure 1 for an illustration of $F$.
To build the MLPs $\phi, \psi$ we will first construct $\psi$ to approximate $q$; that is, we use the universality of MLPs (see (Hornik, 1991; Sonoda & Murata, 2017; Hanin & Sellke, 2017)) to construct $\psi$ so that $\|\psi(x, s_1, \ldots, s_t) - q(x, s_1, \ldots, s_t)\|_\infty < \frac{\epsilon}{2}$ for all $(x, s_1, \ldots, s_t) \in [0,1]^{k_{in}+t}$. Furthermore, as $\psi$ over $[0,1]^{k_{in}+t}$ is uniformly continuous, let $\delta$ be such that if $z, z' \in [0,1]^{k_{in}+t}$ and $\|z - z'\|_\infty < \delta$ then $\|\psi(z) - \psi(z')\|_\infty < \frac{\epsilon}{2}$. Now, we use universality again to construct $\phi$ approximating $b$; that is, we take $\phi$ so that $\|\phi(x) - b(x)\|_\infty < \delta$ for all $x \in [0,1]^{k_{in}}$. Then
$$\|F(X) - P(X)\|_\infty \leq \|\Psi(L(\Phi(X))) - \Psi(L(B(X)))\|_\infty + \|\Psi(L(B(X))) - Q(L(B(X)))\|_\infty = \mathrm{err}_1 + \mathrm{err}_2.$$
First, $\|L(\Phi(X)) - L(B(X))\|_\infty < \delta$ for all $X \in K$ and therefore $\mathrm{err}_1 < \frac{\epsilon}{2}$. Second, note that if $X \in K$ then $B(X) \in [0,1]^{n\times t}$ and $L(B(X)) \in [0,1]^{n\times(k_{in}+t)}$. Therefore by construction of $\psi$ we have $\mathrm{err}_2 < \frac{\epsilon}{2}$.
To prove (iii) we use the result in Hanin & Sellke (2017) (see Theorem 1 therein) bounding the width of an MLP approximating a function $f : [0,1]^{d_{in}} \to \mathbb{R}^{d_{out}}$ by $d_{in} + d_{out}$. Therefore, the width of the MLP $\phi$ is bounded by $k_{in} + t$, while the width of the MLP $\psi$ is bounded by $t + k_{in} + k_{out}$, proving the bound.
We can now prove Corollary 1.
Proof. (Corollary 1)
The fact that the DeepSets model is equivariant universal is immediate. Indeed, the PointNetST model can be obtained from the DeepSets model by setting $B = 0$ in all but one layer, with $B$ as in Equation 8.
For the PointNetSeg model, note that by Theorem 1 in Qi et al. (2017) every invariant function $f : \mathbb{R}^{n\times k_{in}} \to \mathbb{R}^t$ can be approximated by a network of the form $\Psi(\max(F(X)))$, where $F(X)$ is a PointNet model and $\Psi$ is an MLP. In particular, for every $\varepsilon > 0$ there exist such $F, \Psi$ for which $\|\Psi(\max(F(X))) - (s_1(X), \ldots, s_t(X))\|_\infty < \varepsilon$ for every $X \in [0,1]^{n\times k_{in}}$, where $s_1(X), \ldots, s_t(X)$ are the power-sum multi-symmetric polynomials. It follows that we can use PointNetSeg to approximate $\mathbf{1}(s_1(X), \ldots, s_t(X))$. The rest of the proof closely resembles the proof of Theorem 1.
Graph neural networks with constructed adjacency. One approach sometimes applied to learning from sets of vectors is to define an adjacency matrix (e.g., by thresholding distances of node feature vectors) and apply a graph neural network to the resulting graph (e.g., Wang et al. (2019), Li et al. (2019)). Using the common message passing paradigm (Gilmer et al., 2017), a layer in this case boils down to the form $L(X)_i = \psi\big(x_i, \sum_{j\in\mathcal{N}_i} \phi(x_i, x_j)\big) = \psi\big(x_i, \sum_{j\in[n]} N(x_i, x_j)\,\phi(x_i, x_j)\big)$, where $\phi, \psi$ are MLPs, $\mathcal{N}_i$ is the index set of neighbors of node $i$, and $N(x_i, x_j)$ is the indicator function for the edge $(i,j)$. If $N$ can be approximated by a continuous function, which is the case at least in the $L^2$ sense for a finite set of vectors, then since $L$ is also equivariant it follows from Theorem 1 that such a network can be approximated (again, at least in $L^2$ norm) arbitrarily well by any universal equivariant network such as PointNetST or DeepSets.
We tested the ability of a DeepSets model with varying depth and width to approximate a single graph convolution layer. We found that a DeepSets model with a small number of layers can approximate a graph convolution layer reasonably well. For details see Appendix A.
5 EXPERIMENTS
We conducted experiments in order to validate our theoretical observations. We compared the results of several equivariant models, as well as a baseline (full) MLP, on three equivariant learning tasks: a classification task (knapsack) and two regression tasks (squared norm and Fiedler eigenvector). For all tasks we compare results of 7 different models: MLP, DeepSets, PointNet, PointNetSeg, PointNetST, PointNetQT and GraphNet. PointNetQT is PointNet with a single quadratic equivariant transmission layer as defined in Appendix B. GraphNet is similar to the graph convolution network in Kipf & Welling (2016) and is defined explicitly in Appendix B. We generated the adjacency matrices for GraphNet by taking the 10 nearest neighbors of each set element. In all experiments we used a network of the form Equation 7 with depth m = 6 and varying width, fixed across all layers (the code can be found at https://github.com/NimrodSegol/On-Universal-Equivariant-Set-Networks).
Equivariant classification. For classification, we chose to learn the multidimensional knapsack problem, which is known to be NP-hard. We are given a set of 4-vectors, represented by $X \in \mathbb{R}^{n\times 4}$, and our goal is to learn the equivariant classification function $f : \mathbb{R}^{n\times 4} \to \{0,1\}^n$ defined by the following optimization problem:
$$f(X) = \operatorname*{argmax}_{z} \sum_{i=1}^n x_{i1} z_i \quad \text{s.t.} \quad \sum_{i=1}^n x_{ij} z_i \leq w_j,\; j = 2, 3, 4; \qquad z_i \in \{0, 1\},\; i \in [n].$$
Intuitively, given a set of vectors $X \in \mathbb{R}^{n\times 4}$, $(X)_{ij} = x_{ij}$, where each row represents an element in a set, our goal is to find a subset maximizing the value while satisfying budget constraints. The first column of $X$ defines the value of each element, and the three other columns the costs.
To evaluate the success of a trained model we record the percentage of sets for which the predicted subset satisfies all the budget constraints and attains a value within 10% of the optimal value. In Appendix C we detail how we generated this dataset.
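A minimal sketch of this success criterion for a single set follows; the function name and the non-strict treatment of the inequalities are our assumptions.

```python
import numpy as np

def knapsack_success(X, z_pred, opt_value, w=(100, 80, 50), tol=0.10):
    """True if the predicted subset satisfies all budgets and reaches >= (1 - tol) of the optimum."""
    z_pred = z_pred.astype(bool)
    within_budget = all(X[z_pred, 1 + j].sum() <= w[j] for j in range(3))
    value = X[z_pred, 0].sum()
    return within_budget and value >= (1 - tol) * opt_value
```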
Equivariant regression. The first equivariant function we considered for regression is $f(X) = \mathbf{1}\sum_{i=1}^n \sum_{j=1}^k (X_{i,j} - \frac{1}{2})^2$. Hanin & Sellke (2017) showed this function cannot be approximated by MLPs of small width. We drew 10k training examples and 1k test examples i.i.d. from a $\mathcal{N}(\frac{1}{2}, 1)$ distribution (per entry of $X$).
[Figure 2 — panels: Knapsack test; Fiedler test; $\sum_{x\in X}(x-\frac{1}{2})^2$ test.]
The second equivariant function we considered is defined on point clouds $X \in \mathbb{R}^{n\times 3}$. For each point cloud we computed a graph by connecting every point to its 10 nearest neighbors. We then computed the absolute value of the first non-trivial eigenvector of the graph Laplacian. We used the ModelNet dataset (Wu et al., 2015), which contains ∼9k training meshes and ∼2k test meshes. The point clouds are generated by randomly sampling 512 points from each mesh.
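A NumPy sketch of this regression target follows; we assume the unnormalized (combinatorial) Laplacian of the symmetrized 10-nearest-neighbor graph, which is one natural reading of the construction.

```python
import numpy as np

def fiedler_target(P, k=10):
    """|first non-trivial eigenvector| of the Laplacian of the k-NN graph of a point cloud P (n, 3)."""
    n = P.shape[0]
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    A = np.zeros((n, n))
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]
    A[np.arange(n)[:, None], nbrs] = 1.0
    A = np.maximum(A, A.T)                      # undirected graph
    L = np.diag(A.sum(axis=1)) - A              # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    return np.abs(eigvecs[:, 1])                # eigenvector of the second-smallest eigenvalue

rng = np.random.default_rng(5)
P = rng.normal(size=(512, 3))                   # stands in for 512 points sampled from a mesh
y = fiedler_target(P)                           # (512,) per-point regression target
```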
Result summary. Figure 2 summarizes train and test accuracy of the models after training (training details in Appendix C) as a function of the network width $\omega$. We have tested 15 $\omega$ values equidistant in $[5, \frac{nk_{in}}{2}]$.
As can be seen in the graphs, in all three datasets the equivariant universal models (PointNetST, PointNetQT , DeepSets, PointNetSeg) achieved comparable accuracy. PointNet, which is not equivariant universal, consistently achieved inferior performance compared to the universal models, as expected by the theory developed in this paper. The non-equivariant MLP, although universal, used the same width (i.e., same number of parameters) as the equivariant models and was able to over-fit only on one train set (the quadratic function); its performance on the test sets was inferior by a large margin to the equivariant models. We also note that in general the GraphNet model achieved comparable results to the equivariant universal models but was still outperformed by the DeepSets model.
An interesting point is that although the width used in the experiments is much smaller than the bound $k_{out} + k_{in} + \binom{n+k_{in}}{k_{in}}$ established by Theorem 1, the universal models are still able to learn the functions we tested on well. This raises the question of the tightness of this bound, which we leave to future work.
6 CONCLUSIONS
In this paper we analyze several set equivariant neural networks and compare their approximation power. We show that while vanilla PointNet (Qi et al., 2017) is not equivariant universal, adding a single linear transmission layer makes it equivariant universal. Our proof strategy is based on a characterization of polynomial equivariant functions. As a corollary we show that the DeepSets model
(Zaheer et al., 2017) and PointNetSeg (Qi et al., 2017) are equivariant universal. Experimentally, we tested the different models on several classification and regression tasks, finding that adding a single linear transmission layer to PointNet has a significant positive impact on performance.
7 ACKNOWLEDGEMENTS
This research was supported in part by the European Research Council (ERC Consolidator Grant, LiftMatch 771136) and the Israel Science Foundation (Grant No. 1830/17).
A APPROXIMATING GRAPH CONVOLUTION LAYER WITH DEEPSETS
To test the ability of an equivariant universal model to approximate a graph convolution layer, we conducted an experiment where we applied a single graph convolution layer (see Appendix B for a full description of the graph convolution layers used in this paper) with 3 input features and 10 output features. We constructed a knn graph by taking 10 neighbors. We sampled 1000 examples in $\mathbb{R}^{100\times 3}$ i.i.d. from a $\mathcal{N}(\tfrac{1}{2}, 1)$ distribution (per entry of $X$) and regressed to the output of the graph convolution layer using the smooth L1 loss. The results are summarized in Figure 3.
B DESCRIPTION OF LAYERS
B.1 QUADRATIC LAYER
One potential application of Theorem 2 is augmenting an equivariant neural network (Equation 7) with equivariant polynomial layers $P : \mathbb{R}^{n\times k} \to \mathbb{R}^{n\times l}$ of some maximal degree $d$. This can be done in the following way: look for all solutions $\alpha, \beta_1, \beta_2, \ldots \in \mathbb{N}^k$ of $|\alpha| + \sum_i |\beta_i| \leq d$. Any solution to this equation gives a basis element of the form $p(X) = \lceil x_1^{\alpha} \rceil \prod_j \left( \sum_{i=1}^n x_i^{\beta_j} \right)$.
In the paper we tested PointNetQT, an architecture that adds a single quadratic equivariant layer to PointNet. We opted to use only the quadratic transmission operators: for a matrix $X \in \mathbb{R}^{n\times k}$ we define $L(X) \in \mathbb{R}^{n\times k}$ as follows:
$$L(X) = XW_1 + \mathbf{1}\mathbf{1}^T X W_2 + (\mathbf{1}\mathbf{1}^T X) \odot (\mathbf{1}\mathbf{1}^T X)\, W_3 + (X \odot X)\, W_4 + (\mathbf{1}\mathbf{1}^T X) \odot X\, W_5,$$
where $\odot$ is point-wise multiplication and $W_i \in \mathbb{R}^{k\times k}$, $i \in [5]$, are the learnable parameters.
B.2 GRAPH CONVOLUTION LAYER
We implement the graph convolution layers as follows:
$$L(X) = BXW_2 + XW_1 + \mathbf{1}c^T,$$
with $W_1, W_2, c$ learnable parameters. The matrix $B$ is defined as in Kipf & Welling (2016) from the knn graph of the set: $B = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$, where $D$ is the degree matrix of the graph and $A$ is the adjacency matrix of the graph with added self-connections.
C IMPLEMENTATION DETAILS
Knapsack data generation. We constructed a dataset of 10k training examples and 1k test examples consisting of $50\times 4$ matrices. We took $w_1 = 100$, $w_2 = 80$, $w_3 = 50$. To generate $X \in \mathbb{R}^{50\times 4}$, we draw an integer uniformly at random between 1 and 100 and randomly choose 50 integers between 1 and that integer as the first column of $X$. We also randomly chose an integer between 1 and 25 and then randomly chose 150 integers in that range as the three last columns of $X$. The labels for each input $X$ were computed by a standard dynamic programming approach, see Martello & Toth (1990).
Optimization. We implemented the experiments in PyTorch (Paszke et al., 2017) with the Adam (Kingma & Ba, 2014) optimizer for learning. For the classification we used the cross-entropy loss and trained for 150 epochs with learning rate 0.001, learning rate decay of 0.5 every 100 epochs and batch size 32. For the quadratic function regression we trained for 150 epochs with learning rate of 0.001, learning rate decay 0.1 every 50 epochs and batch size 64; for the regression to the leading eigenvector we trained for 50 epochs with learning rate of 0.001 and batch size 32. To regress to the output of a single graph convolution layer we trained for 200 epochs with learning rate of 0.001 and batch size 32.
1. What are the main contributions and findings of the paper regarding deep learning models and their capabilities?
2. How does the reviewer assess the presentation and clarity of the paper's content?
3. Are there any suggestions for improving the accessibility and presentation of the paper's results?
4. What are the issues raised by the reviewer regarding the discussion on non-universality of vanilla PointNet?
5. Can you provide further explanations or references regarding the single-channel version of Theorem 2?
Review
The paper presents proof that the DeepSets and a variant of PointNet are universal approximators for permutation equivariant functions. The proof uses an expression for equivariant polynomials and the universality of MLP. It then shows that the proposed expression in terms of power-sum polynomials can be constructed in PointNet using a minimal modification to the architecture, or using DeepSets, therefore proving the universality of such deep models.
The results of this paper are important. In terms of presentation, the notation and statement of theorems are precise, however, the presentation is rather dry, and I think the paper can be significantly more accessible. For example, here is an alternative and clearer route presenting the same result: one may study the simple case of having single input channel, for which the output at index "i" of an equivariant polynomial is written as the sum of all powers of input multiplied by a polynomial function of the corresponding power-sum. This second part is indeed what is used in the proof of the universality of the permutation invariant version of DeepSets, making the connection more visible. Generalizing this to the multi-channel input as the next step could make the proof more accessible.
The second issue I would like to raise is related to discussions around the non-universality of the vanilla PointNet model. Given the fact that it applies the same MLP independently to individual set members, it is obvious that it is not universal equivariant (for example, consider a function that performs a fixed permutation to its input), and I fail to see why the paper goes to the trouble of having theorems and experiments just to demonstrate this point. If there were any other objectives beyond this in the experiments, could you please clarify?
Finally, could you give a more accurate citation (chapter-page number) for the single-channel version of Theorem 2?
ICLR | Title
On Universal Equivariant Set Networks
Abstract
Using deep neural networks that are either invariant or equivariant to permutations in order to learn functions on unordered sets has become prevalent. The most popular, basic models are DeepSets (Zaheer et al., 2017) and PointNet (Qi et al., 2017). While known to be universal for approximating invariant functions, DeepSets and PointNet are not known to be universal when approximating equivariant set functions. On the other hand, several recent equivariant set architectures have been proven equivariant universal (Sannai et al., 2019; Keriven & Peyré, 2019), however these models either use layers that are not permutation equivariant (in the standard sense) and/or use higher order tensor variables which are less practical. There is, therefore, a gap in understanding the universality of popular equivariant set models versus theoretical ones. In this paper we close this gap by proving that: (i) PointNet is not equivariant universal; and (ii) adding a single linear transmission layer makes PointNet universal. We call this architecture PointNetST and argue it is the simplest permutation equivariant universal model known to date. Another consequence is that DeepSets is universal, and also PointNetSeg, a popular point cloud segmentation network (used e.g., in (Qi et al., 2017)) is universal. The key theoretical tool used to prove the above results is an explicit characterization of all permutation equivariant polynomial layers. Lastly, we provide numerical experiments validating the theoretical results and comparing different permutation equivariant models.
1 INTRODUCTION
Many interesting tasks in machine learning can be described by functions F that take as input a set, X = (x1, . . . ,xn), and output some per-element features or values, F (X) = (F (X)1, . . . ,F (X)n). Permutation equivariance is the property required of F so it is welldefined. Namely, it assures that reshuffling the elements in X and applying F results in the same output, reshuffled in the same manner. For example, if X̃ = (x2,x1,x3, . . . ,xn) then F (X̃) = (F (X)2,F (X)1,F (X)3, . . . ,F (X)n).
Building neural networks that are permutation equivariant by construction proved extremely useful in practice. Arguably the most popular models are DeepSets Zaheer et al. (2017) and PointNet Qi et al. (2017). These models enjoy small number of parameters, low memory footprint and computational efficiency along with high empirical expressiveness. Although both DeepSets and PointNet are known to be invariant universal (i.e., can approximate arbitrary invariant continuous functions) they are not known to be equivariant universal (i.e., can approximate arbitrary equivariant continuous functions).
On the other hand, several researchers have suggested theoretical permutation equivariant models and proved they are equivariant universal. Sannai et al. (2019) builds a universal equivariant network by taking n copies of (n− 1)-invariant networks and combines them with a layer that is not permutation invariant in the standard (above mentioned) sense. Keriven & Peyré (2019) solves a more general problem of building networks that are equivariant universal over arbitrary high order input tensors Rnd (including graphs); their construction, however, uses higher order tensors as hidden variables
which is of less practical value. Yarotsky (2018) proves that neural networks constructed using a finite set of invariant and equivariant polynomial layers are also equivariant universal, however his network is not explicit (i.e., the polynomials are not characterized for the equivariant case) and also of less practical interest due to the high degree polynomial layers.
In this paper we close the gap between the practical and theoretical permutation equivariant constructions and prove: Theorem 1.
(i) PointNet is not equivariant universal.
(ii) Adding a single linear transmission layer (i.e.,X 7→ 11TX) to PointNet makes it equivariant universal.
(iii) Using ReLU activation the minimal width required for universal permutation equivariant network satisfies ω ≤ kout + kin + ( n+kin kin ) .
This theorem suggests that, arguably, PointNet with an addition of a single linear layer is the simplest universal equivariant network, able to learn arbitrary continuous equivariant functions of sets. An immediate corollary of this theorem is Corollary 1. DeepSets and PointNetSeg are universal.
PointNetSeg is a network used often for point cloud segmentation (e.g., in Qi et al. (2017)). One of the benefit of our result is that it provides a simple characterization of universal equivariant architectures that can be used in the network design process to guarantee universality.
The theoretical tool used for the proof of Theorem 1 is an explicit characterization of the permutation equivariant polynomials over sets of vectors in Rk using power-sum multi-symmetric polynomials. We prove: Theorem 2. Let P : Rn×k → Rn×l be a permutation equivariant polynomial map. Then,
P (X) = ∑ |α|≤n bαq T α , (1)
where bα = (xα1 , . . . ,x α n) T , qα = (qα,1, . . . , qα,l)T , where qα,j = qα,j(s1, . . . , st), t = ( n+k k ) , are
polynomials; sj(X) = ∑n i=1 x αj i are the power-sum multi-symmetric polynomials. On the other hand every polynomial map P satisfying Equation 1 is equivariant.
This theorem, which extends Proposition 2.27 in Golubitsky & Stewart (2002) to sets of vectors using multivariate polynomials, lends itself to expressing arbitrary equivariant polynomials as a composition of entry-wise continuous functions and a single linear transmission, which in turn facilitates the proof of Theorem 1.
We conclude the paper by numerical experiments validating the theoretical results and testing several permutation equivariant networks for the tasks of set classification and regression.
2 PRELIMINARIES
Equivariant maps. Vectors x ∈ Rk are by default column vectors; 0,1 are the all zero and all one vectors/tensors; ei is the i-th standard basis vector; I is the identity matrix; all dimensions are inferred from context or mentioned explicitly. We represent a set of n vectors in Rk as a matrixX ∈ Rn×k and denoteX = (x1,x2, . . . ,xn)T , where xi ∈ Rk, i ∈ [n], are the columns ofX . We denote by Sn the permutation group of [n]; its action onX is defined by σ ·X = (xσ−1(1),xσ−1(2), . . . ,xσ−1(n))T , σ ∈ Sn. That is, σ is reshuffling the rows of X . The natural class of maps assigning a value or feature vector to every element in an input set is permutation equivariant maps: Definition 1. A map F : Rn×k → Rn×l satisfying F (σ ·X) = σ · F (X) for all σ ∈ Sn and X ∈ Rn×d is called permutation equivariant.
Power-sum multi-symmetric polynomials. Given a vector z = (z1, . . . , zn) ∈ Rn the powersum symmetric polynomials sj(z) = ∑n i=1 z j i , with j ∈ [n], uniquely characterize z up to permuting
its entries. In other words, for z,y ∈ Rn we have y = σ · z for some σ ∈ Sn if and only if sj(y) = sj(z) for all j ∈ [n]. An equivalent property is that every Sn invariant polynomial p can be expressed as a polynomial in the power-sum symmetric polynomials, i.e., p(z) = q(s1(z), . . . , sn(z)), see Rydh (2007) Corollary 8.4, Briand (2004) Theorem 3. This fact was previously used in Zaheer et al. (2017) to prove that DeepSets is universal for invariant functions. We extend this result to equivariant functions and the multi-feature (sets of vectors) case.
For a vector x ∈ Rk and a multi-index vector α = (α1, . . . , αk) ∈ Nk we define xα = xα11 · · ·x αk k , and |α| = ∑ i∈[k] αi. A generalization of the power-sum symmetric polynomials to matrices exists and is called power-sum multi-symmetric polynomials, defined with a bit of notation abuse: sα(X) = ∑n i=1 x α i , where α ∈ Nk is a multi-index satisfying |α| ≤ n. Note that the number of
power-sum multi-symmetric polynomials acting onX ∈ Rn×k is t = ( n+k k ) . For notation simplicity let α1, . . . , αt be a list of all α ∈ Nk with |α| ≤ n. Then we index the collection of power-sum multi-symmetric polynomials as s1, . . . , st.
Similarly to the vector case the numbers sj(X), j ∈ [t] characterize X up to permutation of its rows. That is Y = σ · X for some σ ∈ Sn iff sj(Y ) = sj(Y ) for all j ∈ [t]. Furthermore, every Sn invariant polynomial p : Rn×k → R can be expressed as a polynomial in the power-sum multi-symmetric polynomials (see (Rydh, 2007) corollary 8.4), i.e.,
p(X) = q(s1(X), . . . , st(X)), (2)
These polynomials were recently used to encode multi-sets in Maron et al. (2019).
3 EQUIVARIANT MULTI-SYMMETRIC POLYNOMIAL LAYERS
In this section we develop the main theoretical tool of this paper, namely, a characterization of all permutation equivariant polynomial layers. As far as we know, these layers were not fully characterized before.
Theorem 2 provides an explicit representation of arbitrary permutation equivariant polynomial maps P : Rn×k → Rn×l using the basis of power-sum multi-symmetric polynomials, si(X). The particular use of power-sum polynomials si(X) has the advantage it can be encoded efficiently using a neural network: as we will show si(X) can be approximated using a PointNet with a single linear transmission layer. This allows approximating an arbitrary equivariant polynomial map using PointNet with a single linear transmission layer.
A version of this theorem for vectors instead of matrices (i.e., the case of k = 1) appears as Proposition 2.27 in Golubitsky & Stewart (2002); we extend their proof to matrices, which is the relevant scenario for ML applications as it allows working with sets of vectors. For k = 1 Theorem 2 reduces to the following form: p(x)i = ∑ a≤n pa(s1(x), . . . , sn(x))x a i with sj(x) = ∑ i x j i . For matrices the monomial xki is replaced by x α i for a multi-index α and the power-sum symmetric polynomials are replaced by the power-sum multi-symmetric polynomials.
First, note that it is enough to prove Theorem 1 for l = 1 and apply it to every column of P . Hence, we deal with a vector of polynomials p : Rn×k → Rn and need to prove it can be expressed as p = ∑ |α|≤n bαqα, for Sn invariant polynomial qα.
Given a polynomial p(X) and the cyclic permutation σ−1 = (123 · · ·n) the following operation, taking a polynomial to a vector of polynomials, is useful in characterizing equivariant polynomial maps:
dpe(X) = p(X) p(σ ·X) p(σ2 ·X)
... p(σn−1 ·X)
(3)
Theorem 2 will be proved using the following two lemmas:
Lemma 1. Let p : Rn×k → Rn be an equivariant polynomial map. Then, there exists a polynomial p : Rn×k → R, invariant to Sn−1 (permuting the last n− 1 rows ofX) so that p = dpe.
Proof. Equivariance of p means that for all σ ∈ Sn it holds that σ · p(X) = p(σ ·X)
σ · p(X) = p(σ ·X). (4)
Choosing an arbitrary permutation σ ∈ stab(1) < Sn, namely a permutation satisfying σ(1) = 1, and observing the first row in Equation 4 we get p1(X) = p1(σ ·X) = p1(x1,xσ−1(2), . . . ,xσ−1(n)). Since this is true for all σ ∈ stab(1) p1 is Sn−1 invariant. Next, applying σ = (1i) to Equation 4 and observing the first row again we get pi(X) = p1(xi, . . . ,x1, . . .). Using the invariance of p1 to Sn−1 we get p = dp1e.
Lemma 2. Let p : Rn×k → R be a polynomial invariant to Sn−1 (permuting the last n− 1 rows of X) then
p(X) = ∑ |α|≤n xα1 qα(X), (5)
where qα are Sn invariant.
Proof. Expanding p with respect to x1 we get p(X) = ∑ |α|≤m xα1 pα(x2, . . . ,xn), (6)
for some m ∈ N. We first claim pα are Sn−1 invariant. Indeed, note that if p(X) = p(x1,x2, . . . ,xn) is Sn−1 invariant, i.e., invariant to permutations of x2, . . . ,xn, then also its derivatives ∂ |β|
∂xβ1 p(X) are Sn−1 permutation invariant, for all β ∈ Nk. Taking the derivative ∂|β|/∂xβ1 on
both sides of Equation 6 we get that pβ is Sn−1 equivariant.
For brevity denote p = pα. Since p is Sn−1 invariant it can be expressed as a polynomial in the powersum multi-symmetric polynomials, i.e., p(x2, . . . ,xn) = r(s1(x2, . . . ,xn), . . . , st(x2, . . . ,xn)). Note that si(x2, . . . ,xn) = si(X)− xαi1 and therefore
p(x2, . . . ,xn) = r(s1(X)− xα11 , . . . , st(X)− x αt 1 ).
Since r is a polynomial, expanding its monomials in si(X) and xα1 shows p can be expressed as p = ∑ |α|≤m′ x α 1 p̃α, where m
′ ∈ N, and p̃α are Sn invariant (as multiplication of invariant Sn polynomials si(X)). Plugging this in Equation 6 we get Equation 5, possibly with the sum over some n’ > n. It remains to show n′ can be taken to be at-most n. This is proved in Corollary 5 in Briand (2004)
Proof. (Theorem 2) Given an equivariant p as above, use Lemma 1 to write p = dpe where p(X) is invariant to permuting the last n− 1 rows ofX . Use Lemma 2 to write p(X) = ∑ |α|≤n x α 1 qα(X), where qα are Sn invariant. We get,
p = dpe = ∑ |α|≤n bαqα.
The converse direction is immediate after noting that bα are equivariant and qα are invariant.
4 UNIVERSALITY OF SET EQUIVARIANT NEURAL NETWORKS
We consider equivariant deep neural networks f : Rn×kin → Rn×kout ,
F (X) = Lm ◦ ν ◦ · · · ◦ ν ◦L1(X), (7)
where Li : Rn×ki → Rn×ki+1 are affine equivariant transformations, and ν is an entry-wise nonlinearity (e.g., ReLU). We define the width of the network to be ω = maxi ki; note that this definition is different from the one used for standard MLP where the width would be nω, see e.g., Hanin & Sellke (2017). Zaheer et al. (2017) proved that affine equivariant Li are of the form
Li(X) =XA+ 1
n 11TXB + 1cT , (8)
where A,B ∈ Rki×ki+1 , and c ∈ Rki+1 are the layer’s trainable parameters; we call the linear transformationX 7→ 1n11 TXB a linear transmission layer.
We now define the equivariant models considered in this paper: The DeepSets (Zaheer et al., 2017) architecture is Equation 7 with the choice of layers as in Equation 8. Taking B = 0 in all layers is the PointNet architecture (Qi et al., 2017). PointNetST is an equivariant model of the form Equation 7 with layers as in Equation 8 where only a single layer Li has a non-zero B. The PointNetSeg (Qi et al., 2017) architecture is PointNet composed with an invariant max layer, namely max(F (X))j = maxi∈[n] F (X)i,j and then concatenating it with the inputX , i.e., [X,1 max(F (X))], and feeding is as input to another PointNetG, that isG([X,1 max(F (X))]).
We will prove PointNetST is permutation equivariant universal and therefore arguably the simplest permutation equivariant universal model known to date.
Universality of equivariant deep networks is defined next.
Definition 2. Permutation equivariant universality1 of a model F : Rn×kin → Rn×kout means that for every permutation equivariant continuous function H : Rn×kin → Rn×kout defined over the cube K = [0, 1]n×kin ⊂ Rn×kin , and > 0 there exists a choice of m (i.e., network depth), ki (i.e., network width) and the trainable parameters of F so that ‖H(X)− F (X)‖∞ < for allX ∈ K.
Proof. (Theorem 1) Fact (i), namely that PointNet is not equivariant universal is a consequence of the following simple lemma:
Lemma 3. Let h = (h1, . . . , hn)T : Rn → Rn be the equivariant linear function defined by h(x) = 11Tx. There is no f : R→ R so that |hi(x)− f(xi)| < 12 for all i ∈ [n] and x ∈ [0, 1] n.
Proof. Assume such f exists. Let e1 = (1, 0, . . . , 0)T ∈ Rn. Then,
1 = |h2(e1)− h2(0)| ≤ |h2(e1)− f(0)|+ |f(0)− h2(0)| < 1
reaching a contradiction.
To prove (ii) we first reduce the problem from the class of all continuous equivariant functions to the class of equivariant polynomials. This is justified by the following lemma.
Lemma 4. Equivariant polynomials P : Rn×kin → Rn×kout are dense in the space of continuous equivariant functions F : Rn×kin → Rn×kout over the cube K.
Proof. Take an arbitrary > 0. Consider the function fij : Rn×kin → R, which denotes the (i, j)-th output entry of F . By the Stone-Weierstrass Theorem there exists a polynomial pij : Rn×kin → R such that ‖fij(X)− pij(X)‖∞ ≤ for allX ∈ K. Consider the polynomial map P : R
n×kin → Rn×kout defined by (P )ij = pij . P is in general not equivariant. To finish the proof we will symmetrize P :∥∥∥∥∥F (X)− 1n! ∑
σ∈Sn
σ · P (σ−1 ·X) ∥∥∥∥∥ ∞ = ∥∥∥∥∥ 1n! ∑ σ∈Sn σ · F (σ−1 ·X)− 1 n! ∑ σ∈Sn σ · P (σ−1 ·X) ∥∥∥∥∥ ∞
= ∥∥∥∥∥ 1n! ∑ σ∈Sn σ · ( F (σ−1 ·X)− P (σ−1 ·X) )∥∥∥∥∥ ∞ ≤ 1 n! ∑ σ∈Sn = ,
where in the first equality we used the fact that F is equivariant. This concludes the proof since∑ σ∈Sn σ · P (σ −1 ·X) is an equivariant polynomial map.
Now, according to Theorem 2 an arbitrary equivariant polynomial P : Rn×kin → Rn×kout can be written as P = ∑ |α|≤n bα(X)qα(X) T , where bα(X) = dxα1 e ∈ Rn and qα =
1Or just equivariant universal in short.
(qα,1, . . . , qα,kout) ∈ Rkout are invariant polynomials. Remember that every Sn invariant polynomial can be expressed as a polynomial in the t = ( n+kin kin ) power-sum multi-symmetric polynomials sj(X) = 1 n ∑n i=1 x αj i , j ∈ [t] (we use the 1/n normalized version for a bit more simplicity later on). We can therefore write P as composition of three maps:
P = Q ◦L ◦B, (9)
whereB : Rn×kin → Rn×t is defined by
B(X) = (b(x1), . . . , b(xn)) T ,
b(x) = (xα1 , . . . ,xαt); L is defined as in Equation 8 with B = [0, I] and A = [e1, . . . , ekin ,0], where I ∈ Rt×t the identity matrix and ei ∈ Rt represents the standard basis (as-usual). We assume αj = ej ∈ Rkin , for j ∈ [kin]. Note that the output of L is of the form
L(B(X)) = (X,1s1(X),1s2(X), . . . ,1st(X)).
Finally,Q : Rn×(kin+t) → Rn×kout is defined by
Q(X,1s1, . . . ,1st) = (q(x1, s1, . . . , st), . . . , q(xn, s1, . . . , st)) T , and q(x, s1, . . . , st) = ∑ |α|≤n x αqα(s1, . . . , st) T .
The decomposition in Equation 9 of P suggests that replacing Q,B with Multi-Layer Perceptrons (MLPs) would lead to a universal permutation equivariant network consisting of PointNet with a single linear transmission layer, namely PointNetST.
The F approximating P will be defined as
F = Ψ ◦L ◦Φ, (10)
where Φ : Rn×kin → Rn×t and Ψ : Rn×(t+kin) → Rn×kout are both of PointNet architecture, namely there exist MLPs φ : Rkin → Rt and ψ : Rt+kin →
Rkout so that Φ(X) = (φ(x1), . . . ,φ(xn))T and Ψ(X) = (ψ(x1), . . . ,ψ(xn))T . See Figure 1 for an illustration of F .
To build the MLPs φ ,ψ we will first construct ψ to approximate q, that is, we use the universality of MLPS (see (Hornik, 1991; Sonoda & Murata, 2017; Hanin & Sellke, 2017)) to construct ψ so that ‖ψ(x, s1, . . . , st)− q(x, s1, . . . , st)‖∞ < 2 for all (x, s1, . . . , st) ∈ [0, 1]
kin+t. Furthermore, as ψ over [0, 1]kin+t is uniformly continuous, let δ be such that if z, z′ ∈ [0, 1]kin+t, ‖z − z′‖∞ < δ then ‖ψ(z)−ψ(z′)‖∞ < 2 . Now, we use universality again to construct φ approximating b, that is we take φ so that ‖φ(x)− b(x)‖∞ < δ for all x ∈ [0, 1]kin .
‖F (X)− P (X)‖∞ ≤ ‖Ψ(L(Φ(X)))−Ψ(L(B(X)))‖∞ + ‖Ψ(L(B(X)))−Q(L(B(X)))‖∞ = err1 + err2
First, ‖L(Φ(X))−L(B(X))‖∞ < δ for allX ∈ K and therefore err1 < 2 . Second, note that if X ∈ K then B(X) ∈ [0, 1]n×t and L(B(X)) ∈ [0, 1]n×(kin+t). Therefore by construction of ψ we have err2 < 2 .
To prove (iii) we use the result in Hanin & Sellke (2017) (see Theorem 1) bounding the width of an MLP approximating a function f : [0, 1]din → Rdout by din + dout. Therefore, the width of the MLP φ is bounded by kin + t, where the width of the MLP ψ is bounded by t+ kin + kout, proving the bound.
We can now prove Cororllary 1.
Proof. (Corollary 1)
The fact that the DeepSets model is equivariant universal is immediate. Indeed, The PointNetST model can be obtained from the DeepSets model by settingB = 0 in all but one layer, withB as in Equation 8.
For the PointNetSeg model note that by Theorem 1 in Qi et al. (2017) every invariant function f : Rn×kin → Rt can be approximated by a network of the form Ψ(max(F (X))), where (F (X) is a PointNet model and Ψ is an MLP. In particular, for every ε > 0 there exists such F ,Φ for which ‖Ψ(max(F (X))) − (s1(X), . . . , st(X))‖∞ < ε for every X ∈ [0, 1]n×kin where s1(X), . . . , st(X) are the power-sum multi-symmetric polynomials. It follows that we can use PointNetSeg to approximate 1(s1(X), . . . , st(X)). The rest of the proof closely resembles the proof of Theorem 1.
Graph neural networks with constructed adjacency. One approach sometimes applied to learning from sets of vectors is to define an adjacency matrix (e.g., by thresholding distances of node feature vectors) and apply a graph neural network to the resulting graph (e.g., Wang et al. (2019), Li et al. (2019)). Using the common message passing paradigm (Gilmer et al., 2017) in this case boils to layers of the form: L(X)i = ψ(xi, ∑ j∈Ni φ(xi,xj)) = ψ(xi, ∑ j∈[n]N(xi,xj)φ(xi,xj)), where φ, ψ are MLPs, Ni is the index set of neighbors of node i, and N(xi,xj) is the indicator function for the edge (i, j). If N can be approximated by a continuous function, which is the case at least in the L2 sense for a finite set of vectors, then since L is also equivariant it follows from Theorem 1 that such a network can be approximated (again, at least in L2 norm) arbitrarily well by any universal equivariant network such as PointNetST or DeepSets.
We tested the ability of a DeepSets model with varying depth and width to approximate a single graph convolution layer. We found that a DeepSets model with a small number of layers can approximate a graph convolution layer reasonably well. For details see Appendix A.
5 EXPERIMENTS
We conducted experiments in order to validate our theoretical observations. We compared the results of several equivariant models, as well as baseline (full) MLP, on three equivariant learning tasks: a classification task (knapsack) and two regression tasks (squared norm and Fiedler eigen vector). For all tasks we compare results of 7 different models: DeepSets, PointNet, PointNetSeg, PointNetST, PointNetQT and GraphNet. PointNetQT is PointNet with a single quadratic equivariant transmission layer as defined in Appendix B. GraphNet is similar to the graph convolution network in Kipf & Welling (2016) and is defined explicitly in Appendix B. We generated the adjacency matrices for GraphNet by taking 10 nearest neighbors of each set element. In all experiments we used a network of the form Equation 7 with m = 6 depth and varying width, fixed across all layers. 2
Equivariant classification. For classification, we chose to learn the multidimensional knapsack problem, which is known to be NP-hard. We are given a set of 4-vectors, represented byX ∈ Rn×4, and our goal is to learn the equivariant classification function f : Rn×4 → {0, 1}n defined by the following optimization problem:
f(X) = argmax z n∑ i=1 xi1zi
s.t. n∑ i=1 xijzi ≤ wj , j = 2, 3, 4
zi ∈ {0, 1} , i ∈ [n]
Intuitively, given a set of vectorsX ∈ Rn×4, (X)ij = xij , where each row represents an element in a set, our goal is to find a subset maximizing the value while satisfying budget constraints. The first column ofX defines the value of each element, and the three other columns the costs.
To evaluate the success of a trained model we record the percentage of sets for which the predicted subset is such that all the budget constrains are satisfied and the value is within 10% of the optimal value. In Appendix C we detail how we generated this dataset.
Equivariant regression. The first equivariant function we considered for regression is f(X) = 1 ∑_{i=1}^{n} ∑_{j=1}^{k} (X_{i,j} − 1/2)^2. Hanin & Sellke (2017) showed this function cannot be approximated by MLPs of small width. We drew 10k training examples and 1k test examples i.i.d. from a N(1/2, 1) distribution (per entry of X).
²The code can be found at https://github.com/NimrodSegol/On-Universal-Equivariant-Set-Networks
[Figure 2: train and test results for the Knapsack test, the Fiedler test, and the ∑_{x∈X}(x − 1/2)^2 test.]
The second equivariant function we considered is defined on point clouds X ∈ R^{n×3}. For each point cloud we computed a graph by connecting every point to its 10 nearest neighbors. We then computed the absolute value of the first non-trivial eigenvector of the graph Laplacian. We used the ModelNet dataset (Wu et al., 2015), which contains ∼9k training meshes and ∼2k test meshes. The point clouds are generated by randomly sampling 512 points from each mesh.
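A sketch of how such a regression target can be computed with NumPy/SciPy follows; symmetrizing the directed k-NN graph, using the unnormalized Laplacian, and assuming the graph is connected (so that column 1 is the first non-trivial eigenvector) are illustrative assumptions and not necessarily the exact pipeline used.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

def fiedler_target(points, k=10):
    # absolute value of the first non-trivial eigenvector of the Laplacian of the k-NN graph
    n = points.shape[0]
    _, idx = cKDTree(points).query(points, k=k + 1)          # k+1 because each point is its own nearest neighbor
    A = np.zeros((n, n))
    A[np.repeat(np.arange(n), k), idx[:, 1:].ravel()] = 1.0
    A = np.maximum(A, A.T)                                    # symmetrize the directed k-NN graph
    _, eigvecs = eigh(laplacian(A))
    return np.abs(eigvecs[:, 1])

target = fiedler_target(np.random.rand(512, 3))               # one regression target per point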
Result summary. Figure 2 summarizes train and test accuracy of the models after training (training details in Appendix C) as a function of the network width ω. We tested 15 ω values equidistant in [5, nk_in/2].
As can be seen in the graphs, in all three datasets the equivariant universal models (PointNetST, PointNetQT , DeepSets, PointNetSeg) achieved comparable accuracy. PointNet, which is not equivariant universal, consistently achieved inferior performance compared to the universal models, as expected by the theory developed in this paper. The non-equivariant MLP, although universal, used the same width (i.e., same number of parameters) as the equivariant models and was able to over-fit only on one train set (the quadratic function); its performance on the test sets was inferior by a large margin to the equivariant models. We also note that in general the GraphNet model achieved comparable results to the equivariant universal models but was still outperformed by the DeepSets model.
An interesting point is that although the width used in the experiments is much smaller than the bound k_out + k_in + (n+k_in choose k_in) established by Theorem 1, the universal models are still able to learn the functions we tested on well. This raises the question of the tightness of this bound, which we leave to future work.
6 CONCLUSIONS
In this paper we analyze several set equivariant neural networks and compare their approximation power. We show that while vanilla PointNet (Qi et al., 2017) is not equivariant universal, adding a single linear transmission layer makes it equivariant universal. Our proof strategy is based on a characterization of polynomial equivariant functions. As a corollary we show that the DeepSets model
(Zaheer et al., 2017) and PointNetSeg (Qi et al., 2017) are equivariant universal. Experimentally, we tested the different models on several classification and regression tasks, finding that adding a single linear transmission layer to PointNet makes a significant positive impact on performance.
7 ACKNOWLEDGEMENTS
This research was supported in part by the European Research Council (ERC Consolidator Grant, LiftMatch 771136) and the Israel Science Foundation (Grant No. 1830/17).
A APPROXIMATING GRAPH CONVOLUTION LAYER WITH DEEPSETS
To test the ability of an equivariant universal model to approximate a graph convolution layer, we conducted an experiment where we applied a single graph convolution layer (see Appendix B for a full description of the graph convolution layers used in this paper) with 3 input features and 10 output features. We constructed a k-NN graph by taking 10 neighbors. We sampled 1000 examples in R^{100×3} i.i.d. from a N(1/2, 1) distribution (per entry of X). The results are summarized in Figure 3. We regressed to the output of a graph convolution layer using the smooth L1 loss.
B DESCRIPTION OF LAYERS
B.1 QUADRATIC LAYER
One potential application of Theorem 2 is augmenting an equivariant neural network (Equation 7) with equivariant polynomial layers P : R^{n×k} → R^{n×l} of some maximal degree d. This can be done in the following way: look for all solutions α, β_1, β_2, . . . ∈ N^k of |α| + ∑_i |β_i| ≤ d. Any such solution gives a basis element of the form p(X) = ⌈x^α⌉ ∏_j ( ∑_{i=1}^{n} x_i^{β_j} ).
In the paper we tested PointNetQT, an architecture that adds to PointNet a single quadratic equivariant layer. We opted to use only the quadratic transmission operators: for a matrix X ∈ R^{n×k} we define L(X) ∈ R^{n×k} as follows:
L(X) = XW_1 + 11^T X W_2 + (11^T X) ⊙ (11^T X) W_3 + (X ⊙ X) W_4 + (11^T X) ⊙ X W_5,
where ⊙ is point-wise multiplication and W_i ∈ R^{k×k}, i ∈ [5], are the learnable parameters.
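A minimal PyTorch sketch of this quadratic layer follows; the initialization scale is arbitrary and bias terms are omitted.
import torch
import torch.nn as nn

class QuadraticTransmission(nn.Module):
    # L(X) = X W1 + 11^T X W2 + (11^T X)⊙(11^T X) W3 + (X⊙X) W4 + (11^T X)⊙X W5, with ⊙ point-wise
    def __init__(self, k):
        super().__init__()
        self.W = nn.ParameterList([nn.Parameter(torch.randn(k, k) / k ** 0.5) for _ in range(5)])

    def forward(self, X):                                    # X: (n, k)
        S = X.sum(dim=0, keepdim=True).expand_as(X)          # each row of 11^T X equals the column-sums of X
        return X @ self.W[0] + S @ self.W[1] + (S * S) @ self.W[2] + (X * X) @ self.W[3] + (S * X) @ self.W[4]

out = QuadraticTransmission(k=4)(torch.rand(50, 4))          # permuting the rows of X permutes the rows of out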
B.2 GRAPH CONVOLUTION LAYER
We implement a graph convolution layer as follows:
L(X) = BXW_2 + XW_1 + 1c^T,
with W_1, W_2, c learnable parameters. The matrix B is defined as in Kipf & Welling (2016) from the k-NN graph of the set: B = D^{−1/2} A D^{−1/2}, where D is the degree matrix of the graph and A is the adjacency matrix of the graph with added self-connections.
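A NumPy sketch of constructing B from a set of feature vectors follows; symmetrizing the directed k-NN graph is an assumption for illustration.
import numpy as np
from scipy.spatial import cKDTree

def normalized_adjacency(X, k=10):
    # B = D^{-1/2} A D^{-1/2}, where A is the symmetrized k-NN adjacency with added self-connections
    n = X.shape[0]
    _, idx = cKDTree(X).query(X, k=k + 1)
    A = np.zeros((n, n))
    A[np.repeat(np.arange(n), k), idx[:, 1:].ravel()] = 1.0
    A = np.maximum(A, A.T) + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]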
C IMPLEMENTATION DETAILS
Knapsack data generation. We constructed a dataset of 10k training examples and 1k test examples consisting of 50 × 4 matrices. We took w_1 = 100, w_2 = 80, w_3 = 50. To generate X ∈ R^{50×4}, we draw an integer uniformly at random between 1 and 100 and randomly choose 50 integers between 1 and that integer as the first column of X. We also randomly chose an integer between 1 and 25 and then randomly chose 150 integers in that range as the three last columns of X. The labels for each input X were computed by a standard dynamic programming approach, see Martello & Toth (1990).
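A sketch of such a label computation with a dynamic program over the three budget dimensions follows; this is an illustrative implementation of the standard approach and may differ from the exact procedure used to build the dataset.
import numpy as np

def solve_multidim_knapsack(X, w=(100, 80, 50)):
    # X: (n, 4) integer matrix, column 0 = values, columns 1-3 = costs; returns (optimal value, 0/1 labels)
    n = X.shape[0]
    values, costs = X[:, 0].astype(int), X[:, 1:].astype(int)
    shape = tuple(b + 1 for b in w)
    best = np.zeros(shape, dtype=np.int64)                   # best[b] = max value achievable with residual budget b
    take = np.zeros((n,) + shape, dtype=bool)
    for i in range(n):
        c1, c2, c3 = costs[i]
        if c1 > w[0] or c2 > w[1] or c3 > w[2]:
            continue                                         # item can never fit
        shifted = np.full(shape, -1, dtype=np.int64)
        shifted[c1:, c2:, c3:] = best[:shape[0] - c1, :shape[1] - c2, :shape[2] - c3] + values[i]
        take[i] = shifted > best                             # does taking item i improve the value at this budget?
        best = np.where(take[i], shifted, best)
    z = np.zeros(n, dtype=int)                               # backtrack to recover the selected subset
    b = np.array(w)
    for i in range(n - 1, -1, -1):
        if take[i][tuple(b)]:
            z[i] = 1
            b -= costs[i]
    return int(best[tuple(w)]), z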
Optimization. We implemented the experiments in Pytorch Paszke et al. (2017) with the Adam Kingma & Ba (2014) optimizer for learning. For the classification we used the cross entropy loss and trained for 150 epochs with learning rate 0.001, learning rate decay of 0.5 every 100 epochs and batch size 32. For the quadratic function regression we trained for 150 epochs with learning rate of 0.001, learning rate decay 0.1 every 50 epochs and batch size 64; for the regression to the leading eigenvector we trained for 50 epochs with learning rate of 0.001 and batch size 32. To regress to the output of a single graph convolution layer we trained for 200 epochs with learning rate of 0.001 and batch size 32. | 1. What is the main contribution of the paper regarding deep set networks?
2. What are the limitations of the proposed approach in terms of function family approximation?
3. How does the reviewer assess the significance of the proposed permutation equivariant function?
4. What are some examples of functions that the authors' proposed function can approximate?
5. How does the reviewer suggest improving the experimental metrics and plots?
6. What is the reviewer's opinion on the name PointNetST and its relation to DeepSet and PointNet?
7. Is there any confusion regarding the paper's notation, such as the use of different dimensions for the vector x? | Review | Review
TLDR: The function these deep set networks can approximate is too limited to call these networks universal equivariant set networks. Authors should scope the paper to the specific function family these networks can approximate. No baseline comparison with GraphNets.
The paper proposes theoretical analysis on a set of networks that process features independently through MLPs + global aggregation operations. However, the function of interest is limited to a small family of affine equivariant transformations.
A more general function is
\begin{equation}
P(X)_i = Ax_i + \sum_{j \in N(x_i, X)} B_{(x_j, x_i)} x_j + c
\end{equation}
where $N(x_i, X)$ is the set of index of neighbors within the set $X$. It is trivial to show that this function is permutation equivariant.
Then, can the function family the authors used in the paper approximate this function? No.
Can the proposed permutation equivariant function represent all function the authors used in the paper? Yes.
1) If $B=0$, then the proposed function becomes MLP.
2) If $A=0, N(x_i, X) = [n]$ and $B_{(x_j, x_i)} \leftarrow B$, then this is $\mathbf{1}\mathbf{1}^TXB$, the global aggregation function.
Also, this is the actual function that a lot of people are interested in. Let me go over few more examples.
3) If $N(x_i, X) = $adjacency on a graph and $B_{(x_j, x_i)} \leftarrow B$, then this is a graph neural network "convolution" (it is not a convolution)
Example adjacency $N(x_i, X) = \{j \;| \; \|x_i - x_j\|_p < \delta, x_j \in X\}$.
\begin{equation}
\text{GraphOp}(X)_i = Ax_i + \sum_{j \in \{j \;| \; \|x_i - x_j\|_p < \delta, x_j \in X\}} Bx_j + c
\end{equation}
4) If $x_i = [r,g,b,u,v]$ where $[r,g,b]$ is the color, $[u,v]$ is the pixel coordinate and $N(x_i, X) =$ pixel neighbors within some kernel size, $B(x_j, x_i)$ to be the block diagonal matrix only for the first three dimensions and 0 for the rest, then this is the 2D convolution.
Again, the above function is a more general permutation equivariant function that can represent: a graph neural network layer, a convolution, MLP, global pooling and is one of the most widely used functions in the ML community, not MLP + global aggregation.
Regarding the experiment metrics and plots:
On the Knapsack test, the metric of interest is not the accuracy of individual prediction. Rather, whether the network has successfully predicted the optimal solution, or how close the prediction is to the solution.
For example: success rate within the epsilon radius of the optimal solution while satisfying all the constraints. Fail otherwise. If these networks can truly solve these problems, authors should report the success rate while varying the threshold, not individual accuracy of the items which can be arbitrarily high by violating constraints.
Also, the authors should compare with few more graphnet + transmission layer (GraphNetST) baselines with the graph layers: $P(X)_i = Ax_i + \sum_{j \in N(x_i, X)} Bx_j + c$ and the same single transmission layer $\mathbf{1}\mathbf{1}^TXB$ in PointNetST.
PointNet is a specialization of graphnets and GraphNetST should be added as a baseline with reasonable adjacency.
Also experiment figures are extremely compact. Try using log scale or other lines to make the gaps wider.
Minor
I am quite confused with the name PointNetST. Authors claim that adding one DeepSet layer to a PointNet makes it PointNetST, but I see this as a special DeepSet with a single transmission layer. The convention is B -> B', not A + B -> A'. In this case, A: PointNet, B: DeepSet
Lemma 3 is too trivial.
The paper is not very self-contained. Spare a few lines of equations to define what DeepSets and PointNetSeg are in the paper and point out the difference, since these networks are used extensively throughout the paper without a proper mathematical definition.
P.2 power sum multi-symmetric polynomials. "For a vector $x \in R^K$ and a multi-index ..." I think it was moved out of the next paragraph since the same $x$ is defined again as $x \in R^n$ again in the next sentence.
Also, try using the consistent dimension for x throughout the paper, it confuses the reader. |
ICLR | Title
Approximating How Single Head Attention Learns
Abstract
Why do models often attend to salient words, and how does this evolve throughout training? We approximate model training as a two stage process: early on in training when the attention weights are uniform, the model learns to translate individual input word i to o if they co-occur frequently. Later, the model learns to attend to i while the correct output is o because it knows i translates to o. To formalize, we define a model property, Knowledge to Translate Individual Words (KTIW) (e.g. knowing that i translates to o), and claim that it drives the learning of the attention. This claim is supported by the fact that before the attention mechanism is learned, KTIW can be learned from word co-occurrence statistics, but not the other way around. Particularly, we can construct a training distribution that makes KTIW hard to learn, the learning of the attention fails, and the model cannot even learn the simple task of copying the input words to the output. Our approximation explains why models sometimes attend to salient words, and inspires a toy example where a multi-head attention model can overcome the above hard training distribution by improving learning dynamics rather than expressiveness. We end by discussing the limitation of our approximation framework and suggest future directions.
1 INTRODUCTION
The attention mechanism underlies many recent advances in natural language processing, such as machine translation Bahdanau et al. (2015) and pretraining Devlin et al. (2019). While many works focus on analyzing attention in already-trained models Jain & Wallace (2019); Vashishth et al. (2019); Brunner et al. (2019); Elhage et al. (2021); Olsson et al. (2022), little is understood about how the attention mechanism is learned via gradient descent at training time.
These learning dynamics are important, as standard, gradient-trained models can have very unique inductive biases, distinguishing them from more esoteric but equally accurate models. For example, in text classification, while standard models typically attend to salient (high gradient influence) words Serrano & Smith (2019), recent work constructs accurate models that attend to irrelevant words instead Wiegreffe & Pinter (2019); Pruthi et al. (2020). In machine translation, while the standard gradient descent cannot train a high-accuracy transformer with relatively few attention heads, we can construct one by first training with more heads and then pruning the redundant heads Voita et al. (2019); Michel et al. (2019). To explain these differences, we need to understand how attention is learned at training time.
Our work opens the black box of attention training, focusing on attention in LSTM Seq2Seq models Luong et al. (2015) (Section 2.1). Intuitively, if the model knows that the input individual word i translates to the correct output word o, it should attend to i to minimize the loss. This motivates us to investigate the model’s knowledge to translate individual words (abbreviated as KTIW), and we define a lexical probe β to measure this property.
We claim that KTIW drives the attention mechanism to be learned. This is supported by the fact that KTIW can be learned when the attention mechanism has not been learned (Section 3.2), but not the other way around (Section 3.3). Specifically, even when the attention weights are frozen to be uniform, probe β still strongly agrees with the attention weights of a standardly trained model. On the other hand, when KTIW cannot be learned, the attention mechanism cannot be learned. Particularly, we can construct a distribution where KTIW is hard to learn; as a result, the model fails to learn a simple task of copying the input to the output.
Now the problem of understanding how attention mechanism is learned reduces to understanding how KTIW is learned. Section 2.3 builds a simpler proxy model that approximates how KTIW is learned, and Section 3.2 verifies empirically that the approximation is reasonable. This proxy model is simple enough to analyze and we interpret its training dynamics with the classical IBM Translation Model 1 (Section 4.2), which translates individual word i to o if they co-occur more frequently.
To collapse this chain of reasoning, we approximate model training in two stages. Early on in training when the attention mechanism has not been learned, the model learns KTIW through word co-occurrence statistics; KTIW later drives the learning of the attention.
Using these insights, we explain why attention weights sometimes correlate with word saliency in binary text classification (Section 5.1): the model first learns to “translate” salient words into labels, and then attend to them. We also present a toy experiment (Section 5.2) where multi-head attention improves learning dynamics by combining differently initialized attention heads, even though a single head model can express the target function.
Nevertheless, “all models are wrong”. Even though our framework successfully explains and predicts the above empirical phenomena, it cannot fully explain the behavior of attention-based models, since approximations are after all less accurate. Section 6 identifies and discusses two key assumptions: (1) information of a word tends to stay in the local hidden state (Section 6.1) and (2) attention weights are free variables (Section 6.2). We discuss future directions in Section 7.
2 MODEL
Section 2.1 defines the LSTM with attention Seq2Seq architecture. Section 2.2 defines the lexical probe β, which measures the model’s knowledge to translate individual words (KTIW). Section 2.3 approximates how KTIW is learned early on in training by building a “bag of words” proxy model. Section 2.4 shows that our framework generalizes to binary classification.
2.1 MACHINE TRANSLATION MODEL
We use the dot-attention variant from Luong et al. (2015). The model maps from an input sequence {xl} with length L to an output sequence {yt} with length T . We first use LSTM encoders to embed {xl} ⊂ I and {yt} ⊂ O respectively, where I and O are input and output vocab space, and obtain encoder and decoder hidden states {hl} and {st}. Then we calculate the attention logits at,l by applying a learnable mapping from hl and st, and use softmax to obtain the attention weights αt,l:
a_{t,l} = s_t^T W h_l; α_{t,l} = e^{a_{t,l}} / ∑_{l′=1}^{L} e^{a_{t,l′}}. (1)
Next we sum the encoder hidden states {ht} weighted by the attention to obtain the “context vector” ct, concatenate it with the decoder st, and obtain the output vocab probabilities pt by applying a learnable neural network N with one hidden layer and softmax activation at the output, and train the model by minimizing the sum of negative log-likelihood of all the output words yt.
c_t = ∑_{l=1}^{L} α_{t,l} h_l; p_t = N([c_t, s_t]); L = −∑_{t=1}^{T} log p_{t,y_t}. (2)
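A minimal PyTorch sketch of one decoder step of this model follows; the hidden size, the structure of N, and the log-space output are illustrative assumptions.
import torch
import torch.nn as nn

class DotAttentionStep(nn.Module):
    # computes alpha_t (Equation 1), the context vector c_t, and log p_t = log N([c_t, s_t]) (Equation 2)
    def __init__(self, hidden, vocab_out):
        super().__init__()
        self.W = nn.Linear(hidden, hidden, bias=False)       # bilinear attention score s_t^T W h_l
        self.N = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, vocab_out), nn.LogSoftmax(dim=-1))

    def forward(self, H, s_t):                               # H: (L, hidden) encoder states, s_t: (hidden,)
        alpha_t = torch.softmax(H @ self.W(s_t), dim=0)      # attention weights over the L input positions
        c_t = alpha_t @ H                                    # context vector
        return alpha_t, self.N(torch.cat([c_t, s_t]))

step = DotAttentionStep(hidden=256, vocab_out=8000)
alpha_t, log_p_t = step(torch.rand(20, 256), torch.rand(256))
loss_t = -log_p_t[3]                                          # negative log-likelihood of the gold word y_t = 3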
2.2 LEXICAL PROBE β
We define the lexical probe β_{t,l} as: β_{t,l} := N([h_l, s_t])_{y_t}, (3)
which means “the probability assigned to the correct word yt, if the network attends only to the input encoder state hl”. If we assume that hl only contains information about xl, β closely reflects KTIW, since β can be interpreted as “the probability that xl is translated to the output yt”.
Heuristically, to minimize the loss, the attention weights α should be attracted to positions with larger βt,l. Hence, we expect the learning of the attention to be driven by KTIW (Figure 1 left). We then discuss how KTIW is learned.
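Continuing the sketch above, β_{t,l} can be read off the same output network N by feeding each encoder state in place of the context vector; this is a sketch under the same illustrative assumptions.
import torch

def lexical_probe(N, H, s_t, y_t):
    # beta_{t,l} = N([h_l, s_t])_{y_t}: probability of the gold word if the model attended only to position l
    L = H.shape[0]
    log_p = N(torch.cat([H, s_t.expand(L, -1)], dim=-1))      # one output distribution per input position
    return log_p[:, y_t].exp()

# e.g. beta_t = lexical_probe(step.N, H, s_t, y_t) with the DotAttentionStep sketch above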
2.3 EARLY DYNAMICS OF LEXICAL KNOWLEDGE
To approximate how KTIW is learned early on in training, we build a proxy model by making a few simplifying assumptions. First, since attention weights are uniform early on in training, we replace the attention distribution with a uniform one. Second, since we are defining individual word translation, we assume that information about each word is localized to its corresponding hidden state. Therefore, similar to Sun & Lu (2020), we replace h_l with an input word embedding e_{x_l} ∈ R^d, where e represents the word embedding matrix and d is the embedding dimension. Third, to simplify analysis, we assume N only contains one linear layer W ∈ R^{|O|×d} before softmax activation and ignore the decoder state s_t. Putting these assumptions together, we now define a new proxy model that produces output vocab probability p_t := σ( (1/L) ∑_{l=1}^{L} W e_{x_l} ).
On a high level, this proxy averages the embeddings of the input “bag of words”, and produces a distribution over output vocabs to predict the output “bag of words”. This implies that the sets of input and output words for each sentence pair are sufficient statistics for this proxy. The probe β^{px} can be similarly defined as β^{px}_{t,l} := σ(W e_{x_l})_{y_t}.
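A minimal PyTorch sketch of this proxy follows; the vocabulary sizes, d = 256, and the loss handling are illustrative.
import torch
import torch.nn as nn

class BagOfWordsProxy(nn.Module):
    # p_t = softmax(W (1/L) sum_l e_{x_l}): every output word is predicted from the averaged input embeddings
    def __init__(self, vocab_in, vocab_out, d=256):
        super().__init__()
        self.e = nn.Embedding(vocab_in, d)
        self.W = nn.Linear(d, vocab_out, bias=False)

    def forward(self, x):                                    # x: (L,) input word indices
        return torch.log_softmax(self.W(self.e(x).mean(dim=0)), dim=-1)

proxy = BagOfWordsProxy(vocab_in=8000, vocab_out=8000)
x, y = torch.randint(0, 8000, (20,)), torch.randint(0, 8000, (15,))
loss = -proxy(x)[y].sum()                                    # the same output distribution scores every y_t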
We provide more intuitions on how this proxy learns in Section 4.
2.4 BINARY CLASSIFICATION MODEL
Binary classification can be reduced to “machine translation”, where T = 1 and |O| = 2. We drop the subscript t = 1 when discussing classification.
We use the standard architecture from Wiegreffe & Pinter (2019). After obtaining the encoder hidden states {ht}, we calculate the attention logits al by applying a feed-forward neural network with one hidden layer and take the softmax of a to obtain the attention weights α:
a_l = v^T ReLU(Q h_l); α_l = e^{a_l} / ∑_{l′=1}^{L} e^{a_{l′}}, (4)
where Q and v are learnable.
We sum the hidden states {hl} weighted by the attention, feed it to a final linear layer and apply the sigmoid activation function (σ) to obtain the probability for the positive class
p_pos = σ( W^T ∑_{l=1}^{L} α_l h_l ) = σ( ∑_{l=1}^{L} α_l W^T h_l ). (5)
Similar to the machine translation model (Section 2.1), we define the “lexical probe”:
β_l := σ( (2y − 1) W^T h_l ), (6)
where y ∈ {0, 1} is the label and 2y − 1 ∈ {−1, 1} controls the sign. On a high level, Sun & Lu (2020) focuses on binary classification and provides almost the exact same arguments as ours. Specifically, their polarity score “s_l” equals β_l / (1 − β_l) in our context, and they provide a more subtle analysis of how the attention mechanism is learned in binary classification.
3 EMPIRICAL EVIDENCE
We provide evidence that KTIW drives the learning of the attention early on in training: KTIW can be learned when the attention mechanism has not been learned (Section 3.2), but not the other way around (Section 3.3).
3.1 MEASURING AGREEMENT
We start by describing how to evaluate the agreement between quantities of interest, such as α and β. For any input-output sentence pair (x^m, y^m), for each output index t, α^m_t, β^m_t, β^{px,m}_t ∈ R^{L^m} all associate each input position l with a real number. Since attention weights and word alignment tend to be sparse, we focus on the agreement of the highest-valued position. Suppose u, v ∈ R^L; we formally define the agreement of v with u as:
A(u, v) := 1[ |{ j : v_j > v_{argmax_i u_i} }| < 5% · L ], (7)
which means “whether the highest-valued position (dimension) in u is in the top 5% highest-valued positions in v”. We average the A values across all output words on the validation set to measure the agreement between two model properties. We also report Kendall’s τ rank correlation coefficient in Appendix 2 for completeness.
We denote its random baseline as Â. Â is close to but not exactly 5% because of integer rounding.
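A NumPy sketch of this statistic follows; the handling of ties at the threshold value and the rounding of the 5% cutoff are assumptions.
import numpy as np

def agreement(u, v, frac=0.05):
    # A(u, v) = 1 iff fewer than frac*L positions of v lie strictly above v at the argmax of u
    L = len(u)
    return float(np.sum(v > v[np.argmax(u)]) < frac * L)

# averaged over all output words of the validation set to compare, e.g., alpha against beta^{uf}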
Contextualized Agreement Metric. However, since different datasets have different sentence length distributions and variance of attention weights caused by random seeds, it might be hard to directly interpret this agreement metric. Therefore, we contextualize this metric with model performance. We use the standard method to train a model till convergence using T steps and denote its attention weights as α; next we train the same model from scratch again using another random seed. We denote its attention weights at training step τ as α̂(τ) and its performance as p̂(τ). Roughly speaking, when τ < T , both A(α, α̂(τ)) and p̂(τ) increase as τ increases. We define the contextualized agreement ξ as:
ξ(u, v) := p̂( inf{ τ : A(α, α̂(τ)) > A(u, v) } ). (8)
In other words, we find the training step τ0 where its attention weights α̂(τ0) and the standard attention weights α agrees more than u and v agrees, and report the performance at this iteration. We refer to the model performance when training finishes (τ = T ) as ξ∗.
Datasets. We evaluate the agreement metrics A and ξ on multiple machine translation and text classification datasets. For machine translation, we use Multi-30k (En-De), IWSLT’14 (De-En), and News Commentary v14 (En-Nl, En-Pt, and It-Pt). For text classification, we use IMDB Sentiment Analysis, AG News Corpus, 20 Newsgroups (20 NG), Stanford Sentiment Treebank, Amazon review,
and Yelp Open Data Set. All of them are in English. The details and citations of these datasets can be seen in the Appendix A.5. We use token accuracy1 to evaluate the performance of translation models and accuracy to evaluate the classification models.
Due to space limit we round to integers and include a subset of datasets in Table 1 for the main paper. Appendix Table 4 includes the full results.
3.2 KTIW LEARNS UNDER UNIFORM ATTENTION
Even when the attention mechanism has not been learned, KTIW can still be learned. We train the same model architecture with the attention weights frozen to be uniform, and denote its lexical probe as βuf . Across all tasks, A(α, βuf) and A(βuf , βpx) 2 significantly outperform the random baseline Â, and the contextualized agreement ξ(α, βuf) is also non-trivial. This indicates that 1) the proxy we built in Section 2.3 approximates KTIW and 2) even when the attention weights are uniform, KTIW is still learned.
3.3 ATTENTION FAILS WHEN KTIW FAILS
We consider a simple task of copying from the input to the output, and each input is a permutation of the same set of 40 vocab types. Under this training distribution, the proxy model provably cannot learn: every input-output pair contains the exact same set of input-output words.3 As a result, our framework predicts that KTIW is unlikely to be learned, and hence the learning of attention is likely to fail.
The training curves of learning to copy the permutations are in Figure 2 left, colored in red: the model sometimes fails to learn. For the control experiment, if we randomly sample and permute 40 vocabs from 60 vocab types as training samples, the model successfully learns (blue curve) from this distribution every time. Therefore, even if the model is able to express this task, it might fail to learn it when KTIW is not learned. The same qualitative conclusion holds for the training distribution that mixes permutations of two disjoint sets of words (Figure 2 middle), and Appendix A.3 illustrates the intuition.
For binary classification, it follows from the model definition that attention mechanism cannot be learned if KTIW cannot be learned, since
p_correct = σ( ∑_{l=1}^{L} α_l σ^{−1}(β_l) ); σ(x) = 1 / (1 + e^{−x}), (9)
and the model needs to attend to positions with higher β in order to predict correctly and minimize the loss. For completeness, we include results where we freeze β and find that the learning of the attention fails in Appendix A.6.
1 Appendix Tables 5, 3, and 7 include results for BLEU. 2 Empirically, β^{px} converges to the unigram weights of a bag-of-words logistic regression model, and hence β^{px} does capture an interpretable notion of “keywords” (Appendix A.10). 3 We provide more intuitions on this in Section 4.
4 CONNECTION TO IBM MODEL 1
Section 2.3 built a simple proxy model to approximate how KTIW is learned when the attention weights are uniform early on in training, and Section 3.2 verified that such an approximation is empirically sound. However, it is still hard to intuitively reason about how this proxy model learns. This section provides more intuitions by connecting its initial gradient (Section 4.1) to the classical IBM Model 1 alignment algorithm Brown et al. (1993) (Section 4.2).
4.1 DERIVATIVE AT INITIALIZATION
We continue from the end of Section 2.3. For each input word i and output word o, we are interested in understanding the probability that i assigns to o, defined as:
θ^{px}_{i,o} := σ(W e_i)_o. (10)
This quantity is directly tied to β^{px}, since β^{px}_{t,l} = θ^{px}_{x_l, y_t}. Using super-script m to index sentence pairs in the dataset, the total loss L is:
L = −∑_m ∑_{t=1}^{T^m} log( σ( (1/L^m) ∑_{l=1}^{L^m} W e_{x^m_l} )_{y^m_t} ). (11)
Suppose each ei or Wo is independently initialized from a normal distribution N (0, Id/d) and we minimize L over W and e using gradient flow, then the value of e and W are uniquely defined for each continuous time step τ . By some straightforward but tedious calculations (details in Appendix A.2), the derivative of θi,o when the training starts is:
lim_{d→∞} ∂θ^{px}_{i,o}/∂τ (τ = 0) →_p 2( C^{px}_{i,o} − (1/|O|) ∑_{o′∈O} C^{px}_{i,o′} ), (12)
where →_p means convergence in probability and C^{px}_{i,o} is defined as
C^{px}_{i,o} := ∑_m ∑_{l=1}^{L^m} ∑_{t=1}^{T^m} (1/L^m) 1[x^m_l = i] 1[y^m_t = o]. (13)
Equation 12 tells us that β^{px}_{t,l} = θ^{px}_{x_l, y_t} is likely to be larger if C_{x_l, y_t} is large. The definition of C seems hard to interpret from Equation 13, but in the next subsection we will find that this quantity naturally corresponds to the “count table” used in the classical IBM 1 alignment learning algorithm.
4.2 IBM MODEL 1 ALIGNMENT LEARNING
The classical alignment algorithm aims to learn which input word is responsible for each output word (e.g. knowing that y_2 “movie” aligns to x_2 “Film” in Figure 1 upper left), from a set of input-output sentence pairs. IBM Model 1 Brown et al. (1993) starts with a 2-dimensional count table C^{IBM} indexed by i ∈ I and o ∈ O, denoting input and output vocabs. Whenever vocabs i and o co-occur in an input-output pair, we add 1/L to the C^{IBM}_{i,o} entry (steps 1 and 2 in Figure 1 right). After updating C^{IBM} for the entire dataset, C^{IBM} is exactly the same as C^{px} defined in Equation 13. We drop the super-script of C to keep the notation uncluttered.
Given C, the classical model estimates a probability distribution of “what output word o does the input word i translate to” (Figure 1 right step 3) as
Trans(o|i) = C_{i,o} / ∑_{o′} C_{i,o′}. (14)
In a pair of sequences ({x_l}, {y_t}), the probability β^{IBM} that x_l is translated to the output y_t is:
β^{IBM}_{t,l} := Trans(y_t | x_l), (15)
and the alignment probability α^{IBM} that “x_l is responsible for outputting y_t versus other x_{l′}” is
α^{IBM}(t, l) = β^{IBM}_{t,l} / ∑_{l′=1}^{L} β^{IBM}_{t,l′}, (16)
which monotonically increases with respect to β^{IBM}_{t,l}. See Figure 1 right, step 5.
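A small sketch of this counting procedure on a toy corpus follows; it uses plain Python dictionaries and omits smoothing.
from collections import defaultdict

def ibm1_count_table(pairs):
    # C[i][o] += 1/L for every co-occurrence of input word i and output word o in a sentence pair
    C = defaultdict(lambda: defaultdict(float))
    for src, tgt in pairs:
        for i in src:
            for o in tgt:
                C[i][o] += 1.0 / len(src)
    return C

def trans(C, o, i):
    # Trans(o | i) = C[i][o] / sum_{o'} C[i][o']
    total = sum(C[i].values())
    return C[i][o] / total if total else 0.0

corpus = [(["das", "Kino"], ["the", "cinema"]), (["das", "Haus"], ["the", "house"])]
C = ibm1_count_table(corpus)
print(trans(C, "the", "das"), trans(C, "house", "Kino"))     # 0.5 and 0.0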
4.3 VISUALIZING AFOREMENTIONED TASKS
Figure 1 (right) visualizes the count table C for the machine translation task, and illustrates how KTIW is learned and drives the learning of attention. We provide similar visualization for why KTIW is hard to learn under a distribution of vocab permutations (Section 3.3) in Figure 3, and how word polarity is learned in binary classification (Section 2.4) in Figure 4.
5 APPLICATION
5.1 INTERPRETABILITY IN CLASSIFICATION
We use gradient based method Ebrahimi et al. (2018) to approximate the influence ∆l for each input word xl. The column A(∆, βuf) reports the agreement between ∆ and βuf , and it significantly outperforms the random baseline. Since KTIW initially drives the attention mechanism to be learned, this explains why attention weights are correlated with word saliency on many classification tasks, even though the training objective does not explicitly reward this.
5.2 MULTI-HEAD IMPROVES TRAINING DYNAMICS
We saw in Section 3.3 that learning to copy sequences under a distribution of permutations is hard and the model can fail to learn; however, sometimes it is still able to learn. Can we improve learning and overcome this hard distribution by ensembling several attention parameters together?
We introduce a multi-head attention architecture by summing the context vector ct obtained by each head. Suppose there are K heads each indexed by k, similar to Section 2.1:
a^{(k)}_{t,l} = s_t^T W^{(k)} h_l; α^{(k)}_{t,l} = e^{a^{(k)}_{t,l}} / ∑_{l′=1}^{L} e^{a^{(k)}_{t,l′}}, (17)
and the context vector and final probability pt defined as:
c^{(k)}_t = ∑_{l=1}^{L} α^{(k)}_{t,l} h_l; p_t = N([ ∑_{k=1}^{K} c^{(k)}_t, d_t ]), (18)
where W (k) are different learn-able parameters.
We call W^{(k)}_{init} a good initialization if training with this single head converges, and bad otherwise. We use rejection sampling to find good/bad head initializations and combine them to form 8-head (K = 8) attention models. We experiment with 3 scenarios: (1) all head initializations are bad, (2) only one initialization is good, and (3) initializations are sampled independently at random.
Figure 2 right presents the training curves. If all head initializations are bad, the model fails to converge (red). However, as long as one of the eight initializations is good, the model can converge (blue). As the number of heads increases, the probability that all initializations are bad is exponentially small if all initializations are sampled independently; hence the model converges with very high probability (green). In this experiment, multi-head attention improves not by increasing expressiveness, since one head is sufficient to accomplish the task, but by improving the learning dynamics.
6 ASSUMPTIONS
We revisit the approximation assumptions used in our framework. Section 6.1 discusses whether the lexical probe βt,l necessarily reflects local information about input word xl, and Section 6.2 discusses whether attention weights can be freely optimized to attend to large β. These assumptions are accurate enough to predict phenomenon in Section 3 and 5, but they are not always true and hence warrant more future researches. We provide simple examples where these assumptions might fail.
6.1 β REMAINS LOCAL
We use a toy classification task to show that early on in training, as expected, β^{uf} is larger near positions that contain the keyword. However, unintuitively, β^{uf}_L (β at the last position in the sequence) will become the largest if we train the model for too long under uniform attention weights.
In this toy task, each input is a length-40 sequence of words sampled from {1, . . . , 40} uniformly at random; a sequence is positive if and only if the keyword “1” appears in the sequence. We restrict “1”
to appear only once in each positive sequence, and use rejection sampling to balance positive and negative examples. Let l∗ be the position where xl∗ = 1.
For the positive sequences, we examine the log-odds ratio γ_l before the sigmoid activation in Equation 5, since the β will all be close to 1 and comparing γ would be more informative: γ_l := log( β^{uf}_l / (1 − β^{uf}_l) ).
We measure four quantities: 1) γ_{l*}, the log-odds ratio if the model only attends to the keyword position, 2) γ_{l*+1}, one position after the keyword position, 3) γ̄ := (1/L) ∑_{l=1}^{L} γ_l, if attention weights are uniform, and 4) γ_L if the model attends to the last hidden state. If γ_l only contains information about word x_l, we should expect:
Hypothesis 1 : γ_{l*} ≫ γ̄ ≫ γ_L ≈ γ_{l*+1}. (19)
However, if we accept the conventional wisdom that hidden states contain information about nearby words Khandelwal et al. (2018), we should expect:
Hypothesis 2 : γ_{l*} ≫ γ_{l*+1} ≫ γ̄ ≈ γ_L. (20)
To verify these hypotheses, we plot how γl∗ , γl∗+1, γ̄, and γL evolve as training proceeds in Figure 5. Hypothesis 2 is indeed true when training starts; however, we find the following to be true asymptotically:
Observation 3 : γ_L ≫ γ_{l*+1} ≫ γ̄ ≈ γ_{l*}. (21)
which is wildly different from Hypothesis 2. If we train under uniform attention weights for too long, the information about keywords can freely flow to other non-local hidden states.
6.2 ATTENTION WEIGHTS ARE FREE VARIABLES
In Section 2.1 we assumed that attention weights α behave like free variables that can assign arbitrarily high probabilities to positions with larger β. However, α is produced by a model, and sometimes learning the correct α can be challenging.
Let π be a random permutation of integers from 1 to 40, and we want to learn the function f that permutes the input with π:
f([x_1, x_2, . . . , x_40]) := [x_{π(1)}, x_{π(2)}, . . . , x_{π(40)}]. (22) Inputs x are randomly sampled from a vocab of size 60 as in Section 3.3. Even though β^{uf} behaves exactly the same for these two tasks, sequence copying is much easier to learn than the permutation function: while the model always reaches perfect accuracy in the former setting within 300 iterations, it always fails in the latter.
This suggests that the LSTM has a built-in inductive bias towards learning monotonic attention.
7 CONCLUSIONS
Our work tries to understand the black box of attention training. Early on in training, the LSTM attention models first learn how to translate individual words from bag-of-words co-occurrence statistics, which then drives the learning of the attention. Our framework explains why attention weights obtained by standard training often correlate with saliency, and how multi-head attention can increase performance by improving the training dynamics rather than expressiveness. These phenomena cannot be explained if we treated the training process as a black box.
8 ETHICAL CONSIDERATIONS
We present a new framework for understanding and predicting behaviors of an existing technology: the attention mechanism in recurrent neural networks. We do not propose any new technologies or any new datasets that could directly raise ethical questions. However, it is useful to keep in mind that our framework is far from solving the question of neural network interpretability, and should not be interpreted as ground truth in high stake domains like medicine or recidivism. We are aware and very explicit about the limitations of our framework, which we made clear in Section 6.
9 REPRODUCABILITY STATEMENT
To promote reproducibility, we provide extensive results in the appendix and describe all experiments in detail. We also attach source code for reproducing all experiments to the supplemental of this submission.
A APPENDICES
A.1 HEURISTIC THAT α ATTENDS TO LARGER β
It is a heuristic rather than a rigorous theorem that attention α is attracted to larger β. There are two reasons. First, there is a non-linear layer after the averaging the hidden states, which can interact in an arbitrarily complex way to break this heuristic. Second, even if there are no non-linear operations after hidden state aggregation, the optimal attention that minimizes the loss does not necessarily assign any probability to the position with the largest β value when there are more than two output vocabs.
Specifically, we consider the following model: p_t = σ( W_c ∑_{l=1}^{L} α_{t,l} h_l + W_s s_t ) = σ( ∑_{l=1}^{L} α_{t,l} γ_l + γ_s ), (23)
where W_c and W_s are learnable weights, and γ is defined as:
γ_l := W_c h_l; γ_s := W_s s_t ⇒ β_{t,l} = σ(γ_l + γ_s)_{y_t}. (24)
Consider the following scenario that outputs a probability distribution p over 3 output vocabs and γs is set to 0:
p = σ(α_1 γ_1 + α_2 γ_2 + α_3 γ_3), (25)
where γl=1,2,3 ∈ R|O|=3 are the logits, α is a valid attention probability distribution, σ is the softmax, and p is the probability distribution produced by this model. Suppose
γ1 = [0, 0, 0], γ2 = [0,−10, 5], γ3 = [0, 5,−10] (26)
and the correct output is the first output vocab (i.e. the first dimension). Therefore, we take the softmax of γl and consider the first dimension:
β_{l=1} = 1/3 > β_{l=2} = β_{l=3} ≈ e^{−5}. (27)
We calculate the “optimal α” α^{opt}: the optimal attention weights that maximize the correct output word probability p_0 and minimize the loss. We find that α^{opt}_2 = α^{opt}_3 = 0.5, while α^{opt}_1 = 0. In this example, the optimal attention assigns 0 weight to the position l with the highest β_l.
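This can be checked numerically with a small NumPy computation; α^{opt} itself is taken from the text above rather than re-derived here.
import numpy as np

gammas = np.array([[0.0, 0.0, 0.0],       # gamma_1: the position with the largest beta for the correct word
                   [0.0, -10.0, 5.0],     # gamma_2
                   [0.0, 5.0, -10.0]])    # gamma_3

def correct_prob(alpha):
    # probability of the correct (first) output vocab under p = softmax(sum_l alpha_l * gamma_l)
    logits = alpha @ gammas
    return float(np.exp(logits[0]) / np.exp(logits).sum())

print(correct_prob(np.array([1.0, 0.0, 0.0])))                # attend only to the largest-beta position: ~0.33
print(correct_prob(np.array([0.0, 0.5, 0.5])))                # the optimal attention in this example:    ~0.86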
Fortunately, such pathological examples rarely occur in real datasets, and the optimal α are usually attracted to positions with higher β. We empirically verify this for the below variant of machine translation model on Multi30K.
As before, we obtain the context vector ct. Instead of concatenating ct and dt and pass it into a non-linear neural network N , we add them and apply a linear layer with softmax after it to obtain the output word probability distribution
p_t = σ(W(c_t + d_t)). (28)
This model is desirable because we can now provably find the optimal α using gradient descent (we delay the proof to the end of this subsection). Additionally, this model has comparable performance with the variant from our main paper (Section 2.1), achieving a 38.2 BLEU score vs. 37.9 for the model in our main paper. We use α^{opt} to denote the attention that minimizes the loss, and we find that A(α^{opt}, β) = 0.53; β does strongly agree with α^{opt}. Now we are left to show that we can use gradient descent to find the optimal attention weights that minimize the loss. We can rewrite p_t as
p_t = σ( ∑_{l=1}^{L} α_l W h_l + W d_t ). (29)
We define γ_l := W h_l; γ_s := W d_t. (30)
Without loss of generality, suppose the first dimension of γ1...L, γs are all 0, and the correct token we want to maximize probability for is the first dimension, then the loss for the output word is
L = log(1 + g(α)), (31) where
g(α) := ∑_{o∈O, o≠0} e^{α^T γ′_o + γ_{s,o}}, (32)
where γ′_o = [γ_{1,o}, . . . , γ_{L,o}] ∈ R^L. (33)
Since α is defined within the convex probability simplex and g(α) is convex with respect to α, the global optimum α^{opt} can be found by gradient descent.
A.2 CALCULATING ∂θ_{i,o}/∂τ
We drop the px super-script of θ to keep the notation uncluttered. We copy the loss function here to remind the readers:
L = −∑_m ∑_{t=1}^{T^m} log( σ( (1/L^m) ∑_{l=1}^{L^m} W e_{x^m_l} )_{y^m_t} ), (34)
and since we optimize W and e with gradient flow,
∂W/∂τ := −∂L/∂W; ∂e/∂τ := −∂L/∂e. (35)
We first define the un-normalized logits θ̂ and then take the softmax:
θ̂ = We, (36)
then ∂θ̂/∂τ = ∂(We)/∂τ = W (∂e/∂τ) + (∂W/∂τ) e = −W (∂L/∂e) − (∂L/∂W) e. (37)
We first analyze the first term ϵ := −W (∂L/∂e). Since ϵ ∈ R^{|I|×|O|}, we analyze each entry ϵ_{i,o}. Since differentiation and left-multiplication by the matrix W are linear, we analyze each individual loss term in Equation 34 and then sum them up.
We define
p^m := σ( (1/L^m) ∑_{l=1}^{L^m} W e_{x^m_l} ) (38)
and L^m_t := −log(p^m_{y^m_t}); ϵ^m_{t,i,o} := −W_o^T ∂L^m_t/∂e_i. (39) Hence,
L = ∑_m ∑_{t=1}^{T^m} L^m_t; ϵ_{i,o} = ∑_m ∑_{t=1}^{T^m} ϵ^m_{t,i,o}. (40)
Therefore,
−∂L^m_t/∂e_i = (1/L^m) ∑_{l=1}^{L^m} 1[x^m_l = i] ( W_{y^m_t} − ∑_{o=1}^{|O|} p^m_o W_o ). (41)
Hence,
ϵ^m_{t,i,y^m_t} = −W^T_{y^m_t} ∂L^m_t/∂e_i = (1/L^m) ∑_{l=1}^{L^m} 1[x^m_l = i] ( ||W_{y^m_t}||_2^2 − ∑_{o=1}^{|O|} p^m_o W^T_{y^m_t} W_o ), (42)
while for o′ ≠ y^m_t,
ϵ^m_{t,i,o′} = −W^T_{o′} ∂L^m_t/∂e_i = (1/L^m) ∑_{l=1}^{L^m} 1[x^m_l = i] ( W^T_{o′} W_{y^m_t} − ∑_{o=1}^{|O|} p^m_o W^T_{o′} W_o ). (43)
If W_o and e_i are each sampled i.i.d. from N(0, I_d/d), then by the central limit theorem:
∀o ≠ o′, √d · W_o^T W_{o′} →_p N(0, 1), (44)
∀o, i, √d · W_o^T e_i →_p N(0, 1), (45)
and ∀o, √d · (||W_o||_2^2 − 1) →_p N(0, 2). (46)
Therefore, when τ = 0,
lim_{d→∞} ϵ^m_{t,i,o} →_p (1/L^m) ∑_{l=1}^{L^m} 1[x^m_l = i] ( 1[y^m_t = o] − 1/|O| ). (47)
Summing over all the ϵ^m_{t,i,o} terms, we have that
ϵ_{i,o} = C_{i,o} − (1/|O|) ∑_{o′} C_{i,o′}, (48)
where C is defined as
C_{i,o} := ∑_m ∑_{l=1}^{L^m} ∑_{t=1}^{T^m} (1/L^m) 1[x^m_l = i] 1[y^m_t = o]. (49)
We find that the second term, −(∂L/∂W) e, converges to exactly the same value. Hence
∂θ̂_{i,o}/∂τ = ∂(We)_{i,o}/∂τ = 2( C_{i,o} − (1/|O|) ∑_{o′} C_{i,o′} ). (50)
Since lim_{d→∞} θ(τ = 0) →_p (1/|O|) 1^{|I|×|O|}, by the chain rule,
lim_{d→∞} ∂θ_{i,o}/∂τ (τ = 0) →_p 2( C_{i,o} − (1/|O|) ∑_{o′∈O} C_{i,o′} ). (51)
A.3 MIXTURE OF PERMUTATIONS
For this experiment, each input is either a random permutation of the set {1 . . . 40}, or a random permutation of the set {41 . . . 80}. The proxy model can easily learn whether the input words are less than 40 and decide whether the output words are all less than 40. However, βpx is still the same for every position; as a result, the attention and hence the model fail to learn. The count table C can be see in Figure 6.
A.4 ADDITIONAL TABLES FOR COMPLETENESS
We report several variants of Table 1. We chose to use token accuracy to contextualize the agreement metric in the main paper, because the errors would accumulate much more if we use a not-fully trained model to auto-regressively generate output words.
• Table 2 contains the same results as Table 1, except that its agreement score A(u, v) is now Kendall Tau rank correlation coefficient, which is a more popular metric.
• Table 4 contains the same results as Table 1, except that results are now rounded to two decimal places.
• Table 6 consists of the same results as Table 1, except that the statistics is calculated over the training set rather than the validation set.
• Table 3, Table 5, and Table 7 contain the translation results from the above 3 mentioned tables respectively, except that p̂ is defined as BLEU score rather than token accuracy, and hence the contextualized metric interpretation ξ changes correspondingly.
A.5 DATASET DESCRIPTION
We summarize the datasets that we use for classification and machine translation. See Table 8 for details on train/test splits and median sequence lengths for each dataset.
IMDB Sentiment Analysis Maas et al. (2011) A sentiment analysis data set with 50,000 (25,000 train and 25,000 test) IMDB movie reviews and their corresponding positive or negative sentiment.
AG News Corpus Zhang et al. (2015) 120,000 news articles and their corresponding topic (world, sports, business, or science/tech). We classify between the world and business articles.
20 Newsgroups 4 A news data set containing around 18,000 newsgroups articles split between 20 different labeled categories. We classify between baseball and hocky articles.
Stanford Sentiment Treebank Socher et al. (2013) A data set for classifying the sentiment of movie reviews, labeled on a scale from 1 (negative) to 5 (positive). We remove all movies labeled as 3, and classify between 4 or 5 and 1 or 2.
Multi Domain Sentiment Data set 5 Approximately 40,000 Amazon reviews from various product categories labeled with a corresponding positive or negative label. Since some of the sequences are particularly long, we only use sequences of length less than 400 words.
Yelp Open Data Set 6 20,000 Yelp reviews and their corresponding star rating from 1 to 5. We classify between reviews with rating ≤ 2 and ≥ 4. Multi-30k Elliott et al. (2016) English to German translation. The data is from translated image captions.
IWSLT’14 Cettolo et al. (2015) German to English translation. The data is from translated TED talk transcriptions.
News Commentary v14 Cettolo et al. (2015) A collection of translation news commentary datasets in different languages from WMT19 7. We use the following translation splits: English-Dutch (En-Nl), English-Portuguese (En-Pt), and Italian-Portuguese (It-Pt). In pre-processing for this dataset, we removed all purely numerical examples.
A.6 α FAILS WHEN β IS FROZEN
For each classification task we initialize a random model and freeze all parameters except for the attention layer (frozen β model). We then compute the correlation between this trained attention (defined as αfr) and the normal attention α. Table 9 reports this correlation at the iteration where αfr is most correlated with α on the validation set. As shown in Table 9, the left column is consistently lower than the right column. This indicates that the model can learn output relevance without attention, but not vice versa.
A.7 TRAINING βuf
We find that A(α, β^{uf}(τ)) first increases and then decreases as training proceeds (i.e. as τ increases), so we chose the maximum agreement over the course of training to report in Table 1. Since this trend is consistent across all datasets, our choice minimally inflates the agreement measure, and is comparable to the practice of reporting dev set results. As discussed in Section 6.1, training under uniform attention for too long might bring unintuitive results.
A.8 MODEL AND TRAINING DETAILS
Classification Our model uses dimension 300 GloVe-6B pre-trained embeddings to initialize the token embeddings where they aligned with our vocabulary. The sequences are encoded with a 1 layer bidirectional LSTM of dimension 256. The rest of the model, including the attention mechanism, is exactly as described in 2.4. Our model has 1,274,882 parameters excluding embeddings. Since each classification set has a different vocab size each model has a slightly different parameter count when considering embeddings: 19,376,282 for IMDB, 10,594,382 for AG News, 5,021,282 for 20
4http://qwone.com/ jason/20Newsgroups/ 5https://www.cs.jhu.edu/ mdredze/datasets/sentiment/ 6https://www.yelp.com/dataset 7http://www.statmt.org/wmt19/translation-task.html
Newsgroups, 4,581,482 for SST, 13,685,282 for Yelp, 12,407,882 for Amazon, and 2,682,182 for SMS.
Translation We use a two-layer bidirectional LSTM of dimension 256 to encode the source and use the last hidden state h_L as the first hidden state of the decoder. The attention and outputs are then calculated as described in Section 2. The learnable neural network before the outputs that is mentioned in Section 2 is a one-hidden-layer model with ReLU non-linearity. The hidden layer is dimension 256. Our model contains 6,132,544 parameters excluding embeddings and 8,180,544 including embeddings on all datasets.
Permutation Copying We use single directional single layer LSTM with hidden dimension 256 for both the encoder and the decoder.
Classification Procedure For all classification datasets we used a batch size of 32. We trained for 4000 iterations on each dataset. For each dataset we train on the pre-defined training set if the dataset has one. Additionally, if a dataset had a predefined test set, we randomly sample at most 4000 examples from this test set for validation. Specific dataset split sizes are given in Table 8.
Classification Evaluation We evaluated each model at steps 0, 10, 50, 100, 150, 200, 250, and then every 250 iterations after that.
Classification Tokenization We tokenized the data at the word level. We mapped all words occurring less than 3 times in the training set to <unk>. For 20 Newsgroups and AG News we mapped all non-single-digit integer ”words” to <unk>. For 20 Newsgroups we also split words on the ”_” character.
Classification Training We trained all classification models on a single GPU. Some datasets took slightly longer to train than others (largely depending on average sequence length), but each train took at most 45 minutes.
Translation Hyper Parameters For translation all hidden states in the model are dimension 256. We use the sequence to sequence architecture described above. The LSTMs used dropout 0.5.
Translation Procedure For all translation tasks we used batch size 16 when training. For IWSLT’14 and Multi-30k we used the provided dataset splits. For the News Commentary v14 datasets we did a 90-10 split of the data for training and validation respectively.
Translation Evaluation We evaluated each model at steps 0, 50, 100, 500, 1000, 1500, and then every 2000 iterations after that.
Translation Training We trained all translation models on a single GPU. IWSLT’14, and the News Commentary datasets took approximately 5-6 hours to train, and multi-30k took closer to 1 hour to train.
Translation Tokenization We tokenized both translation datasets using the Sentence-Piece tokenizer trained on the corresponding train set to a vocab size of 8,000. We used a single tokenization for source and target tokens. And accordingly also used the same matrix of embeddings for target and source sequences.
A.9 A NOTE ON SMS DATASET
In addition to the classification datasets reported in the tables, we also ran experiments on the SMS Spam Collection V.1 dataset 8. The attention learned from this dataset was very high variance, and so two different random seeds would consistently produce attentions that did not correlate much. The dataset itself was also a bit of an outlier; it had shorter sequence lengths than any of the other datasets (median sequence length 13 on train and validation set), it also had the smallest training set out of all our datasets (3500 examples), and it had by far the smallest vocab (4691 unique tokens). We decided not to include this dataset in the main paper due to these unusual results and leave further exploration to future works.
A.10 LOGISTIC REGRESSION PROXY MODEL
Our proxy model can be shown to be equivalent to a bag-of-words logistic regression model in the classification case. Specifically, we define a bag-of-words logistic regression model to be:
∀t, p_t = σ(β^{log} x). (52)
where x ∈ R^{|I|}, β^{log} ∈ R^{|O|×|I|}, and σ is the softmax function. The entries in x are the number of times each word occurs in the input sequence, normalized by the sequence length, and β^{log} is learned. This is equivalent to:
8http://www.dt.fee.unicamp.br/ tiago/smsspamcollection/
∀t, p_t = σ( (1/L) ∑_{l=1}^{L} β^{log}_{x_l} ). (53)
Here β^{log}_i indicates the i-th column of β^{log}; these are the entries in β^{log} corresponding to predictions for the i-th word in the vocab. Now it is easy to arrive at the equivalence between logistic regression and our proxy model. If we restrict the rank of β^{log} to be at most min(d, |O|, |I|) by factoring it as β^{log} = WE, where W ∈ R^{|O|×d} and E ∈ R^{d×|I|}, then the logistic regression looks like:
∀t, p_t = σ( (1/L) ∑_{l=1}^{L} W E_{x_l} ), (54)
which is equivalent to our proxy model:
∀t, p_t = σ( (1/L) ∑_{l=1}^{L} W e_{x_l} ). (55)
Since d = 256 for the proxy model, which is larger than |O| = 2 in the classification case, the proxy model is not rank limited and is hence fully equivalent to the logistic regression model. Therefore the βpx can be interpreted as ”keywords” in the same way that the logistic regression weights can.
To empirically verify this equivalence, we trained a logistic regression model with ℓ2 regularization on each of our classification datasets. To pick the optimal regularization level, we did a sweep of regularization coefficients across ten orders of magnitude and picked the one with the best validation accuracy. We report results for A(βuf , βlog) in comparison to A(βuf , βpx) in Table 10 9. Note that these numbers are similar but not exactly equivalent. The reason is that the proxy model did not use ℓ2 regularization, while logistic regression did.
9These numbers were obtained from a retrain of all the models in the main table, so for instance, the LSTM model used to produce βuf might not be exactly the same as the one used for the results in all the other tables due to random seed difference. | 1. What is the focus of the paper regarding the attention mechanism in neural networks?
2. What are the strengths of the proposed approach, particularly in terms of the analysis and connections drawn?
3. What are the weaknesses of the paper, especially regarding the assumptions made and the limitations of the analysis?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns raised by the reviewer regarding the methodology, interpretations, or conclusions of the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper aims to understand the training dynamics of the attention mechanism through the lexical probe β and its learning proxy model.
Strengths And Weaknesses
Strength:
The paper did some rigorous analysis of the attention training mechanism and drew a connection to the word alignment of the classical models.
There is a connection to interpretable classification for the mechanism described in the paper.
Weakness:
The applicability of such an understanding is not clear.
The analysis has a lot of underlying assumptions (almost all of them are pointed out by the authors) which do not usually hold.
Clarity, Quality, Novelty And Reproducibility
Why is a symmetric KL between the distributions not used for measuring agreement? Why look only at the top value, as if only it were important in this scenario? Especially when the assumption that "the word's information is only bounded into its local hidden state" does not hold, just looking at the top attention weight might not be the correct thing to do.
Does the inductive bias of the LSTM play any role in this analysis?
Even if you prove that training of attention does not occur until the training of the proxy of the KTIW, does that immediately mean KTIW drives the attention training? |
$$a_{t,l} = s_t^T W h_l; \qquad \alpha_{t,l} = \frac{e^{a_{t,l}}}{\sum_{l'=1}^{L} e^{a_{t,l'}}}. \qquad (1)$$
Next we sum the encoder hidden states {ht} weighted by the attention to obtain the “context vector” ct, concatenate it with the decoder st, and obtain the output vocab probabilities pt by applying a learnable neural network N with one hidden layer and softmax activation at the output, and train the model by minimizing the sum of negative log-likelihood of all the output words yt.
$$c_t = \sum_{l=1}^{L} \alpha_{t,l} h_l; \qquad p_t = N([c_t, s_t]); \qquad \mathcal{L} = -\sum_{t=1}^{T} \log p_{t, y_t}. \qquad (2)$$
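For concreteness, a minimal PyTorch sketch of one decoding step of Equations 1-2 follows; tensor shapes, variable names, and the output_net callable are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the dot-attention decoding step in Eqs. 1-2 (PyTorch).
import torch
import torch.nn.functional as F

def dot_attention_step(h, s_t, W, output_net):
    """h: (L, d) encoder states; s_t: (d,) decoder state; W: (d, d) bilinear map;
    output_net: maps the concatenated [context, decoder] vector to output-vocab logits."""
    a_t = h @ (W.t() @ s_t)              # a_{t,l} = s_t^T W h_l, shape (L,)
    alpha_t = F.softmax(a_t, dim=0)      # attention weights alpha_{t,l}
    c_t = alpha_t @ h                    # context vector c_t = sum_l alpha_{t,l} h_l
    p_t = F.softmax(output_net(torch.cat([c_t, s_t])), dim=0)
    return alpha_t, p_t                  # train with loss -log p_t[y_t], summed over t
```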
2.2 LEXICAL PROBE β
We define the lexical probe $\beta_{t,l}$ as:
$$\beta_{t,l} := N([h_l, s_t])_{y_t}, \qquad (3)$$
which means “the probability assigned to the correct word yt, if the network attends only to the input encoder state hl”. If we assume that hl only contains information about xl, β closely reflects KTIW, since β can be interpreted as “the probability that xl is translated to the output yt”.
Heuristically, to minimize the loss, the attention weights α should be attracted to positions with larger βt,l. Hence, we expect the learning of the attention to be driven by KTIW (Figure 1 left). We then discuss how KTIW is learned.
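A short sketch of how the probe could be computed, assuming access to the encoder states, the decoder state, and the trained output network (names are illustrative):

```python
# Sketch of the lexical probe beta_{t,l} (Eq. 3): route all "attention" to a single
# position l and read off the probability of the gold word y_t.
import torch
import torch.nn.functional as F

def lexical_probe(h, s_t, y_t, output_net):
    """h: (L, d) encoder states; s_t: (d,) decoder state; y_t: index of the gold word."""
    betas = []
    for l in range(h.shape[0]):
        logits = output_net(torch.cat([h[l], s_t]))     # attend only to position l
        betas.append(F.softmax(logits, dim=0)[y_t])     # beta_{t,l}
    return torch.stack(betas)                           # shape (L,)
```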
2.3 EARLY DYNAMICS OF LEXICAL KNOWLEDGE
To approximate how KTIW is learned early on in training, we build a proxy model by making a few simplifying assumptions. First, since attention weights are uniform early on in training, we replace the attention distribution with a uniform one. Second, since we are defining individual word translation, we assume that information about each word is localized to its corresponding hidden state. Therefore, similar to Sun & Lu (2020), we replace $h_l$ with an input word embedding $e_{x_l} \in \mathbb{R}^d$, where $e$ represents the word embedding matrix and $d$ is the embedding dimension. Third, to simplify analysis, we assume $N$ only contains one linear layer $W \in \mathbb{R}^{|O|\times d}$ before the softmax activation and ignore the decoder state $s_t$. Putting these assumptions together, we now define a new proxy model that produces the output vocab probability $p_t := \sigma\big(\frac{1}{L}\sum_{l=1}^{L} W e_{x_l}\big)$.
On a high level, this proxy averages the embeddings of the input "bag of words", and produces a distribution over output vocabs to predict the output "bag of words". This implies that the sets of input and output words for each sentence pair are sufficient statistics for this proxy. The probe $\beta^{px}$ can be similarly defined as $\beta^{px}_{t,l} := \sigma(W e_{x_l})_{y_t}$.
We provide more intuitions on how this proxy learns in Section 4.
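A minimal sketch of this proxy, assuming placeholder vocabulary sizes and embedding dimension:

```python
# Sketch of the bag-of-words proxy model from Section 2.3:
# p_t = softmax((1/L) * sum_l W e_{x_l}).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BagOfWordsProxy(nn.Module):
    def __init__(self, in_vocab, out_vocab, d=256):
        super().__init__()
        self.embed = nn.Embedding(in_vocab, d)           # word embedding matrix e
        self.W = nn.Linear(d, out_vocab, bias=False)     # the single linear layer W

    def forward(self, x):
        """x: (L,) tensor of input word indices -> one output-vocab distribution."""
        avg = self.embed(x).mean(dim=0)                  # (1/L) sum_l e_{x_l}
        return F.softmax(self.W(avg), dim=0)             # p_t (identical for every t)

    def probe(self, x, y_t):
        """beta^px_{t,l} = softmax(W e_{x_l})_{y_t} at every position l."""
        return F.softmax(self.W(self.embed(x)), dim=-1)[:, y_t]
```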
2.4 BINARY CLASSIFICATION MODEL
Binary classification can be reduced to “machine translation”, where T = 1 and |O| = 2. We drop the subscript t = 1 when discussing classification.
We use the standard architecture from Wiegreffe & Pinter (2019). After obtaining the encoder hidden states {ht}, we calculate the attention logits al by applying a feed-forward neural network with one hidden layer and take the softmax of a to obtain the attention weights α:
$$a_l = v^T\, \mathrm{ReLU}(Q h_l); \qquad \alpha_l = \frac{e^{a_l}}{\sum_{l'=1}^{L} e^{a_{l'}}}, \qquad (4)$$
where Q and v are learnable.
We sum the hidden states {hl} weighted by the attention, feed it to a final linear layer and apply the sigmoid activation function (σ) to obtain the probability for the positive class
$$p_{pos} = \sigma\Big(W^T \sum_{l=1}^{L} \alpha_l h_l\Big) = \sigma\Big(\sum_{l=1}^{L} \alpha_l W^T h_l\Big). \qquad (5)$$
Similar to the machine translation model (Section 2.1), we define the “lexical probe”:
$$\beta_l := \sigma\big((2y-1)\, W^T h_l\big), \qquad (6)$$
where $y \in \{0, 1\}$ is the label and $2y-1 \in \{-1, 1\}$ controls the sign. On a high level, Sun & Lu (2020) focuses on binary classification and provides almost the exact same arguments as ours. Specifically, their polarity score "$s_l$" equals $\frac{\beta_l}{1-\beta_l}$ in our context, and they provide a more subtle analysis of how the attention mechanism is learned in binary classification.
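A one-line sketch of this classification probe, assuming the encoder states and the final linear weights are available:

```python
# Sketch of the classification probe in Eq. 6: beta_l = sigmoid((2y - 1) * W^T h_l).
import torch

def classification_probe(h, w, y):
    """h: (L, d) encoder states; w: (d,) final linear layer weights; y: label in {0, 1}."""
    return torch.sigmoid((2 * y - 1) * (h @ w))   # one probe value per input position
```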
3 EMPIRICAL EVIDENCE
We provide evidence that KTIW drives the learning of the attention early on in training: KTIW can be learned when the attention mechanism has not been learned (Section 3.2), but not the other way around (Section 3.3).
3.1 MEASURING AGREEMENT
We start by describing how to evaluate the agreement between quantities of interest, such as $\alpha$ and $\beta$. For any input-output sentence pair $(x^m, y^m)$, for each output index $t$, $\alpha^m_t, \beta^m_t, \beta^{px,m}_t \in \mathbb{R}^{L^m}$ all associate each input position $l$ with a real number. Since attention weights and word alignment tend to be sparse, we focus on the agreement of the highest-valued position. Suppose $u, v \in \mathbb{R}^L$; we formally define the agreement of $v$ with $u$ as:
$$A(u, v) := \mathbb{1}\big[\,|\{j \mid v_j > v_{\arg\max_i u_i}\}| < 5\%\, L\,\big], \qquad (7)$$
which means "whether the highest-valued position (dimension) in $u$ is in the top 5% highest-valued positions in $v$". We average the $A$ values across all output words on the validation set to measure the agreement between two model properties. We also report Kendall's $\tau$ rank correlation coefficient in Appendix Table 2 for completeness.
We denote its random baseline as Â. Â is close to but not exactly 5% because of integer rounding.
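A direct implementation sketch of Equation 7 (the averaging over output words and the random baseline are described in the text):

```python
# Sketch of the agreement metric A(u, v) in Eq. 7.
import numpy as np

def agreement(u, v, frac=0.05):
    """1 if fewer than frac*L positions of v exceed v at the argmax of u, else 0."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    j_star = int(np.argmax(u))
    return float(np.sum(v > v[j_star]) < frac * len(v))
```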
Contextualized Agreement Metric. However, since different datasets have different sentence length distributions and variance of attention weights caused by random seeds, it might be hard to directly interpret this agreement metric. Therefore, we contextualize this metric with model performance. We use the standard method to train a model till convergence using T steps and denote its attention weights as α; next we train the same model from scratch again using another random seed. We denote its attention weights at training step τ as α̂(τ) and its performance as p̂(τ). Roughly speaking, when τ < T , both A(α, α̂(τ)) and p̂(τ) increase as τ increases. We define the contextualized agreement ξ as:
$$\xi(u, v) := \hat{p}\big(\inf\{\tau \mid A(\alpha, \hat{\alpha}(\tau)) > A(u, v)\}\big). \qquad (8)$$
In other words, we find the training step τ0 where its attention weights α̂(τ0) and the standard attention weights α agrees more than u and v agrees, and report the performance at this iteration. We refer to the model performance when training finishes (τ = T ) as ξ∗.
Datasets. We evaluate the agreement metrics A and ξ on multiple machine translation and text classification datasets. For machine translation, we use Multi-30k (En-De), IWSLT’14 (De-En), and News Commentary v14 (En-Nl, En-Pt, and It-Pt). For text classification, we use IMDB Sentiment Analysis, AG News Corpus, 20 Newsgroups (20 NG), Stanford Sentiment Treebank, Amazon review,
and Yelp Open Data Set. All of them are in English. The details and citations of these datasets can be seen in the Appendix A.5. We use token accuracy1 to evaluate the performance of translation models and accuracy to evaluate the classification models.
Due to space limit we round to integers and include a subset of datasets in Table 1 for the main paper. Appendix Table 4 includes the full results.
3.2 KTIW LEARNS UNDER UNIFORM ATTENTION
Even when the attention mechanism has not been learned, KTIW can still be learned. We train the same model architecture with the attention weights frozen to be uniform, and denote its lexical probe as βuf . Across all tasks, A(α, βuf) and A(βuf , βpx) 2 significantly outperform the random baseline Â, and the contextualized agreement ξ(α, βuf) is also non-trivial. This indicates that 1) the proxy we built in Section 2.3 approximates KTIW and 2) even when the attention weights are uniform, KTIW is still learned.
3.3 ATTENTION FAILS WHEN KTIW FAILS
We consider a simple task of copying from the input to the output, and each input is a permutation of the same set of 40 vocab types. Under this training distribution, the proxy model provably cannot learn: every input-output pair contains the exact same set of input-output words.3 As a result, our framework predicts that KTIW is unlikely to be learned, and hence the learning of attention is likely to fail.
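A small sketch of the two training distributions described here and in the control experiment below; vocabulary sizes follow the text:

```python
# Copying a permutation of one fixed 40-word vocabulary (hard for the proxy, since every
# pair has the same bag of words) vs. copying a random 40-word sample from 60 types.
import random

def permutation_copy_pair(vocab_size=40):
    x = list(range(vocab_size))
    random.shuffle(x)                  # every pair has the exact same set of words
    return x, list(x)

def control_copy_pair(sample_size=40, vocab_size=60):
    x = random.sample(range(vocab_size), sample_size)   # word sets differ across pairs
    return x, list(x)
```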
The training curves of learning to copy the permutations are in Figure 2 left, colored in red: the model sometimes fails to learn. For the control experiment, if we randomly sample and permute 40 vocabs from 60 vocab types as training samples, the model successfully learns (blue curve) from this distribution every time. Therefore, even if the model is able to express this task, it might fail to learn it when KTIW is not learned. The same qualitative conclusion holds for the training distribution that mixes permutations of two disjoint sets of words (Figure 2 middle), and Appendix A.3 illustrates the intuition.
For binary classification, it follows from the model definition that attention mechanism cannot be learned if KTIW cannot be learned, since
$$p_{correct} = \sigma\Big(\sum_{l=1}^{L} \alpha_l\, \sigma^{-1}(\beta_l)\Big); \qquad \sigma(x) = \frac{1}{1+e^{-x}}, \qquad (9)$$
and the model needs to attend to positions with higher $\beta$ in order to predict correctly and minimize the loss. For completeness, we include results where we freeze $\beta$ and find that the learning of the attention fails in Appendix A.6.

1 Appendix Tables 5, 3, and 7 include results for BLEU.
2 Empirically, $\beta^{px}$ converges to the unigram weight of a bag-of-words logistic regression model, and hence $\beta^{px}$ does capture an interpretable notion of "keywords" (Appendix A.10).
3 We provide more intuitions on this in Section 4.
4 CONNECTION TO IBM MODEL 1
Section 2.3 built a simple proxy model to approximate how KTIW is learned when the attention weights are uniform early on in training, and Section 3.2 verified that such an approximation is empirically sound. However, it is still hard to intuitively reason about how this proxy model learns. This section provides more intuitions by connecting its initial gradient (Section 4.1) to the classical IBM Model 1 alignment algorithm Brown et al. (1993) (Section 4.2).
4.1 DERIVATIVE AT INITIALIZATION
We continue from the end of Section 2.3. For each input word i and output word o, we are interested in understanding the probability that i assigns to o, defined as:
$$\theta^{px}_{i,o} := \sigma(W e_i)_o. \qquad (10)$$
This quantity is directly tied to $\beta^{px}$, since $\beta^{px}_{t,l} = \theta^{px}_{x_l, y_t}$. Using the super-script $m$ to index sentence pairs in the dataset, the total loss $\mathcal{L}$ is:
$$\mathcal{L} = -\sum_m \sum_{t=1}^{T^m} \log\Big(\sigma\Big(\frac{1}{L^m}\sum_{l=1}^{L^m} W e_{x_l^m}\Big)_{y_t^m}\Big). \qquad (11)$$
Suppose each $e_i$ or $W_o$ is independently initialized from a normal distribution $\mathcal{N}(0, I_d/d)$ and we minimize $\mathcal{L}$ over $W$ and $e$ using gradient flow; then the values of $e$ and $W$ are uniquely defined for each continuous time step $\tau$. By some straightforward but tedious calculations (details in Appendix A.2), the derivative of $\theta_{i,o}$ when the training starts is:
$$\lim_{d\to\infty} \frac{\partial \theta^{px}_{i,o}}{\partial \tau}(\tau = 0) \xrightarrow{p} 2\Big(C^{px}_{i,o} - \frac{1}{|O|}\sum_{o'\in O} C^{px}_{i,o'}\Big), \qquad (12)$$
where $\xrightarrow{p}$ denotes convergence in probability and $C^{px}_{i,o}$ is defined as
$$C^{px}_{i,o} := \sum_m \sum_{l=1}^{L^m} \sum_{t=1}^{T^m} \frac{1}{L^m}\, \mathbb{1}[x_l^m = i]\, \mathbb{1}[y_t^m = o]. \qquad (13)$$
Equation 12 tells us that $\beta^{px}_{t,l} = \theta^{px}_{x_l, y_t}$ is likely to be larger if $C_{x_l, y_t}$ is large. The definition of $C$ seems hard to interpret from Equation 13, but in the next subsection we will find that this quantity naturally corresponds to the "count table" used in the classical IBM 1 alignment learning algorithm.
4.2 IBM MODEL 1 ALIGNMENT LEARNING
The classical alignment algorithm aims to learn which input word is responsible for each output word (e.g. knowing that $y_2$ "movie" aligns to $x_2$ "Film" in Figure 1 upper left) from a set of input-output sentence pairs. IBM Model 1 Brown et al. (1993) starts with a 2-dimensional count table $C^{IBM}$ indexed by $i \in I$ and $o \in O$, denoting input and output vocabs. Whenever vocabs $i$ and $o$ co-occur in an input-output pair, we add $\frac{1}{L}$ to the $C^{IBM}_{i,o}$ entry (steps 1 and 2 in Figure 1 right). After updating $C^{IBM}$ over the entire dataset, $C^{IBM}$ is exactly the same as $C^{px}$ defined in Equation 13. We drop the super-script of $C$ to keep the notation uncluttered.
Given C, the classical model estimates a probability distribution of “what output word o does the input word i translate to” (Figure 1 right step 3) as
$$\mathrm{Trans}(o \mid i) = \frac{C_{i,o}}{\sum_{o'} C_{i,o'}}. \qquad (14)$$
In a pair of sequences $(\{x_l\}, \{y_t\})$, the probability $\beta^{IBM}$ that $x_l$ is translated to the output $y_t$ is:
$$\beta^{IBM}_{t,l} := \mathrm{Trans}(y_t \mid x_l), \qquad (15)$$
and the alignment probability $\alpha^{IBM}$ that "$x_l$ is responsible for outputting $y_t$ versus other $x_{l'}$" is
$$\alpha^{IBM}(t, l) = \frac{\beta^{IBM}_{t,l}}{\sum_{l'=1}^{L} \beta^{IBM}_{t,l'}}, \qquad (16)$$
which monotonically increases with respect to $\beta^{IBM}_{t,l}$. See Figure 1 right, step 5.
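A short sketch of these IBM Model 1 quantities; it assumes every word of the pair being aligned also appears in the pairs used to build the count table:

```python
# Count table C (Eq. 13), Trans(o|i) (Eq. 14), and alignment alpha^IBM (Eqs. 15-16).
from collections import defaultdict

def build_count_table(pairs):
    """pairs: iterable of (input_words, output_words); add 1/L per co-occurrence."""
    C = defaultdict(lambda: defaultdict(float))
    for x, y in pairs:
        for i in x:
            for o in y:
                C[i][o] += 1.0 / len(x)
    return C

def ibm1_alignment(x, y, C):
    """Return alpha^IBM[t][l] for the pair (x, y)."""
    trans = {i: {o: c / sum(C[i].values()) for o, c in C[i].items()} for i in set(x)}
    alpha = []
    for y_t in y:
        beta_row = [trans[x_l].get(y_t, 0.0) for x_l in x]   # beta^IBM_{t,l}
        z = sum(beta_row) or 1.0
        alpha.append([b / z for b in beta_row])
    return alpha
```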
4.3 VISUALIZING AFOREMENTIONED TASKS
Figure 1 (right) visualizes the count table C for the machine translation task, and illustrates how KTIW is learned and drives the learning of attention. We provide similar visualization for why KTIW is hard to learn under a distribution of vocab permutations (Section 3.3) in Figure 3, and how word polarity is learned in binary classification (Section 2.4) in Figure 4.
5 APPLICATION
5.1 INTERPRETABILITY IN CLASSIFICATION
We use gradient based method Ebrahimi et al. (2018) to approximate the influence ∆l for each input word xl. The column A(∆, βuf) reports the agreement between ∆ and βuf , and it significantly outperforms the random baseline. Since KTIW initially drives the attention mechanism to be learned, this explains why attention weights are correlated with word saliency on many classification tasks, even though the training objective does not explicitly reward this.
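A generic "gradient times embedding" saliency sketch in this spirit; this is an illustrative recipe, not necessarily the exact procedure of Ebrahimi et al. (2018):

```python
import torch

def word_influence(embedded_inputs, loss):
    """embedded_inputs: (L, d) tensor with requires_grad=True, used in the forward pass
    that produced the scalar loss; returns one influence score Delta_l per position."""
    grads, = torch.autograd.grad(loss, embedded_inputs, retain_graph=True)
    return (grads * embedded_inputs).sum(dim=-1)
```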
5.2 MULTI-HEAD IMPROVES TRAINING DYNAMICS
We saw in Section 3.3 that learning to copy sequences under a distribution of permutations is hard and the model can fail to learn; however, sometimes it is still able to learn. Can we improve learning and overcome this hard distribution by ensembling several attention parameters together?
We introduce a multi-head attention architecture by summing the context vector ct obtained by each head. Suppose there are K heads each indexed by k, similar to Section 2.1:
$$a^{(k)}_{t,l} = s_t^T W^{(k)} h_l; \qquad \alpha^{(k)}_{t,l} = \frac{e^{a^{(k)}_{t,l}}}{\sum_{l'=1}^{L} e^{a^{(k)}_{t,l'}}}, \qquad (17)$$
and the context vector and final probability pt defined as:
$$c^{(k)}_t = \sum_{l=1}^{L} \alpha^{(k)}_{t,l} h_l; \qquad p_t = N\Big(\Big[\sum_{k=1}^{K} c^{(k)}_t, d_t\Big]\Big), \qquad (18)$$
where W (k) are different learn-able parameters.
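A minimal sketch of the per-head attention and the summed context vector in Equations 17-18; shapes and names are illustrative:

```python
import torch
import torch.nn.functional as F

def multi_head_context(h, s_t, W_heads):
    """h: (L, d) encoder states; s_t: (d,) decoder state; W_heads: list of (d, d) matrices."""
    contexts = []
    for W in W_heads:
        a = h @ (W.t() @ s_t)                 # a^{(k)}_{t,l}
        alpha = F.softmax(a, dim=0)           # alpha^{(k)}_{t,l}
        contexts.append(alpha @ h)            # c^{(k)}_t
    return torch.stack(contexts).sum(dim=0)   # summed context, then fed to N with d_t
```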
We call $W^{(k)}_{init}$ a good initialization if training with this single head converges, and bad otherwise. We use rejection sampling to find good/bad head initializations and combine them to form 8-head ($K = 8$) attention models. We experiment with 3 scenarios: (1) all head initializations are bad, (2) only one initialization is good, and (3) initializations are sampled independently at random.
Figure 2 right presents the training curves. If all head initializations are bad, the model fails to converge (red). However, as long as one of the eight initializations is good, the model can converge (blue). As the number of heads increases, the probability that all initializations are bad is exponentially small if all initializations are sampled independently; hence the model converges with very high probability (green). In this experiment, multi-head attention improves not by increasing expressiveness, since one head is sufficient to accomplish the task, but by improving the learning dynamics.
6 ASSUMPTIONS
We revisit the approximation assumptions used in our framework. Section 6.1 discusses whether the lexical probe $\beta_{t,l}$ necessarily reflects local information about the input word $x_l$, and Section 6.2 discusses whether attention weights can be freely optimized to attend to large $\beta$. These assumptions are accurate enough to predict the phenomena in Sections 3 and 5, but they are not always true and hence warrant further research. We provide simple examples where these assumptions might fail.
6.1 β REMAINS LOCAL
We use a toy classification task to show that early on in training, as expected, $\beta^{uf}$ is larger near positions that contain the keyword. However, unintuitively, $\beta^{uf}_L$ ($\beta$ at the last position in the sequence) becomes the largest if we train the model for too long under uniform attention weights.
In this toy task, each input is a length-40 sequence of words sampled from {1, . . . , 40} uniformly at random; a sequence is positive if and only if the keyword “1” appears in the sequence. We restrict “1”
to appear only once in each positive sequence, and use rejection sampling to balance positive and negative examples. Let l∗ be the position where xl∗ = 1.
For the positive sequences, we examine the log-odds ratio $\gamma_l$ before the sigmoid activation in Equation 5, since $\beta$ will all be close to 1 and comparing $\gamma$ is more informative: $\gamma_l := \log\frac{\beta^{uf}_l}{1-\beta^{uf}_l}$.
We measure four quantities: 1) $\gamma_{l^*}$, the log-odds ratio if the model only attends to the keyword position, 2) $\gamma_{l^*+1}$, one position after the keyword position, 3) $\bar{\gamma} := \frac{1}{L}\sum_{l=1}^{L}\gamma_l$, if attention weights are uniform, and 4) $\gamma_L$, if the model attends to the last hidden state. If $\gamma_l$ only contains information about word $x_l$, we should expect:
$$\text{Hypothesis 1}: \quad \gamma_{l^*} \gg \bar{\gamma} \gg \gamma_L \approx \gamma_{l^*+1}. \qquad (19)$$
However, if we accept the conventional wisdom that hidden states contain information about nearby words Khandelwal et al. (2018), we should expect:
$$\text{Hypothesis 2}: \quad \gamma_{l^*} \gg \gamma_{l^*+1} \gg \bar{\gamma} \approx \gamma_L. \qquad (20)$$
To verify these hypotheses, we plot how γl∗ , γl∗+1, γ̄, and γL evolve as training proceeds in Figure 5. Hypothesis 2 is indeed true when training starts; however, we find the following to be true asymptotically:
$$\text{Observation 3}: \quad \gamma_L \gg \gamma_{l^*+1} \gg \bar{\gamma} \approx \gamma_{l^*}, \qquad (21)$$
which is wildly different from Hypothesis 2. If we train under uniform attention weights for too long, the information about keywords can freely flow to other non-local hidden states.
6.2 ATTENTION WEIGHTS ARE FREE VARIABLES
In Section 2.1 we assumed that attention weights α behave like free variables that can assign arbitrarily high probabilities to positions with larger β. However, α is produced by a model, and sometimes learning the correct α can be challenging.
Let π be a random permutation of integers from 1 to 40, and we want to learn the function f that permutes the input with π:
$$f([x_1, x_2, \ldots, x_{40}]) := [x_{\pi(1)}, x_{\pi(2)}, \ldots, x_{\pi(40)}]. \qquad (22)$$
Inputs $x$ are randomly sampled from a vocab of size 60 as in Section 3.3. Even though $\beta^{uf}$ behaves exactly the same for these two tasks, sequence copying is much easier to learn than the permutation function: while the model always reaches perfect accuracy in the former setting within 300 iterations, it always fails in the latter.
This suggests that the LSTM attention model has a built-in inductive bias towards learning monotonic attention.
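A small sketch of the two target functions being compared; both tasks see the same inputs, so the probe under uniform attention is identical:

```python
# Copying vs. applying a fixed random position permutation pi (Eq. 22).
import random

PI = list(range(40))
random.shuffle(PI)                # a fixed random permutation of positions

def copy_target(x):
    return list(x)

def permute_target(x):
    return [x[PI[l]] for l in range(len(x))]   # y_l = x_{pi(l)}
```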
7 CONCLUSIONS
Our work tries to understand the black box of attention training. Early on in training, LSTM attention models first learn how to translate individual words from bag-of-words co-occurrence statistics, which then drives the learning of the attention. Our framework explains why attention weights obtained by standard training often correlate with saliency, and how multi-head attention can increase performance by improving the training dynamics rather than expressiveness. These phenomena cannot be explained if we treated the training process as a black box.
8 ETHICAL CONSIDERATIONS
We present a new framework for understanding and predicting behaviors of an existing technology: the attention mechanism in recurrent neural networks. We do not propose any new technologies or any new datasets that could directly raise ethical questions. However, it is useful to keep in mind that our framework is far from solving the question of neural network interpretability, and should not be interpreted as ground truth in high stake domains like medicine or recidivism. We are aware and very explicit about the limitations of our framework, which we made clear in Section 6.
9 REPRODUCIBILITY STATEMENT
To promote reproducibility, we provide extensive results in the appendix and describe all experiments in detail. We also attach source code for reproducing all experiments to the supplemental of this submission.
A APPENDICES
A.1 HEURISTIC THAT α ATTENDS TO LARGER β
It is a heuristic rather than a rigorous theorem that attention α is attracted to larger β. There are two reasons. First, there is a non-linear layer after the averaging the hidden states, which can interact in an arbitrarily complex way to break this heuristic. Second, even if there are no non-linear operations after hidden state aggregation, the optimal attention that minimizes the loss does not necessarily assign any probability to the position with the largest β value when there are more than two output vocabs.
Specifically, we consider the following model:
$$p_t = \sigma\Big(W_c \sum_{l=1}^{L} \alpha_{t,l} h_l + W_s s_t\Big) = \sigma\Big(\sum_{l=1}^{L} \alpha_{t,l}\gamma_l + \gamma_s\Big), \qquad (23)$$
where Wc and Ws are learnable weights, and γ defined as:
$$\gamma_l := W_c h_l; \qquad \gamma_s := W_s s_t \;\Rightarrow\; \beta_{t,l} = \sigma(\gamma_l + \gamma_s)_{y_t}. \qquad (24)$$
Consider the following scenario that outputs a probability distribution p over 3 output vocabs and γs is set to 0:
$$p = \sigma(\alpha_1\gamma_1 + \alpha_2\gamma_2 + \alpha_3\gamma_3), \qquad (25)$$
where $\gamma_{l=1,2,3} \in \mathbb{R}^{|O|=3}$ are the logits, $\alpha$ is a valid attention probability distribution, $\sigma$ is the softmax, and $p$ is the probability distribution produced by this model. Suppose
$$\gamma_1 = [0, 0, 0], \quad \gamma_2 = [0, -10, 5], \quad \gamma_3 = [0, 5, -10], \qquad (26)$$
and the correct output is the first output vocab (i.e. the first dimension). Therefore, we take the softmax of γl and consider the first dimension:
$$\beta_{l=1} = \frac{1}{3} > \beta_{l=2} = \beta_{l=3} \approx e^{-5}. \qquad (27)$$
We calculate the "optimal $\alpha$", $\alpha^{opt}$: the attention weights that maximize the correct output word probability $p_0$ and minimize the loss. We find that $\alpha^{opt}_2 = \alpha^{opt}_3 = 0.5$, while $\alpha^{opt}_1 = 0$. In this example, the optimal attention assigns 0 weight to the position $l$ with the highest $\beta_l$.
Fortunately, such pathological examples rarely occur in real datasets, and the optimal α are usually attracted to positions with higher β. We empirically verify this for the below variant of machine translation model on Multi30K.
As before, we obtain the context vector $c_t$. Instead of concatenating $c_t$ and $d_t$ and passing them into a non-linear neural network $N$, we add them and apply a linear layer with softmax after it to obtain the output word probability distribution
$$p_t = \sigma(W(c_t + d_t)). \qquad (28)$$
This model is desirable because we can now provably find the optimal $\alpha$ using gradient descent (we delay the proof to the end of this subsection). Additionally, this model has performance comparable to the variant from our main paper (Section 2.1), achieving a 38.2 BLEU score vs. 37.9 for the model in our main paper. We use $\alpha^{opt}$ to denote the attention that minimizes the loss, and we find that $A(\alpha^{opt}, \beta) = 0.53$: $\beta$ does strongly agree with $\alpha^{opt}$. We are now left to show that we can use gradient descent to find the optimal attention weights that minimize the loss. We can rewrite $p_t$ as
$$p_t = \sigma\Big(\sum_{l=1}^{L} \alpha_l W h_l + W d_t\Big). \qquad (29)$$
We define
$$\gamma_l := W h_l; \qquad \gamma_s := W d_t. \qquad (30)$$
Without loss of generality, suppose the first dimension of γ1...L, γs are all 0, and the correct token we want to maximize probability for is the first dimension, then the loss for the output word is
$$\mathcal{L} = \log(1 + g(\alpha)), \qquad (31)$$
where
$$g(\alpha) := \sum_{o \in O,\, o \neq 0} e^{\alpha^T \gamma'_o + \gamma_{s,o}}, \qquad (32)$$
and
$$\gamma'_o = [\gamma_{1,o}, \ldots, \gamma_{l,o}, \ldots, \gamma_{L,o}] \in \mathbb{R}^L. \qquad (33)$$
Since $\alpha$ is defined within the convex probability simplex and $g(\alpha)$ is convex with respect to $\alpha$, the global optimum $\alpha^{opt}$ can be found by gradient descent.
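A numerical sketch of the counterexample in Equations 25-27; for simplicity $\alpha$ is parameterized through a softmax (so the simplex constraint holds automatically), and the hyperparameters are arbitrary:

```python
import torch

gamma = torch.tensor([[0., 0., 0.], [0., -10., 5.], [0., 5., -10.]])   # gamma_1..3 (Eq. 26)
beta = torch.softmax(gamma, dim=1)[:, 0]        # beta_l for the correct (first) vocab
z = torch.zeros(3, requires_grad=True)          # logits of alpha
opt = torch.optim.Adam([z], lr=0.1)
for _ in range(2000):
    alpha = torch.softmax(z, dim=0)
    loss = -torch.log_softmax(alpha @ gamma, dim=0)[0]
    opt.zero_grad(); loss.backward(); opt.step()
print(beta)                        # [1/3, ~e^-5, ~e^-5]: position 1 has the largest beta
print(torch.softmax(z, dim=0))     # yet the optimal alpha puts ~0.5 on positions 2 and 3
```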
A.2 CALCULATING $\partial\theta_{i,o}/\partial\tau$
We drop the px super-script of θ to keep the notation uncluttered. We copy the loss function here to remind the readers:
$$\mathcal{L} = -\sum_m \sum_{t=1}^{T^m} \log\Big(\sigma\Big(\frac{1}{L^m}\sum_{l=1}^{L^m} W e_{x_l^m}\Big)_{y_t^m}\Big), \qquad (34)$$
and since we optimize W and e with gradient flow,
$$\frac{\partial W}{\partial \tau} := -\frac{\partial \mathcal{L}}{\partial W}; \qquad \frac{\partial e}{\partial \tau} := -\frac{\partial \mathcal{L}}{\partial e}. \qquad (35)$$
We first define the un-normalized logits $\hat{\theta}$ and then take the softmax:
$$\hat{\theta} = We, \qquad (36)$$
then
$$\frac{\partial \hat{\theta}}{\partial \tau} = \frac{\partial (We)}{\partial \tau} = -W\frac{\partial \mathcal{L}}{\partial e} - \frac{\partial \mathcal{L}}{\partial W}\, e. \qquad (37)$$
We first analyze $\epsilon := -W\frac{\partial \mathcal{L}}{\partial e}$. Since $\epsilon \in \mathbb{R}^{|I|\times|O|}$, we analyze each entry $\epsilon_{i,o}$. Since the differentiation operation and left multiplication by the matrix $W$ are linear, we analyze each individual loss term in Equation 34 and then sum them up.
We define
$$p^m := \sigma\Big(\frac{1}{L^m}\sum_{l=1}^{L^m} W e_{x_l^m}\Big) \qquad (38)$$
and
$$L^m_t := -\log(p^m_{y_t^m}); \qquad \epsilon^m_{t,i,o} := -W_o^T \frac{\partial L^m_t}{\partial e_i}. \qquad (39)$$
Hence,
$$\mathcal{L} = \sum_m \sum_{t=1}^{T^m} L^m_t; \qquad \epsilon_{i,o} = \sum_m \sum_{t=1}^{T^m} \epsilon^m_{t,i,o}. \qquad (40)$$
Therefore,
$$-\frac{\partial L^m_t}{\partial e_i} = \frac{1}{L^m}\sum_{l=1}^{L^m} \mathbb{1}[x_l^m = i]\Big(W_{y_t^m} - \sum_{o=1}^{|O|} p^m_o W_o\Big). \qquad (41)$$
Hence,
$$\epsilon^m_{t,i,y_t^m} = -W_{y_t^m}^T \frac{\partial L^m_t}{\partial e_i} = \frac{1}{L^m}\sum_{l=1}^{L^m} \mathbb{1}[x_l^m = i]\Big(\|W_{y_t^m}\|_2^2 - \sum_{o=1}^{|O|} p^m_o W_{y_t^m}^T W_o\Big), \qquad (42)$$
while for $o' \neq y_t^m$,
$$\epsilon^m_{t,i,o'} = -W_{o'}^T \frac{\partial L^m_t}{\partial e_i} = \frac{1}{L^m}\sum_{l=1}^{L^m} \mathbb{1}[x_l^m = i]\Big(W_{o'}^T W_{y_t^m} - \sum_{o=1}^{|O|} p^m_o W_{o'}^T W_o\Big). \qquad (43)$$
If Wo and ei are each sampled i.i.d. from N (0, Id/d), then by central limit theorem:
$$\forall o \neq o', \quad \sqrt{d}\, W_o^T W_{o'} \xrightarrow{p} \mathcal{N}(0, 1), \qquad (44)$$
$$\forall o, i, \quad \sqrt{d}\, W_o^T e_i \xrightarrow{p} \mathcal{N}(0, 1), \qquad (45)$$
and
$$\forall o, \quad \sqrt{d}\,(\|W_o\|_2^2 - 1) \xrightarrow{p} \mathcal{N}(0, 2). \qquad (46)$$
Therefore, when $\tau = 0$,
$$\lim_{d\to\infty} \epsilon^m_{t,i,o} \xrightarrow{p} \frac{1}{L^m}\sum_{l=1}^{L^m} \mathbb{1}[x_l^m = i]\Big(\mathbb{1}[y_t^m = o] - \frac{1}{|O|}\Big). \qquad (47)$$
Summing over all the $\epsilon^m_{t,i,o}$ terms, we have that
$$\epsilon_{i,o} = C_{i,o} - \frac{1}{|O|}\sum_{o'} C_{i,o'}, \qquad (48)$$
where C is defined as
$$C_{i,o} := \sum_m \sum_{l=1}^{L^m} \sum_{t=1}^{T^m} \frac{1}{L^m}\, \mathbb{1}[x_l^m = i]\, \mathbb{1}[y_t^m = o]. \qquad (49)$$
We find that $-\frac{\partial \mathcal{L}}{\partial W}\, e$ converges exactly to the same value. Hence
$$\frac{\partial \hat{\theta}_{i,o}}{\partial \tau} = \frac{\partial (We)_{i,o}}{\partial \tau} = 2\Big(C_{i,o} - \frac{1}{|O|}\sum_{o'} C_{i,o'}\Big). \qquad (50)$$
Since $\lim_{d\to\infty} \theta(\tau = 0) \xrightarrow{p} \frac{1}{|O|}\mathbb{1}^{|I|\times|O|}$, by the chain rule,
$$\lim_{d\to\infty} \frac{\partial \theta_{i,o}}{\partial \tau}(\tau = 0) \xrightarrow{p} 2\Big(C_{i,o} - \frac{1}{|O|}\sum_{o'\in O} C_{i,o'}\Big). \qquad (51)$$
A.3 MIXTURE OF PERMUTATIONS
For this experiment, each input is either a random permutation of the set $\{1, \ldots, 40\}$ or a random permutation of the set $\{41, \ldots, 80\}$. The proxy model can easily learn whether the input words are less than 40 and decide whether the output words should all be less than 40. However, $\beta^{px}$ is still the same for every position; as a result, the attention and hence the model fail to learn. The count table $C$ can be seen in Figure 6.
A.4 ADDITIONAL TABLES FOR COMPLETENESS
We report several variants of Table 1. We chose to use token accuracy to contextualize the agreement metric in the main paper, because errors would accumulate much more if we used a not-fully-trained model to auto-regressively generate output words.
• Table 2 contains the same results as Table 1, except that its agreement score A(u, v) is now Kendall Tau rank correlation coefficient, which is a more popular metric.
• Table 4 contains the same results as Table 1, except that results are now rounded to two decimal places.
• Table 6 consists of the same results as Table 1, except that the statistics is calculated over the training set rather than the validation set.
• Table 3, Table 5, and Table 7 contain the translation results from the above 3 mentioned tables respectively, except that p̂ is defined as BLEU score rather than token accuracy, and hence the contextualized metric interpretation ξ changes correspondingly.
A.5 DATASET DESCRIPTION
We summarize the datasets that we use for classification and machine translation. See Table 8 for details on train/test splits and median sequence lengths for each dataset.
IMDB Sentiment Analysis Maas et al. (2011) A sentiment analysis data set with 50,000 (25,000 train and 25,000 test) IMDB movie reviews and their corresponding positive or negative sentiment.
AG News Corpus Zhang et al. (2015) 120,000 news articles and their corresponding topic (world, sports, business, or science/tech). We classify between the world and business articles.
20 Newsgroups 4 A news data set containing around 18,000 newsgroup articles split between 20 different labeled categories. We classify between baseball and hockey articles.
Stanford Sentiment Treebank Socher et al. (2013) A data set for classifying the sentiment of movie reviews, labeled on a scale from 1 (negative) to 5 (positive). We remove all movies labeled as 3, and classify between 4 or 5 and 1 or 2.
Multi Domain Sentiment Data set 5 Approximately 40,000 Amazon reviews from various product categories labeled with a corresponding positive or negative label. Since some of the sequences are particularly long, we only use sequences of length less than 400 words.
Yelp Open Data Set 6 20,000 Yelp reviews and their corresponding star rating from 1 to 5. We classify between reviews with rating ≤ 2 and ≥ 4. Multi-30k Elliott et al. (2016) English to German translation. The data is from translated image captions.
IWSLT’14 Cettolo et al. (2015) German to English translation. The data is from translated TED talk transcriptions.
News Commentary v14 Cettolo et al. (2015) A collection of translation news commentary datasets in different languages from WMT19 7. We use the following translation splits: English-Dutch (En-Nl), English-Portuguese (En-Pt), and Italian-Portuguese (It-Pt). In pre-processing for this dataset, we removed all purely numerical examples.
A.6 α FAILS WHEN β IS FROZEN
For each classification task we initialize a random model and freeze all parameters except for the attention layer (frozen β model). We then compute the correlation between this trained attention (defined as αfr) and the normal attention α. Table 9 reports this correlation at the iteration where αfr is most correlated with α on the validation set. As shown in Table 9, the left column is consistently lower than the right column. This indicates that the model can learn output relevance without attention, but not vice versa.
A.7 TRAINING βuf
We find that A(α, βuf(τ)) first increases and then decreases as training proceeds (i.e. as τ increases), so we chose the maximum agreement over the course of training to report in Table 1. Since this trend is consistent across all datasets, our choice minimally inflates the agreement measure, and is comparable to the practice of reporting dev set results. As discussed in Section 6.1, training under uniform attention for too long might bring unintuitive results.
A.8 MODEL AND TRAINING DETAILS
Classification Our model uses dimension 300 GloVe-6B pre-trained embeddings to initialize the token embeddings where they aligned with our vocabulary. The sequences are encoded with a 1 layer bidirectional LSTM of dimension 256. The rest of the model, including the attention mechanism, is exactly as described in 2.4. Our model has 1,274,882 parameters excluding embeddings. Since each classification set has a different vocab size each model has a slightly different parameter count when considering embeddings: 19,376,282 for IMDB, 10,594,382 for AG News, 5,021,282 for 20
4 http://qwone.com/~jason/20Newsgroups/ 5 https://www.cs.jhu.edu/~mdredze/datasets/sentiment/ 6 https://www.yelp.com/dataset 7 http://www.statmt.org/wmt19/translation-task.html
Newsgroups, 4,581,482 for SST, 13,685,282 for Yelp, 12,407,882 for Amazon, and 2,682,182 for SMS.
Translation We use a bidirectional two-layer LSTM of dimension 256 to encode the source and use the last hidden state $h_L$ as the first hidden state of the decoder. The attention and outputs are then calculated as described in Section 2. The learnable neural network before the outputs that is mentioned in Section 2 is a one-hidden-layer model with ReLU non-linearity. The hidden layer has dimension 256. Our model contains 6,132,544 parameters excluding embeddings and 8,180,544 including embeddings on all datasets.
Permutation Copying We use single directional single layer LSTM with hidden dimension 256 for both the encoder and the decoder.
Classification Procedure For all classification datasets we used a batch size of 32. We trained for 4000 iterations on each dataset. For each dataset we train on the pre-defined training set if the dataset has one. Additionally, if a dataset had a predefined test set, we randomly sample at most 4000 examples from this test set for validation. Specific dataset split sizes are given in Table 8.
Classification Evaluation We evaluated each model at steps 0, 10, 50, 100, 150, 200, 250, and then every 250 iterations after that.
Classification Tokenization We tokenized the data at the word level. We mapped all words occurring less than 3 times in the training set to <unk>. For 20 Newsgroups and AG News we mapped all non-single digit integer ”words” to <unk>. For 20 Newsgroups we also split words with the ” ” character.
Classification Training We trained all classification models on a single GPU. Some datasets took slightly longer to train than others (largely depending on average sequence length), but each train took at most 45 minutes.
Translation Hyper Parameters For translation all hidden states in the model are dimension 256. We use the sequence to sequence architecture described above. The LSTMs used dropout 0.5.
Translation Procedure For all translation tasks we used batch size 16 when training. For IWSLT’14 and Multi-30k we used the provided dataset splits. For the News Commentary v14 datasets we did a 90-10 split of the data for training and validation respectively.
Translation Evaluation We evaluated each model at steps 0, 50, 100, 500, 1000, 1500, and then every 2000 iterations after that.
Translation Training We trained all translation models on a single GPU. IWSLT’14, and the News Commentary datasets took approximately 5-6 hours to train, and multi-30k took closer to 1 hour to train.
Translation Tokenization We tokenized both translation datasets using the Sentence-Piece tokenizer trained on the corresponding train set to a vocab size of 8,000. We used a single tokenization for source and target tokens. And accordingly also used the same matrix of embeddings for target and source sequences.
A.9 A NOTE ON SMS DATASET
In addition to the classification datasets reported in the tables, we also ran experiments on the SMS Spam Collection V.1 dataset 8. The attention learned from this dataset was very high variance, and so two different random seeds would consistently produce attentions that did not correlate much. The dataset itself was also a bit of an outlier; it had shorter sequence lengths than any of the other datasets (median sequence length 13 on train and validation set), it also had the smallest training set out of all our datasets (3500 examples), and it had by far the smallest vocab (4691 unique tokens). We decided not to include this dataset in the main paper due to these unusual results and leave further exploration to future works.
A.10 LOGISTIC REGRESSION PROXY MODEL
Our proxy model can be shown to be equivalent to a bag-of-words logistic regression model in the classification case. Specifically, we define a bag-of-words logistic regression model to be:
$$\forall t, \quad p_t = \sigma(\beta^{log} x), \qquad (52)$$
where $x \in \mathbb{R}^{|I|}$, $\beta^{log} \in \mathbb{R}^{|O|\times|I|}$, and $\sigma$ is the softmax function. The entries in $x$ are the number of times each word occurs in the input sequence, normalized by the sequence length, and $\beta^{log}$ is learned. This is equivalent to:
8 http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/
$$\forall t, \quad p_t = \sigma\Big(\frac{1}{L}\sum_{l=1}^{L} \beta^{log}_{x_l}\Big). \qquad (53)$$
Here $\beta^{log}_i$ indicates the $i$th column of $\beta^{log}$; these are the entries in $\beta^{log}$ corresponding to predictions for the $i$th word in the vocab. Now it is easy to arrive at the equivalence between logistic regression and our proxy model. If we restrict the rank of $\beta^{log}$ to be at most $\min(d, |O|, |I|)$ by factoring it as $\beta^{log} = WE$, where $W \in \mathbb{R}^{|O|\times d}$ and $E \in \mathbb{R}^{d\times|I|}$, then the logistic regression looks like:
$$\forall t, \quad p_t = \sigma\Big(\frac{1}{L}\sum_{l=1}^{L} W E_{x_l}\Big), \qquad (54)$$
which is equivalent to our proxy model:
$$\forall t, \quad p_t = \sigma\Big(\frac{1}{L}\sum_{l=1}^{L} W e_{x_l}\Big). \qquad (55)$$
Since $d = 256$ for the proxy model, which is larger than $|O| = 2$ in the classification case, the proxy model is not rank-limited and is hence fully equivalent to the logistic regression model. Therefore $\beta^{px}$ can be interpreted as "keywords" in the same way that the logistic regression weights can.
To empirically verify this equivalence, we trained a logistic regression model with ℓ2 regularization on each of our classification datasets. To pick the optimal regularization level, we did a sweep of regularization coefficients across ten orders of magnitude and picked the one with the best validation accuracy. We report results for A(βuf , βlog) in comparison to A(βuf , βpx) in Table 10 9. Note that these numbers are similar but not exactly equivalent. The reason is that the proxy model did not use ℓ2 regularization, while logistic regression did.
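A sketch of this bag-of-words logistic regression comparison using scikit-learn on length-normalized count features; the feature construction and hyperparameters here are illustrative, not the exact setup used for Table 10:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bow_features(sequences, vocab_size):
    X = np.zeros((len(sequences), vocab_size))
    for n, seq in enumerate(sequences):
        for w in seq:
            X[n, w] += 1.0 / len(seq)          # counts normalized by sequence length
    return X

def fit_bow_logreg(sequences, labels, vocab_size, C=1.0):
    X = bow_features(sequences, vocab_size)
    clf = LogisticRegression(C=C, max_iter=1000).fit(X, labels)
    return clf.coef_[0]                        # per-word weights, playing the role of beta^log
```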
9 These numbers were obtained from a retrain of all the models in the main table, so for instance, the LSTM model used to produce $\beta^{uf}$ might not be exactly the same as the one used for the results in all the other tables due to random seed differences.

1. What is the main contribution of the paper regarding attention weights learning in machine translation?
2. What are the strengths and weaknesses of the proposed approach, particularly in understanding KTIW and its relation to attention learning?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any questions or concerns regarding the paper's experiments and their contributions to the overall logic of the paper?
5. Can you explain the implications of the results in this paper, and how defining this proxy for machine translation is useful for interpretability and understanding machine translation learning dynamics? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper investigates how seq2seq attention weights are learned in a machine translation setting. The paper defines a proxy to measure a model’s ability to translate individual words, which they shorten to KTIW. The paper claims that this measurable quantity is a driver for the model to learn to attend well. The evidence presented in favour of this hypothesis is that KTIW can be learned (it is similar to learned attention weights) when attention weights are frozen and uniform, but that when KTIW is not learned, neither are the attention weights. This suggests a sort of necessary condition on KTIW for attention learning, which the paper claims is a causal driver for attention learning.
The paper suggests that it is therefore possible to reduce the problem of understanding attention learning to the problem of understanding KTIW (which does not seem to be entirely true, as KTIW may be necessary but not sufficient condition for learning of attention weights). The paper thus proposes a proxy model for approximating KTIW learning (that is, a proxy for the proxy for attention weight learning), and verifies this. The paper claims that attention weights are learned in two stages: first KTIW is learned through word co-occurence statistics, and second, a learned KTIW drives the learning of attention.
Strengths And Weaknesses
Strengths:
Aiming for a theoretical understanding of learning dynamics and attention is an admirable goal. The high level topic would be of interest to many in the community.
Weaknesses:
The major weakness of this work is that I found the text and presentation of results really difficult to parse. It is also difficult to establish what exactly each experiment contributes to the logic of the paper, why they were designed in that way, or the takeaways from each expreriment, which made it quite a difficult read.
What are the implications of the results in this paper? I think I am missing the major takeaway from this work. It is not clear to me how defining this proxy for machine translation is useful for interpretability and understanding machine translation learning dynamics.
The KTIW proxy seems to assume a 1:1 ratio of input to output words for translation, which surely cannot hold true in the general MT case? Perhaps I am missing something here.
If I understand correctly, the logic in section 5.3 seems to be circular. The paper defines the initialisation of an attention head as ‘good’ if training with that single head converges (on a task that requires only a single attention head). The paper then constructs multi-head attention scenarios with all bad initialisations, all random, or only-one-good head initialisation. They present the result that if all initialisations are bad, the model does not learn, but if at least one is ‘good’ then the model will converge. As the task is learnable with a single attention head, this logic seems circular and trivially true. It is not clear to me what it adds to the paper and seems off-topic.
Clarity, Quality, Novelty And Reproducibility
The text is quite hard to parse, and the introduction doesn’t do a great job at justifying the work. The paper would benefit from some work to improve the logical flow in the presentation of ideas and arguments.
The title suggests that the paper is broadly about understanding attention mechanisms but really the paper focuses on machine translation and seq2seq models. It would be helpful for the reader if this information was made explicit in the title and abstract.
ICLR | Title
Approximating How Single Head Attention Learns
Abstract
Why do models often attend to salient words, and how does this evolve throughout training? We approximate model training as a two stage process: early on in training when the attention weights are uniform, the model learns to translate individual input word i to o if they co-occur frequently. Later, the model learns to attend to i while the correct output is o because it knows i translates to o. To formalize, we define a model property, Knowledge to Translate Individual Words (KTIW) (e.g. knowing that i translates to o), and claim that it drives the learning of the attention. This claim is supported by the fact that before the attention mechanism is learned, KTIW can be learned from word co-occurrence statistics, but not the other way around. Particularly, we can construct a training distribution that makes KTIW hard to learn, the learning of the attention fails, and the model cannot even learn the simple task of copying the input words to the output. Our approximation explains why models sometimes attend to salient words, and inspires a toy example where a multi-head attention model can overcome the above hard training distribution by improving learning dynamics rather than expressiveness. We end by discussing the limitation of our approximation framework and suggest future directions.
1 INTRODUCTION
The attention mechanism underlies many recent advances in natural language processing, such as machine translation Bahdanau et al. (2015) and pretraining Devlin et al. (2019). While many works focus on analyzing attention in already-trained models Jain & Wallace (2019); Vashishth et al. (2019); Brunner et al. (2019); Elhage et al. (2021); Olsson et al. (2022), little is understood about how the attention mechanism is learned via gradient descent at training time.
These learning dynamics are important, as standard, gradient-trained models can have very unique inductive biases, distinguishing them from more esoteric but equally accurate models. For example, in text classification, while standard models typically attend to salient (high gradient influence) words Serrano & Smith (2019), recent work constructs accurate models that attend to irrelevant words instead Wiegreffe & Pinter (2019); Pruthi et al. (2020). In machine translation, while the standard gradient descent cannot train a high-accuracy transformer with relatively few attention heads, we can construct one by first training with more heads and then pruning the redundant heads Voita et al. (2019); Michel et al. (2019). To explain these differences, we need to understand how attention is learned at training time.
Our work opens the black box of attention training, focusing on attention in LSTM Seq2Seq models Luong et al. (2015) (Section 2.1). Intuitively, if the model knows that the input individual word i translates to the correct output word o, it should attend to i to minimize the loss. This motivates us to investigate the model’s knowledge to translate individual words (abbreviated as KTIW), and we define a lexical probe β to measure this property.
We claim that KTIW drives the attention mechanism to be learned. This is supported by the fact that KTIW can be learned when the attention mechanism has not been learned (Section 3.2), but not the other way around (Section 3.3). Specifically, even when the attention weights are frozen to be uniform, probe β still strongly agrees with the attention weights of a standardly trained model. On the other hand, when KTIW cannot be learned, the attention mechanism cannot be learned. Particularly, we can construct a distribution where KTIW is hard to learn; as a result, the model fails to learn a simple task of copying the input to the output.
Now the problem of understanding how attention mechanism is learned reduces to understanding how KTIW is learned. Section 2.3 builds a simpler proxy model that approximates how KTIW is learned, and Section 3.2 verifies empirically that the approximation is reasonable. This proxy model is simple enough to analyze and we interpret its training dynamics with the classical IBM Translation Model 1 (Section 4.2), which translates individual word i to o if they co-occur more frequently.
To collapse this chain of reasoning, we approximate model training in two stages. Early on in training when the attention mechanism has not been learned, the model learns KTIW through word co-occurrence statistics; KTIW later drives the learning of the attention.
Using these insights, we explain why attention weights sometimes correlate with word saliency in binary text classification (Section 5.1): the model first learns to “translate” salient words into labels, and then attend to them. We also present a toy experiment (Section 5.2) where multi-head attention improves learning dynamics by combining differently initialized attention heads, even though a single head model can express the target function.
Nevertheless, “all models are wrong”. Even though our framework successfully explains and predicts the above empirical phenomena, it cannot fully explain the behavior of attention-based models, since approximations are after all less accurate. Section 6 identifies and discusses two key assumptions: (1) information of a word tends to stay in the local hidden state (Section 6.1) and (2) attention weights are free variables (Section 6.2). We discuss future directions in Section 7.
2 MODEL
Section 2.1 defines the LSTM with attention Seq2Seq architecture. Section 2.2 defines the lexical probe β, which measures the model’s knowledge to translate individual words (KTIW). Section 2.3 approximates how KTIW is learned early on in training by building a “bag of words” proxy model. Section 2.4 shows that our framework generalizes to binary classification.
2.1 MACHINE TRANSLATION MODEL
We use the dot-attention variant from Luong et al. (2015). The model maps from an input sequence {xl} with length L to an output sequence {yt} with length T . We first use LSTM encoders to embed {xl} ⊂ I and {yt} ⊂ O respectively, where I and O are input and output vocab space, and obtain encoder and decoder hidden states {hl} and {st}. Then we calculate the attention logits at,l by applying a learnable mapping from hl and st, and use softmax to obtain the attention weights αt,l:
at,l = s T t Whl; αt,l = eat,l∑L l′=1 e at,l′ . (1)
Next we sum the encoder hidden states {ht} weighted by the attention to obtain the “context vector” ct, concatenate it with the decoder st, and obtain the output vocab probabilities pt by applying a learnable neural network N with one hidden layer and softmax activation at the output, and train the model by minimizing the sum of negative log-likelihood of all the output words yt.
ct = L∑ l=1 αt,lhl; pt = N([ct, st]); L = − T∑ t=1 log pt,yt . (2)
2.2 LEXICAL PROBE β
We define the lexical probe βt,l as: βt,l := N([hl, st])yt , (3)
which means “the probability assigned to the correct word yt, if the network attends only to the input encoder state hl”. If we assume that hl only contains information about xl, β closely reflects KTIW, since β can be interpreted as “the probability that xl is translated to the output yt”.
Heuristically, to minimize the loss, the attention weights α should be attracted to positions with larger βt,l. Hence, we expect the learning of the attention to be driven by KTIW (Figure 1 left). We then discuss how KTIW is learned.
2.3 EARLY DYNAMICS OF LEXICAL KNOWLEDGE
To approximate how KTIW is learned early on in training, we build a proxy model by making a few simplifying assumptions. First, since attention weights are uniform early on in training, we replace the attention distribution with a uniform one. Second, since we are defining individual word translation, we assume that information about each word is localized to its corresponding hidden state. Therefore, similar to Sun & Lu (2020), we replace hl with an input word embedding exl ∈ Rd, where e represents the word embedding matrix and d is the embedding dimension. Third, to simplify analysis, we assume N only contains one linear layer W ∈ R|O|×d before softmax activation and ignore the decoder state st. Putting these assumptions together, we now define a new proxy model that produces output vocab probability pt := σ( 1L ∑L l=1 Wexl).
On a high level, this proxy averages the embeddings of the input “bag of words”, and produces a distribution over output vocabs to predict the output “bag of words”. This implies that the sets of input and output words for each sentence pair are sufficient statistics for this proxy. The probe βpx can be similarly defined as βpxt,l := σ(Wexl)yt .
We provide more intuitions on how this proxy learns in Section 4.
2.4 BINARY CLASSIFICATION MODEL
Binary classification can be reduced to “machine translation”, where T = 1 and |O| = 2. We drop the subscript t = 1 when discussing classification.
We use the standard architecture from Wiegreffe & Pinter (2019). After obtaining the encoder hidden states {ht}, we calculate the attention logits al by applying a feed-forward neural network with one hidden layer and take the softmax of a to obtain the attention weights α:
al = v T (ReLU(Qhl)); αl = eal∑L l′=1 e al′ , (4)
where Q and v are learnable.
We sum the hidden states {hl} weighted by the attention, feed it to a final linear layer and apply the sigmoid activation function (σ) to obtain the probability for the positive class
ppos = σ(WT L∑
l=1
alhl) = σ( L∑ l=1 αlW Thl). (5)
Similar to the machine translation model (Section 2.1), we define the “lexical probe”:
βl := σ((2y − 1)WThl), (6)
where y ∈ {0, 1} is the label and 2y − 1 ∈ {−1, 1} controls the sign. On a high level, Sun & Lu (2020) focuses on binary classification and provides almost the exact same arguments as ours. Specifically, their polarity score “sl” equals βl1−βl in our context, and they provide a more subtle analysis of how the attention mechanism is learned in binary classification.
3 EMPIRICAL EVIDENCE
We provide evidence that KTIW drives the learning of the attention early on in training: KTIW can be learned when the attention mechanism has not been learned (Section 3.2), but not the other way around (Section 3.3).
3.1 MEASURING AGREEMENT
We start by describing how to evaluate the agreement between quantities of interest, such as α and β. For any input-output sentence pair (xm, ym), for each output index t, αmt , β m t , β px,m t ∈ RL m all associate each input position l with a real number. Since attention weights and word alignment tend to be sparse, we focus on the agreement of the highest-valued position. Suppose u, v ∈ RL, we formally define the agreement of v with u as:
A(u, v) := 1[|{j|vj > vargmaxui}| < 5%L], (7)
which means “whether the highest-valued position (dimension) in u is in the top 5% highest-valued positions in v”. We average the A values across all output words on the validation set to measure the agreement between two model properties. We also report Kendall’s τ rank correlation coefficient in Appendix 2 for completeness.
We denote its random baseline as Â. Â is close to but not exactly 5% because of integer rounding.
Contextualized Agreement Metric. However, since different datasets have different sentence length distributions and variance of attention weights caused by random seeds, it might be hard to directly interpret this agreement metric. Therefore, we contextualize this metric with model performance. We use the standard method to train a model till convergence using T steps and denote its attention weights as α; next we train the same model from scratch again using another random seed. We denote its attention weights at training step τ as α̂(τ) and its performance as p̂(τ). Roughly speaking, when τ < T , both A(α, α̂(τ)) and p̂(τ) increase as τ increases. We define the contextualized agreement ξ as:
ξ(u, v) := p̂(inf{τ |A(α, α̂(τ)) > A(u, v)}). (8)
In other words, we find the training step τ0 where its attention weights α̂(τ0) and the standard attention weights α agrees more than u and v agrees, and report the performance at this iteration. We refer to the model performance when training finishes (τ = T ) as ξ∗.
Datasets. We evaluate the agreement metrics A and ξ on multiple machine translation and text classification datasets. For machine translation, we use Multi-30k (En-De), IWSLT’14 (De-En), and News Commentary v14 (En-Nl, En-Pt, and It-Pt). For text classification, we use IMDB Sentiment Analysis, AG News Corpus, 20 Newsgroups (20 NG), Stanford Sentiment Treebank, Amazon review,
and Yelp Open Data Set. All of them are in English. The details and citations of these datasets can be seen in the Appendix A.5. We use token accuracy1 to evaluate the performance of translation models and accuracy to evaluate the classification models.
Due to space limit we round to integers and include a subset of datasets in Table 1 for the main paper. Appendix Table 4 includes the full results.
3.2 KTIW LEARNS UNDER UNIFORM ATTENTION
Even when the attention mechanism has not been learned, KTIW can still be learned. We train the same model architecture with the attention weights frozen to be uniform, and denote its lexical probe as βuf . Across all tasks, A(α, βuf) and A(βuf , βpx) 2 significantly outperform the random baseline Â, and the contextualized agreement ξ(α, βuf) is also non-trivial. This indicates that 1) the proxy we built in Section 2.3 approximates KTIW and 2) even when the attention weights are uniform, KTIW is still learned.
3.3 ATTENTION FAILS WHEN KTIW FAILS
We consider a simple task of copying from the input to the output, and each input is a permutation of the same set of 40 vocab types. Under this training distribution, the proxy model provably cannot learn: every input-output pair contains the exact same set of input-output words [3]. As a result, our framework predicts that KTIW is unlikely to be learned, and hence the learning of attention is likely to fail.
The training curves of learning to copy the permutations are in Figure 2 left, colored in red: the model sometimes fails to learn. For the control experiment, if we randomly sample and permute 40 vocabs from 60 vocab types as training samples, the model successfully learns (blue curve) from this distribution every time. Therefore, even if the model is able to express this task, it might fail to learn it when KTIW is not learned. The same qualitative conclusion holds for the training distribution that mixes permutations of two disjoint sets of words (Figure 2 middle), and Appendix A.3 illustrates the intuition.
For binary classification, it follows from the model definition that attention mechanism cannot be learned if KTIW cannot be learned, since
p_correct = σ( Σ_{l=1}^{L} α_l σ^{-1}(β_l) );   σ(x) = 1 / (1 + e^{-x}),   (9)
[1] Appendix Tables 5, 3, and 7 include results for BLEU.
[2] Empirically, β^{px} converges to the unigram weights of a bag-of-words logistic regression model, and hence β^{px} does capture an interpretable notion of "keywords" (Appendix A.10).
[3] We provide more intuitions on this in Section 4.
and the model needs to attend to positions with higher β, in order to predict correctly and minimize the loss. For completeness, we include results where we freeze β and find that the learning of the attention fails in Appendix A.6.
4 CONNECTION TO IBM MODEL 1
Section 2.3 built a simple proxy model to approximate how KTIW is learned when the attention weights are uniform early on in training, and Section 3.2 verified that such an approximation is empirically sound. However, it is still hard to intuitively reason about how this proxy model learns. This section provides more intuitions by connecting its initial gradient (Section 4.1) to the classical IBM Model 1 alignment algorithm Brown et al. (1993) (Section 4.2).
4.1 DERIVATIVE AT INITIALIZATION
We continue from the end of Section 2.3. For each input word i and output word o, we are interested in understanding the probability that i assigns to o, defined as:
θ^{px}_{i,o} := σ(W e_i)_o.   (10)
This quantity is directly tied to β^{px}, since β^{px}_{t,l} = θ^{px}_{x_l, y_t}. Using super-script m to index sentence pairs in the dataset, the total loss L is:
L = − Σ_m Σ_{t=1}^{T^m} log( σ( (1/L^m) Σ_{l=1}^{L^m} W e_{x^m_l} )_{y^m_t} ).   (11)
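For concreteness, here is a minimal PyTorch sketch of the bag-of-words proxy model behind Eqs. 10-11 (averaged input embeddings followed by an output projection, with the same logits shared across all output positions); the class and function names and the dimension d=256 are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyModel(nn.Module):
    """Bag-of-words proxy for KTIW: p_t = softmax((1/L) * sum_l W e_{x_l})."""
    def __init__(self, in_vocab, out_vocab, d=256):
        super().__init__()
        self.e = nn.Embedding(in_vocab, d)            # input word embeddings e
        self.W = nn.Linear(d, out_vocab, bias=False)  # output projection W

    def forward(self, x):                             # x: (L,) tensor of input word ids
        return self.W(self.e(x).mean(dim=0))          # logits shared by every output index t

def sentence_loss(model, x, y):
    """Sum of cross-entropy terms over output words y (Eq. 11) for one sentence pair."""
    logits = model(x).unsqueeze(0).expand(len(y), -1)
    return F.cross_entropy(logits, y, reduction="sum")
```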
Suppose each e_i or W_o is independently initialized from a normal distribution N(0, I_d/d) and we minimize L over W and e using gradient flow; then the values of e and W are uniquely defined for each continuous time step τ. By some straightforward but tedious calculations (details in Appendix A.2), the derivative of θ_{i,o} when training starts is:
lim_{d→∞} ∂θ^{px}_{i,o}/∂τ (τ = 0) →p 2( C^{px}_{i,o} − (1/|O|) Σ_{o'∈O} C^{px}_{i,o'} ),   (12)
where →p means convergence in probability and C^{px}_{i,o} is defined as
C^{px}_{i,o} := Σ_m Σ_{l=1}^{L^m} Σ_{t=1}^{T^m} (1/L^m) 1[x^m_l = i] 1[y^m_t = o].   (13)
Equation 12 tells us that β^{px}_{t,l} = θ^{px}_{x_l, y_t} is likely to be larger if C_{x_l, y_t} is large. The definition of C seems hard to interpret from Equation 13, but in the next subsection we will find that this quantity naturally corresponds to the "count table" used in the classical IBM Model 1 alignment learning algorithm.
4.2 IBM MODEL 1 ALIGNMENT LEARNING
The classical alignment algorithm aims to learn which input word is responsible for each output word (e.g. knowing that y_2 "movie" aligns to x_2 "Film" in Figure 1 upper left), from a set of input-output sentence pairs. IBM Model 1 Brown et al. (1993) starts with a 2-dimensional count table C^{IBM} indexed by i ∈ I and o ∈ O, denoting input and output vocabs. Whenever vocab i and o co-occur in an input-output pair, we add 1/L to the C^{IBM}_{i,o} entry (step 1 and 2 in Figure 1 right). After updating C^{IBM} for the entire dataset, C^{IBM} is exactly the same as C^{px} defined in Equation 13. We drop the super-script of C to keep the notation uncluttered.
Given C, the classical model estimates a probability distribution of "what output word o does the input word i translate to" (Figure 1 right step 3) as
Trans(o|i) = C_{i,o} / Σ_{o'} C_{i,o'}.   (14)
In a pair of sequences ({x_l}, {y_t}), the probability β^{IBM} that x_l is translated to the output y_t is:
β^{IBM}_{t,l} := Trans(y_t | x_l),   (15)
and the alignment probability α^{IBM} that "x_l is responsible for outputting y_t versus other x_{l'}" is
α^{IBM}(t, l) = β^{IBM}_{t,l} / Σ_{l'=1}^{L} β^{IBM}_{t,l'},   (16)
which monotonically increases with respect to β^{IBM}_{t,l}. See Figure 1 right step 5.
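The count-table construction and the translation/alignment probabilities of Eqs. 13-16 can be written in a few lines; the following is a minimal Python sketch, with function names chosen for illustration.

```python
from collections import defaultdict

def ibm1_tables(pairs):
    """Build the count table C (Eq. 13) and Trans(o|i) (Eq. 14) from (input, output) token lists."""
    C, totals = defaultdict(float), defaultdict(float)
    for x, y in pairs:
        for i in x:
            for o in y:
                C[(i, o)] += 1.0 / len(x)      # add 1/L for every co-occurrence
    for (i, o), c in C.items():
        totals[i] += c
    trans = {(i, o): c / totals[i] for (i, o), c in C.items()}
    return C, trans

def alignment(trans, x, y_t):
    """alpha^IBM(t, l) over input positions l for a single output word y_t (Eqs. 15-16)."""
    beta = [trans.get((x_l, y_t), 0.0) for x_l in x]
    total = sum(beta)
    return [b / total if total > 0 else 1.0 / len(x) for b in beta]
```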
4.3 VISUALIZING AFOREMENTIONED TASKS
Figure 1 (right) visualizes the count table C for the machine translation task, and illustrates how KTIW is learned and drives the learning of attention. We provide similar visualization for why KTIW is hard to learn under a distribution of vocab permutations (Section 3.3) in Figure 3, and how word polarity is learned in binary classification (Section 2.4) in Figure 4.
5 APPLICATION
5.1 INTERPRETABILITY IN CLASSIFICATION
We use a gradient-based method (Ebrahimi et al., 2018) to approximate the influence Δ_l of each input word x_l. The column A(Δ, β^{uf}) reports the agreement between Δ and β^{uf}, and it significantly outperforms the random baseline. Since KTIW initially drives the attention mechanism to be learned, this explains why attention weights are correlated with word saliency on many classification tasks, even though the training objective does not explicitly reward this.
5.2 MULTI-HEAD IMPROVES TRAINING DYNAMICS
We saw in Section 3.3 that learning to copy sequences under a distribution of permutations is hard and the model can fail to learn; however, sometimes it is still able to learn. Can we improve learning and overcome this hard distribution by ensembling several attention parameters together?
We introduce a multi-head attention architecture by summing the context vector ct obtained by each head. Suppose there are K heads each indexed by k, similar to Section 2.1:
a^{(k)}_{t,l} = s_t^T W^{(k)} h_l;   α^{(k)}_{t,l} = e^{a^{(k)}_{t,l}} / Σ_{l'=1}^{L} e^{a^{(k)}_{t,l'}},   (17)
and the context vector and final probability p_t defined as:
c^{(k)}_t = Σ_{l=1}^{L} α^{(k)}_{t,l} h_l;   p_t = N([ Σ_{k=1}^{K} c^{(k)}_t , d_t ]),   (18)
where W (k) are different learn-able parameters.
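A minimal PyTorch sketch of this multi-head variant (Eqs. 17-18), in which the K per-head context vectors are summed before the output network; the class name and interface are illustrative, and the output network N is omitted.

```python
import torch
import torch.nn as nn

class SummedMultiHeadAttention(nn.Module):
    """K bilinear attention heads whose context vectors are summed (Eqs. 17-18)."""
    def __init__(self, d, K=8):
        super().__init__()
        self.W = nn.ModuleList([nn.Linear(d, d, bias=False) for _ in range(K)])

    def forward(self, s_t, h):               # s_t: (d,) decoder state, h: (L, d) encoder states
        c = torch.zeros_like(s_t)
        for W_k in self.W:
            a_k = W_k(h) @ s_t               # a^(k)_{t,l} = s_t^T W^(k) h_l
            alpha_k = torch.softmax(a_k, dim=0)
            c = c + alpha_k @ h              # per-head context, summed over heads
        return c                             # to be combined with d_t and fed into N
```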
We call W^{(k)}_{init} a good initialization if training with this single head converges, and bad otherwise. We use rejection sampling to find good/bad head initializations and combine them to form 8-head (K = 8) attention models. We experiment with 3 scenarios: (1) all head initializations are bad, (2) only one initialization is good, and (3) initializations are sampled independently at random.
Figure 2 right presents the training curves. If all head initializations are bad, the model fails to converge (red). However, as long as one of the eight initializations is good, the model can converge (blue). As the number of heads increases, the probability that all initializations are bad is exponentially small if all initializations are sampled independently; hence the model converges with very high probability (green). In this experiment, multi-head attention improves not by increasing expressiveness, since one head is sufficient to accomplish the task, but by improving the learning dynamics.
6 ASSUMPTIONS
We revisit the approximation assumptions used in our framework. Section 6.1 discusses whether the lexical probe β_{t,l} necessarily reflects local information about input word x_l, and Section 6.2 discusses whether attention weights can be freely optimized to attend to large β. These assumptions are accurate enough to predict the phenomena in Sections 3 and 5, but they are not always true and hence warrant further research. We provide simple examples where these assumptions might fail.
6.1 β REMAINS LOCAL
We use a toy classification task to show that early on in training, as expected, β^{uf} is larger near positions that contain the keyword. However, unintuitively, β^{uf}_L (β at the last position in the sequence) becomes the largest if we train the model for too long under uniform attention weights.
In this toy task, each input is a length-40 sequence of words sampled from {1, . . . , 40} uniformly at random; a sequence is positive if and only if the keyword “1” appears in the sequence. We restrict “1”
to appear only once in each positive sequence, and use rejection sampling to balance positive and negative examples. Let l∗ be the position where xl∗ = 1.
For the positive sequences, we examine the log-odds ratio γ_l before the sigmoid activation in Equation 5, since β will be all close to 1 and comparing γ is more informative: γ_l := log( β^{uf}_l / (1 − β^{uf}_l) ).
We measure four quantities: 1) γ_{l*}, the log-odds ratio if the model only attends to the keyword position, 2) γ_{l*+1}, one position after the keyword position, 3) γ̄ := (1/L) Σ_{l=1}^{L} γ_l, if attention weights are uniform, and 4) γ_L if the model attends to the last hidden state. If γ_l only contains information about word x_l, we should expect:
Hypothesis 1 : γl∗ ≫ γ̄ ≫ γL ≈ γl∗+1. (19)
However, if we accept the conventional wisdom that hidden states contain information about nearby words Khandelwal et al. (2018), we should expect:
Hypothesis 2 : γl∗ ≫ γl∗+1 ≫ γ̄ ≈ γL. (20)
To verify these hypotheses, we plot how γl∗ , γl∗+1, γ̄, and γL evolve as training proceeds in Figure 5. Hypothesis 2 is indeed true when training starts; however, we find the following to be true asymptotically:
Observation 3 : γL ≫ γl∗+1 ≫ γ̄ ≈ γl∗ . (21)
which is wildly different from Hypothesis 2. If we train under uniform attention weights for too long, the information about keywords can freely flow to other non-local hidden states.
6.2 ATTENTION WEIGHTS ARE FREE VARIABLES
In Section 2.1 we assumed that attention weights α behave like free variables that can assign arbitrarily high probabilities to positions with larger β. However, α is produced by a model, and sometimes learning the correct α can be challenging.
Let π be a random permutation of integers from 1 to 40, and we want to learn the function f that permutes the input with π:
f([x_1, x_2, . . . , x_40]) := [x_{π(1)}, x_{π(2)}, . . . , x_{π(40)}].   (22)
Inputs x are randomly sampled from a vocab of size 60 as in Section 3.3. Even though β^{uf} behaves exactly the same for these two tasks, sequence copying is much easier to learn than the permutation function: while the model always reaches perfect accuracy in the former setting within 300 iterations, it always fails in the latter.
This suggests that the LSTM has a built-in inductive bias to learn monotonic attention.
7 CONCLUSIONS
Our work tries to understand the black box of attention training. Early on in training, the LSTM attention models first learn how to translate individual words from bag-of-words co-occurrence statistics, which then drives the learning of the attention. Our framework explains why attention weights obtained by standard training often correlate with saliency, and how multi-head attention can increase performance by improving the training dynamics rather than expressiveness. These phenomena cannot be explained if we treated the training process as a black box.
8 ETHICAL CONSIDERATIONS
We present a new framework for understanding and predicting behaviors of an existing technology: the attention mechanism in recurrent neural networks. We do not propose any new technologies or any new datasets that could directly raise ethical questions. However, it is useful to keep in mind that our framework is far from solving the question of neural network interpretability, and should not be interpreted as ground truth in high-stakes domains like medicine or recidivism. We are explicit about the limitations of our framework, which we made clear in Section 6.
9 REPRODUCIBILITY STATEMENT
To promote reproducibility, we provide extensive results in the appendix and describe all experiments in detail. We also attach source code for reproducing all experiments to the supplemental of this submission.
A APPENDICES
A.1 HEURISTIC THAT α ATTENDS TO LARGER β
It is a heuristic rather than a rigorous theorem that attention α is attracted to larger β. There are two reasons. First, there is a non-linear layer after averaging the hidden states, which can interact in an arbitrarily complex way to break this heuristic. Second, even if there are no non-linear operations after hidden state aggregation, the optimal attention that minimizes the loss does not necessarily assign any probability to the position with the largest β value when there are more than two output vocabs.
Specifically, we consider the following model:
p_t = σ( W_c Σ_{l=1}^{L} α_{t,l} h_l + W_s s_t ) = σ( Σ_{l=1}^{L} α_{t,l} γ_l + γ_s ),   (23)
where W_c and W_s are learnable weights, and γ defined as:
γ_l := W_c h_l;   γ_s := W_s s_t   ⇒   β_{t,l} = σ(γ_l + γ_s)_{y_t}.   (24)
Consider the following scenario that outputs a probability distribution p over 3 output vocabs and γs is set to 0:
p = σ(α_1 γ_1 + α_2 γ_2 + α_3 γ_3),   (25)
where γ_{l=1,2,3} ∈ R^{|O|=3} are the logits, α is a valid attention probability distribution, σ is the softmax, and p is the probability distribution produced by this model. Suppose
γ_1 = [0, 0, 0],   γ_2 = [0, −10, 5],   γ_3 = [0, 5, −10]   (26)
and the correct output is the first output vocab (i.e. the first dimension). Therefore, we take the softmax of γl and consider the first dimension:
β_{l=1} = 1/3 > β_{l=2} = β_{l=3} ≈ e^{−5}.   (27)
We calculate the "optimal α", α^{opt}: the optimal attention weights that can maximize the correct output word probability p_0 and minimize the loss. We find that α^{opt}_2 = α^{opt}_3 = 0.5, while α^{opt}_1 = 0. In this example, the optimal attention assigns 0 weight to the position l with the highest β_l.
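This pathological example is easy to check numerically; below is a small sketch (NumPy, illustrative names) that evaluates the correct-class probability for the attention that follows the largest β versus the optimal attention.

```python
import numpy as np

# Logits gamma_1, gamma_2, gamma_3 from Eq. 26; the correct class is dimension 0, gamma_s = 0.
gammas = np.array([[0., 0., 0.],
                   [0., -10., 5.],
                   [0., 5., -10.]])

def p_correct(alpha):
    logits = alpha @ gammas                    # sum_l alpha_l * gamma_l
    return np.exp(logits[0]) / np.exp(logits).sum()

print(p_correct(np.array([1.0, 0.0, 0.0])))    # attend to the largest-beta position: ~0.33
print(p_correct(np.array([0.0, 0.5, 0.5])))    # alpha_opt = [0, 0.5, 0.5]: ~0.86
```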
Fortunately, such pathological examples rarely occur in real datasets, and the optimal α is usually attracted to positions with higher β. We empirically verify this for the variant of the machine translation model described below on Multi-30k.
As before, we obtain the context vector c_t. Instead of concatenating c_t and d_t and passing it into a non-linear neural network N, we add them and apply a linear layer with softmax after it to obtain the output word probability distribution
pt = σ(W (ct + dt)). (28)
This model is desirable because we can now provably find the optimal α using gradient descent (we delay the proof to the end of this subsection). Additionally, this model has comparable performance with the variant from our main paper (Section 2.1), achieving a 38.2 BLEU score vs. 37.9 for the model in our main paper. We use α^{opt} to denote the attention that can minimize the loss, and we find that A(α^{opt}, β) = 0.53: β does strongly agree with α^{opt}. Now we are left to show that we can use gradient descent to find the optimal attention weights to minimize the loss. We can rewrite p_t as
p_t = σ( Σ_{l=1}^{L} α_l W h_l + W d_t ).   (29)
We define γ_l := W h_l;   γ_s := W d_t.   (30)
Without loss of generality, suppose the first dimension of γ1...L, γs are all 0, and the correct token we want to maximize probability for is the first dimension, then the loss for the output word is
L = log(1 + g(α)),   (31)
where
g(α) := Σ_{o∈O, o≠0} e^{α^T γ'_o + γ_{s,o}},   (32)
where γ'_o = [γ_{1,o}, . . . , γ_{l,o}, . . . , γ_{L,o}] ∈ R^L.   (33)
Since α is defined within the convex probability simplex and g(α) is convex with respect to α, the global optima αopt can be found by gradient descent.
A.2 CALCULATING ∂θ_{i,o}/∂τ
We drop the px super-script of θ to keep the notation uncluttered. We copy the loss function here to remind the readers:
L = − Σ_m Σ_{t=1}^{T^m} log( σ( (1/L^m) Σ_{l=1}^{L^m} W e_{x^m_l} )_{y^m_t} ).   (34)
and since we optimize W and e with gradient flow,
∂W/∂τ := − ∂L/∂W;   ∂e/∂τ := − ∂L/∂e.   (35)
We first define the un-normalized logits θ̂ and then take the softmax.
θ̂ = We, (36)
then
∂θ̂/∂τ = ∂(We)/∂τ = W ∂e/∂τ + (∂W/∂τ) e = − W ∂L/∂e − (∂L/∂W) e.   (37)
We first analyze the first term ϵ := W ∂e/∂τ = − W ∂L/∂e. Since ϵ ∈ R^{|I|×|O|}, we analyze each entry ϵ_{i,o}. Since the differentiation operation and left multiplication by the matrix W are linear, we analyze each individual loss term in Equation 34 and then sum them up.
We define
p^m := σ( (1/L^m) Σ_{l=1}^{L^m} W e_{x^m_l} )   (38)
and
L^m_t := − log(p^m_{y^m_t});   ϵ^m_{t,i,o} := − W_o^T ∂L^m_t/∂e_i.   (39)
Hence,
L = Σ_m Σ_{t=1}^{T^m} L^m_t;   ϵ_{i,o} = Σ_m Σ_{t=1}^{T^m} ϵ^m_{t,i,o}.   (40)
Therefore,
− ∂L^m_t/∂e_i = (1/L^m) Σ_{l=1}^{L^m} 1[x^m_l = i] ( W_{y^m_t} − Σ_{o=1}^{|O|} p^m_o W_o ).   (41)
Hence,
ϵ^m_{t,i,y^m_t} = − W_{y^m_t}^T ∂L^m_t/∂e_i = (1/L^m) Σ_{l=1}^{L^m} 1[x^m_l = i] ( ||W_{y^m_t}||_2^2 − Σ_{o=1}^{|O|} p^m_o W_{y^m_t}^T W_o ),   (42)
while for o' ≠ y^m_t,
ϵ^m_{t,i,o'} = − W_{o'}^T ∂L^m_t/∂e_i = (1/L^m) Σ_{l=1}^{L^m} 1[x^m_l = i] ( W_{o'}^T W_{y^m_t} − Σ_{o=1}^{|O|} p^m_o W_{o'}^T W_o ).   (43)
If W_o and e_i are each sampled i.i.d. from N(0, I_d/d), then by the central limit theorem:
∀ o ≠ o',   √d W_o^T W_{o'} →p N(0, 1),   (44)
∀ o, i,   √d W_o^T e_i →p N(0, 1),   (45)
and
∀ o,   √d ( ||W_o||_2^2 − 1 ) →p N(0, 2).   (46)
Therefore, when τ = 0,
lim_{d→∞} ϵ^m_{t,i,o} →p (1/L^m) Σ_{l=1}^{L^m} 1[x^m_l = i] ( 1[y^m_t = o] − 1/|O| ).   (47)
Summing over all the ϵ^m_{t,i,o} terms, we have that
ϵ_{i,o} = C_{i,o} − (1/|O|) Σ_{o'} C_{i,o'},   (48)
where C is defined as
C_{i,o} := Σ_m Σ_{l=1}^{L^m} Σ_{t=1}^{T^m} (1/L^m) 1[x^m_l = i] 1[y^m_t = o].   (49)
We find that the second term of Equation 37, (∂W/∂τ) e = − (∂L/∂W) e, converges to exactly the same value. Hence
∂θ̂_{i,o}/∂τ = ∂(We)_{i,o}/∂τ = 2( C_{i,o} − (1/|O|) Σ_{o'} C_{i,o'} ).   (50)
Since lim_{d→∞} θ(τ = 0) →p (1/|O|) 1^{|I|×|O|}, by the chain rule,
lim_{d→∞} ∂θ_{i,o}/∂τ (τ = 0) →p 2( C_{i,o} − (1/|O|) Σ_{o'∈O} C_{i,o'} ).   (51)
A.3 MIXTURE OF PERMUTATIONS
For this experiment, each input is either a random permutation of the set {1 . . . 40}, or a random permutation of the set {41 . . . 80}. The proxy model can easily learn whether the input words are less than 40 and decide whether the output words are all less than 40. However, β^{px} is still the same for every position; as a result, the attention and hence the model fail to learn. The count table C can be seen in Figure 6.
A.4 ADDITIONAL TABLES FOR COMPLETENESS
We report several variants of Table 1. We chose to use token accuracy to contextualize the agreement metric in the main paper, because the errors would accumulate much more if we use a not-fully trained model to auto-regressively generate output words.
• Table 2 contains the same results as Table 1, except that its agreement score A(u, v) is now Kendall Tau rank correlation coefficient, which is a more popular metric.
• Table 4 contains the same results as Table 1, except that results are now rounded to two decimal places.
• Table 6 consists of the same results as Table 1, except that the statistics are calculated over the training set rather than the validation set.
• Table 3, Table 5, and Table 7 contain the translation results from the above 3 mentioned tables respectively, except that p̂ is defined as BLEU score rather than token accuracy, and hence the contextualized metric interpretation ξ changes correspondingly.
A.5 DATASET DESCRIPTION
We summarize the datasets that we use for classification and machine translation. See Table 8 for details on train/test splits and median sequence lengths for each dataset.
IMDB Sentiment Analysis Maas et al. (2011) A sentiment analysis data set with 50,000 (25,000 train and 25,000 test) IMDB movie reviews and their corresponding positive or negative sentiment.
AG News Corpus Zhang et al. (2015) 120,000 news articles and their corresponding topic (world, sports, business, or science/tech). We classify between the world and business articles.
20 Newsgroups [4] A news data set containing around 18,000 newsgroup articles split between 20 different labeled categories. We classify between baseball and hockey articles.
Stanford Sentiment Treebank Socher et al. (2013) A data set for classifying the sentiment of movie reviews, labeled on a scale from 1 (negative) to 5 (positive). We remove all movies labeled as 3, and classify between 4 or 5 and 1 or 2.
Multi Domain Sentiment Data set [5] Approximately 40,000 Amazon reviews from various product categories labeled with a corresponding positive or negative label. Since some of the sequences are particularly long, we only use sequences of length less than 400 words.
Yelp Open Data Set [6] 20,000 Yelp reviews and their corresponding star rating from 1 to 5. We classify between reviews with rating ≤ 2 and ≥ 4. Multi-30k Elliott et al. (2016) English to German translation. The data is from translated image captions.
IWSLT’14 Cettolo et al. (2015) German to English translation. The data is from translated TED talk transcriptions.
News Commentary v14 Cettolo et al. (2015) A collection of translation news commentary datasets in different languages from WMT19 [7]. We use the following translation splits: English-Dutch (En-Nl), English-Portuguese (En-Pt), and Italian-Portuguese (It-Pt). In pre-processing for this dataset, we removed all purely numerical examples.
A.6 α FAILS WHEN β IS FROZEN
For each classification task we initialize a random model and freeze all parameters except for the attention layer (frozen β model). We then compute the correlation between this trained attention (defined as αfr) and the normal attention α. Table 9 reports this correlation at the iteration where αfr is most correlated with α on the validation set. As shown in Table 9, the left column is consistently lower than the right column. This indicates that the model can learn output relevance without attention, but not vice versa.
A.7 TRAINING βuf
We find that A(α, β^{uf}(τ)) first increases and then decreases as training proceeds (i.e. as τ increases), so we chose the maximum agreement over the course of training to report in Table 1. Since this trend is consistent across all datasets, our choice minimally inflates the agreement measure, and is comparable to the practice of reporting dev set results. As discussed in Section 6.1, training under uniform attention for too long might bring unintuitive results.
A.8 MODEL AND TRAINING DETAILS
Classification Our model uses dimension 300 GloVe-6B pre-trained embeddings to initialize the token embeddings where they aligned with our vocabulary. The sequences are encoded with a 1 layer bidirectional LSTM of dimension 256. The rest of the model, including the attention mechanism, is exactly as described in 2.4. Our model has 1,274,882 parameters excluding embeddings. Since each classification set has a different vocab size each model has a slightly different parameter count when considering embeddings: 19,376,282 for IMDB, 10,594,382 for AG News, 5,021,282 for 20
[4] http://qwone.com/~jason/20Newsgroups/
[5] https://www.cs.jhu.edu/~mdredze/datasets/sentiment/
[6] https://www.yelp.com/dataset
[7] http://www.statmt.org/wmt19/translation-task.html
Newsgroups, 4,581,482 for SST, 13,685,282 for Yelp, 12,407,882 for Amazon, and 2,682,182 for SMS.
Translation We use a bidirectional two-layer LSTM of dimension 256 to encode the source and use the last hidden state h_L as the first hidden state of the decoder. The attention and outputs are then calculated as described in Section 2. The learnable neural network before the outputs mentioned in Section 2 is a one-hidden-layer model with ReLU non-linearity. The hidden layer is dimension 256. Our model contains 6,132,544 parameters excluding embeddings and 8,180,544 including embeddings on all datasets.
Permutation Copying We use single directional single layer LSTM with hidden dimension 256 for both the encoder and the decoder.
Classification Procedure For all classification datasets we used a batch size of 32. We trained for 4000 iterations on each dataset. For each dataset we train on the pre-defined training set if the dataset has one. Additionally, if a dataset had a predefined test set, we randomly sample at most 4000 examples from this test set for validation. Specific dataset split sizes are given in Table 8.
Classification Evaluation We evaluated each model at steps 0, 10, 50, 100, 150, 200, 250, and then every 250 iterations after that.
Classification Tokenization We tokenized the data at the word level. We mapped all words occurring less than 3 times in the training set to <unk>. For 20 Newsgroups and AG News we mapped all non-single digit integer ”words” to <unk>. For 20 Newsgroups we also split words with the ” ” character.
Classification Training We trained all classification models on a single GPU. Some datasets took slightly longer to train than others (largely depending on average sequence length), but each train took at most 45 minutes.
Translation Hyper Parameters For translation all hidden states in the model are dimension 256. We use the sequence to sequence architecture described above. The LSTMs used dropout 0.5.
Translation Procedure For all translation tasks we used batch size 16 when training. For IWSLT’14 and Multi-30k we used the provided dataset splits. For the News Commentary v14 datasets we did a 90-10 split of the data for training and validation respectively.
Translation Evaluation We evaluated each model at steps 0, 50, 100, 500, 1000, 1500, and then every 2000 iterations after that.
Translation Training We trained all translation models on a single GPU. IWSLT’14, and the News Commentary datasets took approximately 5-6 hours to train, and multi-30k took closer to 1 hour to train.
Translation Tokenization We tokenized both translation datasets using the SentencePiece tokenizer trained on the corresponding train set to a vocab size of 8,000. We used a single tokenization for source and target tokens, and accordingly also used the same embedding matrix for target and source sequences.
A.9 A NOTE ON SMS DATASET
In addition to the classification datasets reported in the tables, we also ran experiments on the SMS Spam Collection V.1 dataset [8]. The attention learned from this dataset was very high variance, and so two different random seeds would consistently produce attentions that did not correlate much. The dataset itself was also a bit of an outlier; it had shorter sequence lengths than any of the other datasets (median sequence length 13 on train and validation set), it also had the smallest training set out of all our datasets (3500 examples), and it had by far the smallest vocab (4691 unique tokens). We decided not to include this dataset in the main paper due to these unusual results and leave further exploration to future works.
A.10 LOGISTIC REGRESSION PROXY MODEL
Our proxy model can be shown to be equivalent to a bag-of-words logistic regression model in the classification case. Specifically, we define a bag-of-words logistic regression model to be:
∀t, p_t = σ(β^{log} x),   (52)
where x ∈ R^{|I|}, β^{log} ∈ R^{|O|×|I|}, and σ is the softmax function. The entries in x are the number of times each word occurs in the input sequence, normalized by the sequence length, and β^{log} is learned. This is equivalent to:
[8] http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/
∀t, p_t = σ( (1/L) Σ_{l=1}^{L} β^{log}_{x_l} ).   (53)
Here β^{log}_i indicates the ith column of β^{log}; these are the entries in β^{log} corresponding to predictions for the ith word in the vocab. Now it is easy to arrive at the equivalence between logistic regression and our proxy model. If we restrict the rank of β^{log} to be at most min(d, |O|, |I|) by factoring it as β^{log} = WE where W ∈ R^{|O|×d} and E ∈ R^{d×|I|}, then the logistic regression looks like:
∀t, p_t = σ( (1/L) Σ_{l=1}^{L} W E_{x_l} ),   (54)
which is equivalent to our proxy model:
∀t, p_t = σ( (1/L) Σ_{l=1}^{L} W e_{x_l} ).   (55)
Since d = 256 for the proxy model, which is larger than |O| = 2 in the classification case, the proxy model is not rank limited and is hence fully equivalent to the logistic regression model. Therefore β^{px} can be interpreted as "keywords" in the same way that the logistic regression weights can.
To empirically verify this equivalence, we trained a logistic regression model with ℓ2 regularization on each of our classification datasets. To pick the optimal regularization level, we did a sweep of regularization coefficients across ten orders of magnitude and picked the one with the best validation accuracy. We report results for A(β^{uf}, β^{log}) in comparison to A(β^{uf}, β^{px}) in Table 10 [9]. Note that these numbers are similar but not exactly equivalent. The reason is that the proxy model did not use ℓ2 regularization, while logistic regression did.
[9] These numbers were obtained from a retrain of all the models in the main table, so for instance, the LSTM model used to produce β^{uf} might not be exactly the same as the one used for the results in all the other tables due to random seed difference. | 1. What is the main contribution of the paper regarding attention mechanisms?
2. What are the strengths and weaknesses of the proposed framework?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the limitations of the proposed approach? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a framework explaining how attention might be learned. First, the model would get the knowledge to translate individual words (KTIW) based on word co-occurrences, which can be learned if the attention weights are uniform. KTIW then drives the learning of the attention mechanism.
Strengths And Weaknesses
Strengths
The experiment showing that copying sequences is a difficult task under some constraints is insightful.
The paper shows how multi-head attention can improve learning dynamics.
Weaknesses
The paper presents a plausible 2-stage learning approach for attention, but it doesn't really show that this happens in practice. For example, we could monitor the entropies of the output distributions and of the attention weights during training.
The simplifying assumptions are quite strong. Under the proposed framework, we may not be able to explain phenomena such as Figure 9 in [1], where attention does not necessarily match word alignment.
[1] Koehn and Knowles. Six Challenges for Neural Machine Translation. First Workshop on Neural Machine Translation. 2017
Clarity, Quality, Novelty And Reproducibility
The quality of this paper would increase if it more clearly demonstrated that the proposed framework matches learning dynamics on non-toy tasks. The paper is mostly clear. To my knowledge, the work is original, but the contributions are arguably limited. The paper should be mostly reproducible with the attached code. |
ICLR | Title
How Does SimSiam Avoid Collapse Without Negative Samples? A Unified Understanding with Self-supervised Contrastive Learning
Abstract
To avoid collapse in self-supervised learning (SSL), a contrastive loss is widely used but often requires a large number of negative samples. Without negative samples yet achieving competitive performance, a recent work (Chen & He, 2021) has attracted significant attention for providing a minimalist simple Siamese (SimSiam) method to avoid collapse. However, the reason for how it avoids collapse without negative samples remains not fully clear and our investigation starts by revisiting the explanatory claims in the original SimSiam. After refuting their claims, we introduce vector decomposition for analyzing the collapse based on the gradient analysis of the l2-normalized representation vector. This yields a unified perspective on how negative samples and SimSiam alleviate collapse. Such a unified perspective comes timely for understanding the recent progress in SSL.
1 INTRODUCTION
Beyond the success of NLP (Lan et al., 2020; Radford et al., 2019; Devlin et al., 2019; Su et al., 2020; Nie et al., 2020), self-supervised learning (SSL) has also shown its potential in the field of vision tasks (Li et al., 2021; Chen et al., 2021; El-Nouby et al., 2021). Without the ground-truth label, the core of most SSL methods lies in learning an encoder with augmentation-invariant representation (Bachman et al., 2019; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Grill et al., 2020). Specifically, they often minimize the representation distance between two positive samples, i.e. two augmented views of the same image, based on a Siamese network architecture (Bromley et al., 1993). It is widely known that for such Siamese networks there exists a degenerate solution, i.e. all outputs “collapsing” to an undesired constant (Chen et al., 2020a; Chen & He, 2021). Early works have attributed the collapse to lacking a repulsive component in the optimization goal and adopted contrastive learning (CL) with negative samples, i.e. views of different samples, to alleviate this problem. Introducing momentum into the target encoder, BYOL shows that Siamese architectures can be trained with only positive pairs. More recently, SimSiam (Chen & He, 2021) has caught great attention by further simplifying BYOL by removing the momentum encoder, which has been seen as a major milestone achievement in SSL for providing a minimalist method for achieving competitive performance. However, more investigation is required for the following question:
How does SimSiam avoid collapse without negative samples?
Our investigation starts with revisiting the explanatory claims in the original SimSiam paper (Chen & He, 2021). Notably, two components, i.e. stop gradient and predictor, are essential for the success of SimSiam (Chen & He, 2021). The reason has been mainly attributed to the stop gradient (Chen & He, 2021) by hypothesizing that it implicitly involves two sets of variables and SimSiam behaves like alternating between optimizing each set. Chen & He argue that the predictor h is helpful in SimSiam because h fills the gap to approximate expectation over augmentations (EOA).
Unfortunately, the above explanatory claims are found to be flawed due to reversing the two paths with and without gradient (see Sec. 2.2). This motivates us to find an alternative explanation, for which we introduce a simple yet intuitive framework for facilitating the analysis of collapse in SSL.
∗equal contribution
Specifically, we propose to decompose a representation vector into center and residual components. This decomposition facilitates understanding which gradient component is beneficial for avoiding collapse. Under this framework, we show that a basic Siamese architecture cannot prevent collapse, for which an extra gradient component needs to be introduced. With SimSiam interpreted as processing the optimization target with an inverse predictor, the analysis of its extra gradient shows that (a) its center vector helps prevent collapse via the de-centering effect; (b) its residual vector achieves dimensional de-correlation which also alleviates collapse.
Moreover, under the same gradient decomposition, we find that the extra gradient caused by negative samples in InfoNCE (He et al., 2019; Chen et al., 2020b;a; Tian et al., 2019; Khosla et al., 2020) also achieves de-centering and de-correlation in the same manner. It contributes to a unified understanding of various frameworks in SSL, which also inspires the investigation of hardness-awareness (Wang & Liu, 2021) from the inter-anchor perspective (Zhang et al., 2022) for further bridging the gap between CL and non-CL frameworks in SSL. Finally, simplifying the predictor for more explainable SimSiam, we show that a single bias layer is sufficient for preventing collapse.
The basic experimental settings for our analysis are detailed in Appendix A.1 with a more specific setup discussed in the context. Overall, our work is the first attempt for performing a comprehensive study on how SimSiam avoids collapse without negative samples. Several works, however, have attempted to demystify the success of BYOL (Grill et al., 2020), a close variant of SimSiam. A technical report (Fetterman & Albrecht, 2020) has suggested the importance of batch normalization (BN) in BYOL for its success, however, a recent work (Richemond et al., 2020) refutes their claim by showing BYOL works without BN, which is discussed in Appendix B.
2 REVISITING SIMSIAM AND ITS EXPLANATORY CLAIMS
l2-normalized vector and optimization goal. SSL trains an encoder f for learning discriminative representation and we denote such representation as a vector z, i.e. f(x) = z where x is a certain input. For the augmentation-invariant representation, a straightforward goal is to minimize the distance between the representations of two positive samples, i.e. augmented views of the same image, for which mean squared error (MSE) is a default choice. To avoid scale ambiguity, the vectors are often l2-normalized, i.e. Z = z/||z|| (Chen & He, 2021), before calculating the MSE:
L_MSE = (Z_a − Z_b)^2 / 2 − 1 = − Z_a · Z_b = L_cosine,   (1)
which shows the equivalence of a normalized MSE loss to the cosine loss (Grill et al., 2020).
Collapse in SSL and solution of SimSiam. Based on a Siamese architecture, the loss in Eq 1 causes the collapse, i.e. f always outputs a constant regardless of the input variance. We refer to this Siamese architecture with loss Eq 1 as Naive Siamese in the remainder of paper. Contrastive loss with negative samples is a widely used solution (Chen et al., 2020a). Without using negative samples, SimSiam solves the collapse problem via predictor and stop gradient, based on which the encoder is optimized with a symmetric loss:
L_SimSiam = − ( P_a · sg(Z_b) + P_b · sg(Z_a) ),   (2)
where sg(·) is stop gradient and P is the output of predictor h, i.e. p = h(z) and P = p/||p||.
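For reference, a minimal PyTorch sketch of the symmetric loss in Eq. 2, where `detach()` plays the role of the stop-gradient sg(·); the function and variable names are illustrative.

```python
import torch.nn.functional as F

def simsiam_loss(p_a, p_b, z_a, z_b):
    """Symmetric negative-cosine loss of Eq. 2 with stop-gradient on the encoder targets."""
    def neg_cos(p, z):
        p = F.normalize(p, dim=-1)            # P = p / ||p||
        z = F.normalize(z.detach(), dim=-1)   # sg(Z): the target carries no gradient
        return -(p * z).sum(dim=-1).mean()
    return neg_cos(p_a, z_b) + neg_cos(p_b, z_a)
```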
2.1 REVISING EXPLANATORY CLAIMS IN SIMSIAM
Interpreting stop gradient as AO. Chen & He hypothesize that the stop gradient in Eq 2 is an implementation of Alternating between the Optimization of two sub-problems, which is denoted as AO. Specifically, with the loss considered as L(θ, η) = E_{x,T}[ ||F_θ(T(x)) − η_x||^2 ], the optimization objective min_{θ,η} L(θ, η) can be solved by alternating η^t ← arg min_η L(θ^t, η) and θ^{t+1} ← arg min_θ L(θ, η^t). It is acknowledged that this hypothesis does not fully explain why the collapse is prevented (Chen & He, 2021). Nonetheless, they mainly attribute SimSiam success to the stop gradient with the interpretation that AO might make it difficult to approach a constant ∀x.
Interpreting predictor as EOA. The AO problem (Chen & He, 2021) is formulated independent of predictor h, for which they believe that the usage of predictor h is related to approximating EOA for filling the gap of ignoring E_T[·] in a sub-problem of AO. The approximation of E_T[·] is summarized
in Appendix A.2. Chen & He support their interpretation by proof-of-concept experiments. Specifically, they show that updating η_x with a moving average, η_x ← m * η_x + (1 − m) * F_{θ^t}(T'(x)), can help prevent collapse without predictor (see Fig. 1 (b)). Given that the training completely fails when the predictor and moving average are both removed, at first sight, their reasoning seems valid.
2.2 DOES THE PREDICTOR FILL THE GAP TO APPROXIMATE EOA?
Reasoning flaw. Considering the stop gradient, we divide the framework into two sub-models with different paths and term them Gradient Path (GP) and Stop Gradient Path (SGP). For SimSiam, only the sub-model with GP includes the predictor (see Fig. 1 (a)). We point out that their reasoning flaw of predictor analysis lies in the reverse of GP and SGP. By default, the moving-average sub-model, as shown in Fig. 1 (b), is on the same side as SGP. Note that Fig. 1 (b) is conceptually similar to Fig. 1 (c) instead of Fig. 1 (a). It is worth mentioning that the Mirror SimSiam in Fig. 1 (c) is what stop gradient in the original SimSiam avoids. Therefore, it is problematic to perceive h as EOA.
Explicit EOA does not prevent collapse. (Chen & He, 2021) points out that "in practice, it would be unrealistic to actually compute the expectation E_T[·]. But it may be possible for a neural network (e.g., the predictor h) to learn to predict the expectation, while the sampling of T is implicitly distributed across multiple epochs." If implicitly sampling across multiple epochs is beneficial, explicitly sampling a sufficiently large number N of augmentations in a batch with the latest model would be more beneficial for approximating E_T[·]. However, Table 1 shows that the collapse still occurs, which suggests that the equivalence between predictor and EOA does not hold.
2.3 ASYMMETRIC INTERPRETATION OF PREDICTOR WITH STOP GRADIENT IN SIMSIAM
Symmetric Predictor does not prevent collapse. The difference between Naive Siamese and SimSiam lies in whether the gradient in backward propagation flows through a predictor; however, we show that this propagation helps avoid collapse only when the predictor is not included in the SGP path. With h being trained the same as in Eq 2, we optimize the encoder f by replacing the Z in Eq 2 with P. The results in Table 2 show that it still leads to collapse. Actually, this is well expected by perceiving h to be part of the new encoder F, i.e. p = F(x) = h(f(x)). In other words, the symmetric architectures with and without predictor h both lead to collapse.
Predictor with stop gradient is asymmetric. Clearly, how SimSiam avoids collapse lies in its asymmetric architecture, i.e. one path with h and the other without h. Under this asymmetric architecture, the role of stop gradient is to only allow the path with predictor to be optimized with the encoder output as the target, not vice versa. In other words, the SimSiam avoids collapse by excluding Mirror SimSiam (Fig. 1 (c)) which has a loss (mirror-like Eq 2) asLMirror = −(Pa ·Zb+Pb ·Za), where stop gradient is put on the input of h, i.e. pa = h(sg[za]) and pb = h(sg[zb]).
Predictor vs. inverse predictor. We interpret h as a function mapping from z to p, and introduce a conceptual inverse mapping h−1, i.e. z = h−1(p). Here, as shown in Table 2, SimSiam with symmetric predictor (Fig. 2 (b)) leads to collapse, while SimSiam (Fig. 1 (a)) avoids collapse. With the conceptual h−1, we interpret Fig. 1 (a) the same as Fig. 2 (c) which differs from Fig. 2 (b) via changing the optimization target from pb to zb, i.e. zb = h−1(pb). This interpretation
suggests that the collapse can be avoided by processing the optimization target with h−1. By contrast, Fig. 1 (c) and Fig. 2 (a) both lead to collapse, suggesting that processing the optimization target with h is not beneficial for preventing collapse. Overall, asymmetry alone does not guarantee collapse avoidance, which requires the optimization target to be processed by h−1 not h.
Trainable inverse predictor and its implication on EOA. In the above, we propose a conceptual inverse predictor h−1 in Fig. 2 (c), however, it remains yet unknown whether such an inverse predictor is experimentally trainable. A detailed setup for this investigation is reported in Appendix A.5. The results in Fig. 3 show that a learnable h−1 leads to slightly inferior performance, which is expected because h−1 cannot make the trainable inverse predictor output z∗b completely the same as zb. Note that it would be equivalent to SimSiam if z∗b = zb. Despite a slight performance drop, the results confirm that h−1 is trainable. The fact that h−1 is trainable provides additional evidence that the role h plays in SimSiam is not EOA
because theoretically h^{-1} cannot restore a random augmentation T' from an expectation p, where p = h(z) = E_T[ F_{θ^t}(T(x)) ].
3 VECTOR DECOMPOSITION FOR UNDERSTANDING COLLAPSE
By default, InfoNCE (Chen et al., 2020a) and SimSiam (Chen & He, 2021) both adopt l2normalization in their loss for avoiding scale ambiguity. We treat the l2-normalized vector, i.e. Z, as the encoder output, which significantly simplifies gradient derivation and the following analysis.
Vector decomposition. For the purpose of analysis, we propose to decompose Z into two parts, Z = o + r, where o, r denote center vector and residual vector respectively. Specifically, the center vector o is defined as an average of Z over the whole representation space, o_z = E[Z]. However, we approximate it with all vectors in the current mini-batch, i.e. o_z = (1/M) Σ_{m=1}^{M} Z_m, where M is the mini-batch size. We define the residual vector r as the residual part of Z, i.e. r = Z − o_z.
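A short PyTorch sketch of this decomposition and of the ratios m_o and m_r introduced in Sec. 3.1 below, computed on a mini-batch of l2-normalized representations; the function name is illustrative.

```python
import torch

def center_residual_stats(Z):
    """Z: (M, d) l2-normalized representations. Returns batch-averaged m_o and m_r."""
    o = Z.mean(dim=0, keepdim=True)          # center vector o_z (mini-batch estimate of E[Z])
    r = Z - o                                # residual vectors
    z_norm = Z.norm(dim=1)                   # ||z|| (equals 1 after l2-normalization)
    m_o = (o.norm() / z_norm).mean().item()  # ratio of the center component
    m_r = (r.norm(dim=1) / z_norm).mean().item()
    return m_o, m_r
```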
3.1 COLLAPSE FROM THE VECTOR PERSPECTIVE
Collapse: from result to cause. A Naive Siamese is well expected to collapse since the loss is designed to minimize the distance between positive samples, for which a constant constitutes an optimal solution to minimize such loss. When the collapse occurs, ∀i, Z_i = (1/M) Σ_{m=1}^{M} Z_m = o_z, where i denotes a random sample index, which shows the constant vector is o_z in this case. This interpretation only suggests a possibility that a dominant o can be one of the viable solutions, while the optimization, such as SimSiam, might still lead to a non-collapse solution. This merely describes o as the consequence of the collapse, and our work investigates the cause of such collapse through analyzing the influence of individual gradient components, i.e. o and r during training.
Competition between o and r. Complementary to the Standard Deviation (Std) (Chen & He, 2021) for indicating collapse, we introduce the ratio of o in z, i.e. m_o = ||o||/||z||, where ||·|| is the L2 norm. Similarly, the ratio of r in z is defined as m_r = ||r||/||z||. When collapse happens, i.e. all vectors Z are close to the center vector o, m_o approaches 1 and m_r approaches 0, which is not desirable for SSL. A desirable case would be a relatively small m_o and a relatively large m_r, suggesting a relatively small (large) contribution of o (r) in each Z. We interpret the cause of collapse as a competition between o and r where o dominates over r, i.e. m_o ≫ m_r. For Eq 1, the derived negative gradient on Z_a (ignoring Z_b for simplicity due to symmetry) is shown as:
G_cosine = − ∂L_MSE/∂Z_a = Z_b − Z_a   ⟺   − ∂L_cosine/∂Z_a = Z_b,   (3)
where the gradient component Za is a dummy term because the loss −Za · Za = −1 is a constant having zero gradient on the encoder f .
Conjecture1. With Z_a = o_z + r_a, we conjecture that the gradient component of o_z is expected to update the encoder to boost the center vector thus increase m_o, while the gradient component of r_a is expected to behave in the opposite direction to increase m_r. A random gradient component is expected to have a relatively small influence.
To verify the above conjecture, we revisit the dummy gradient term Z_a. We design loss −Z_a · sg(o_z) and −Z_a · sg(Z_a − o_z) to show the influence of gradient component o_z and r_a respectively. The results in Fig. 4 show that the gradient component o_z has the effect of increasing m_o while decreasing m_r. On the
contrary, ra helps increase mr while decreasing mo. Overall, the results verify Conjecture1.
3.2 EXTRA GRADIENT COMPONENT FOR ALLEVIATING COLLAPSE
Revisit collapse in a symmetric architecture. Based on Conjecture1, here, we provide an intuitive interpretation of why a symmetric Siamese architecture, such as Fig. 2 (a) and (b), cannot be trained without collapse. Take Fig. 2 (a) as an example: the gradient in Eq 3 can be interpreted as two equivalent forms, from which we choose Z_b − Z_a = (o_z + r_b) − (o_z + r_a) = r_b − r_a. Since r_b comes from the same positive sample as r_a, it is expected that r_b also increases m_r; however, this effect is expected to be smaller than that of r_a, thus causing collapse.
Basic gradient and Extra gradient components. The negative gradient on Za in Fig. 2 (a) is derived as Zb, while that on Pa in Fig. 2 (b) is derived as Pb. We perceive Zb and Pb in these basic Siamese architectures as the Basic Gradient. Our above interpretation shows that such basic components cannot prevent collapse, for which an Extra Gradient component, denoted as Ge, needs to be introduced to break the symmetry. As the term suggests, Ge is defined as a gradient term that is relative to the basic gradient in a basic Siamese architecture. For example, negative samples can be introduced to Naive Siamese (Fig. 2 (a)) for preventing collapse, where the extra gradient caused by negative samples can thus be perceived as Ge with Zb as the basic gradient. Similarly, we can also disentangle the negative gradient on Pa in SimSiam (Fig. 1 (a)), i.e. Zb, into a basic gradient (which is Pb) and Ge which is derived as Zb −Pb (note that Zb = Pb + Ge). We analyze how Ge prevents collapse via studying the independent roles of its center vector oe and residual vector re.
3.3 A TOY EXAMPLE EXPERIMENT WITH NEGATIVE SAMPLE
Which repulsive component helps avoid collapse? Existing works often attribute the collapse in Naive Siamese to lacking a repulsive part during the optimization. This explanation has motivated previous works to adopt contrastive learning, i.e. attracting the positive samples while repulsing the negative samples. We experiment with a simple triplet loss [1], L_tri = − Z_a · sg(Z_b − Z_n), where Z_n indicates the representation of a Negative sample. The derived negative gradient on Z_a is Z_b − Z_n, where Z_b is the basic gradient component and thus G_e = − Z_n in this setup. For a sample representation, what determines it as a positive sample for attracting or a negative sample for repulsing is the residual component, thus it might be tempting to interpret that r_e is the key component of the repulsive part that avoids the collapse. However, the results in Table 3 show that the component beneficial for preventing collapse inside G_e is o_e instead of r_e. Specifically, to explore the individual influence of o_e and r_e in G_e, we design two experiments by removing one component while keeping the other one. In the first experiment, we remove the r_e in G_e while keeping the o_e. By contrast, the o_e is removed while keeping the r_e in the second experiment. In contrast to what existing explanations may expect, we find that it is the center component o_e that prevents collapse. With Conjecture1, a gradient component alleviates collapse if it has a negative center vector. In this setup, o_e = − o_z, thus o_e has the de-centering role for preventing collapse. On the contrary, r_e does not prevent collapse and keeping r_e even decreases the performance (36.21% < 47.41%). Since the negative sample is randomly chosen, r_e just behaves like random noise in the optimization, decreasing performance.
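One plausible way to implement the two ablations of G_e = −Z_n described above is sketched below in PyTorch; we assume the batch center o_z is estimated as in the decomposition of Sec. 3, treat the whole target under stop-gradient, and use illustrative names.

```python
import torch

def triplet_ablation_loss(Z_a, Z_b, Z_n, o_z, keep="center"):
    """Toy-example loss -Z_a . sg(Z_b + G_e), keeping only one part of G_e = -Z_n.

    keep="center": G_e reduced to o_e = -o_z; keep="residual": G_e reduced to r_e = -(Z_n - o_z)."""
    if keep == "center":
        target = Z_b - o_z
    else:
        target = Z_b - (Z_n - o_z)
    return -(Z_a * target.detach()).sum()    # detach() implements sg(.)
```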
3.4 DECOMPOSED GRADIENT ANALYSIS IN SIMSIAM
It is challenging to derive the gradient on the encoder output in SimSiam due to a nonlinear MLP module in h. The negative gradient on P_a for L_SimSiam in Eq 2 can be derived as
G_SimSiam = − ∂L_SimSiam/∂P_a = Z_b = P_b + (Z_b − P_b) = P_b + G_e,   (4)
Table 4: Gradient component analysis for SimSiam.
o_e   r_e   Collapse   Top-1 (%)
✓     ✓     ×          66.62
✓     ×     ×          48.08
×     ✓     ×          66.15
×     ×     ✓          1
where Ge indicates the aforementioned extra gradient component. To investigate the influence of oe and re on the collapse, similar to the analysis with the toy example experiment in Sec. 3.3, we design the experiment by removing one component while keeping the other. The results are reported in Table 4. As expected, the model collapses when both components in Ge are removed and the best performance is achieved when both components are kept. Interestingly, the model does not collapse when
either oe or re is kept. To start, we analyze how oe affects the collapse based on Conjecture1.
How o_e alleviates collapse in SimSiam. Here, o_p is used to denote the center vector of P to differentiate from the above introduced o_z for denoting that of Z. In this setup G_e = Z_b − P_b, thus the center component of the extra gradient is derived to be o_e = o_z − o_p. With Conjecture1, it is well expected that o_e helps prevent collapse if o_e contains negative o_p, since the analyzed vector is P_a. To determine the amount of the component o_p existing in o_e, we measure the cosine similarity between o_e − η_p o_p and o_p for a wide range of η_p. The results in Fig. 5 (a) show that their cosine similarity is zero when η_p is around −0.5, suggesting o_e has ≈ −0.5 o_p. With Conjecture1, this negative η_p explains why SimSiam avoids collapse from the perspective of de-centering.
How o_e causes collapse in Mirror SimSiam. As mentioned above, the collapse occurs in Mirror SimSiam, which can also be explained by analyzing its o_e. Here, o_e = o_p − o_z, for which we evaluate the amount of the component o_z existing in o_e via reporting the similarity between o_e − η_z o_z
[1] Note that the triplet loss here does not have the clipping form as in Schroff et al. (2015), for simplicity.
and oz . The results in Fig. 5 (a) show that their cosine similarity is zero when ηz is set to around 0.2. This positive ηz explains why Fig. 1(c) causes collapse from the perspective of de-centering.
Overall, we find that processing the optimization target with h−1, as in Fig. 2 (c), alleviates collapse (ηp ≈ −0.5), while processing it with h, as in Fig. 1(c), actually strengthens the collapse (ηz ≈ 0.2). In other words, via the analysis of oe, our results help explain how SimSiam avoids collapse as well as how Mirror SimSiam causes collapse from a straightforward de-centering perspective.
Relation to prior works. Motivated from preventing the collapse to a constant, multiple prior works, such as W-MSE (Ermolov et al., 2021), Barlow-twins (Zbontar et al., 2021), DINO (Caron et al., 2021), explicitly adopt de-centering to prevent collapse. Despite various motivations, we find that they all implicitly introduce an oe that contains a negative center vector. The success of their approaches aligns well with our Conjecture1 as well as our above empirical results. Based on our findings, we argue that the effect of de-centering can be perceived as oe having a negative center vector. With this interpretation, we are the first to demonstrate that how SimSiam with predictor and stop gradient avoids collapse can be explained from the perspective of de-centering.
Beyond de-centering for avoiding collapse. In the toy example experiment in Sec. 3.3, re is found to be not beneficial for preventing collapse and keeping re even decreases the performance. Interestingly, as shown in Table 4, we find that re alone is sufficient for preventing collapse and achieves comparable performance as Ge. This can be explained from the perspective of dimensional de-correlation, which will be discussed in Sec. 3.5.
3.5 DIMENSIONAL DE-CORRELATION HELPS PREVENT COLLAPSE
Conjecture2 and motivation. We conjecture that dimensional de-correlation increases m_r for preventing collapse. The motivation is straightforward as follows. The dimensional correlation would be minimal if only a single dimension has a very high value for every individual class and the dimension changes for different classes. In the other extreme case, all the dimensions have the same values, which is equivalent to having a single dimension and already collapses by itself in the sense of losing representation capacity. Conceptually, r_e has no direct influence on the center vector, thus we interpret that r_e prevents collapse through increasing m_r.
To verify the above conjecture, we train SimSiam normally with the loss in Eq 2 and train for several epochs with the loss in Eq 1 for intentionally decreasing the mr to close to zero. Then, we train the loss with only a correlation regularization term, which is detailed in Appendix A.6. The results in Fig. 5 (b) show that this regularization term increases mr at a very fast rate.
Dimensional de-correlation in SimSiam. Assuming h only has a single FC layer to exclude the influence of o_e, the weights in the FC are expected to learn the correlation between different dimensions of the encoder output. This interpretation echoes the finding that the eigenspace of the h weight aligns well with that of the correlation matrix (Tian et al., 2021). In essence, h is trained to maximize the cosine similarity between h(z_a) and I(z_b), where I is the identity mapping. Thus, an h that learns the correlation is optimized close to I, which is conceptually equivalent to optimizing with the goal of de-correlation for Z. As shown in Table 4, for SimSiam, r_e alone also prevents collapse, which
is attributed to the de-correlation effect since re has no de-centering effect. We observe from Fig. 6 that except in the first few epochs, SimSiam decreases the covariance during the whole training. Fig. 6 also reports the results for InfoNCE which will be discussed in Sec. 4.
4 TOWARDS A UNIFIED UNDERSTANDING OF RECENT PROGRESS IN SSL
De-centering and de-correlation in InfoNCE. InfoNCE loss is a default choice in multiple seminal contrastive learning frameworks (Sohn, 2016; Wu et al., 2018; Oord et al., 2018; Wang & Liu, 2021). The derived negative gradient of InfoNCE on Z_a is proportional to Z_b + Σ_{i=0}^{N} − λ_i Z_i, where λ_i = exp(Z_a · Z_i / τ) / Σ_{i=0}^{N} exp(Z_a · Z_i / τ), and Z_0 = Z_b for notation simplicity. See Appendix A.7 for the detailed derivation. The extra gradient component G_e = Σ_{i=0}^{N} − λ_i Z_i = − o_z − Σ_{i=0}^{N} λ_i r_i, for which o_e = − o_z and r_e = − Σ_{i=0}^{N} λ_i r_i. Clearly, o_e contains negative o_z as de-centering for avoiding collapse, which is equivalent to the toy example in Sec. 3.3 when r_e is removed. Regarding r_e, the main difference between L_tri in the toy example and InfoNCE is that the latter exploits a batch of negative samples instead of a random one. λ_i is proportional to exp(Z_a · Z_i), indicating that a large weight is put on the negative sample when it is more similar to the anchor Z_a, for which, intuitively, its dimensional values tend to have a high correlation with Z_a. Thus, r_e containing such negative representation with a high weight tends to decrease dimensional correlation. To verify this intuition, we measure the cosine similarity between r_e and the gradient on Z_a induced by a correlation regularization loss. The results in Fig. 5 (c) show that their gradient similarity is high for a wide range of temperature values, especially when τ is around 0.1 or 0.2, suggesting r_e achieves a similar role as an explicit regularization loss for performing de-correlation. Replacing r_e with o_e leads to a low cosine similarity, which is expected because o_e has no de-correlation effect.
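The decomposition of the InfoNCE extra gradient into o_e and r_e can be computed directly from the λ_i weights; below is a small PyTorch sketch under the assumption that the center o_z is estimated from the anchor's positive and negatives, with illustrative names.

```python
import torch

def infonce_extra_gradient(Z_a, Z_b, Z_neg, tau=0.1):
    """Return (o_e, r_e, lambda) for the extra gradient G_e = -sum_i lambda_i Z_i on Z_a.

    Z_a, Z_b: (d,) l2-normalized anchor / positive; Z_neg: (N, d) l2-normalized negatives."""
    Z_all = torch.cat([Z_b.unsqueeze(0), Z_neg], dim=0)    # Z_0 = Z_b, then the negatives
    lam = torch.softmax(Z_all @ Z_a / tau, dim=0)          # lambda_i weights
    G_e = -(lam.unsqueeze(1) * Z_all).sum(dim=0)           # -sum_i lambda_i Z_i
    o_z = Z_all.mean(dim=0)                                # center estimate (assumption: this batch)
    o_e = -o_z                                             # de-centering part
    r_e = G_e - o_e                                        # de-correlation part: -sum_i lambda_i r_i
    return o_e, r_e, lam
```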
The results of InfoNCE in Fig. 6 resemble those of SimSiam in terms of the overall trend. For example, InfoNCE also decreases the covariance value during training. Moreover, we also report the results of InfoNCE where re is removed to exclude the de-correlation effect. Removing re from the InfoNCE loss leads to a high covariance value during the whole training. Removing re also leads to a significant performance drop, which echoes the finding in (Bardes et al., 2021) that dimensional de-correlation is essential for competitive performance. Regarding how re in InfoNCE achieves de-correlation, formally, we hypothesize that the de-correlation effect in InfoNCE arises from the biased weights (λi) on negative samples. This hypothesis is corroborated by the temperature analysis in Fig. 7. We find that a higher temperature makes the weight distribution of λi more balanced, as indicated by a higher entropy of λi, which echoes the finding in (Wang & Liu, 2021). Moreover, we observe that a higher temperature also tends to increase the covariance value. Overall, with temperature as the control variable, we find that more balanced weights among negative samples decrease the de-correlation effect, which constitutes evidence for our hypothesis.
Unifying SimSiam and InfoNCE. At first sight, there is no conceptual similarity between SimSiam and InfoNCE, and this is why the community is intrigued by the success of SimSiam without negative samples. Through decomposing the Ge into oe and re, we find that for both, their oe plays the role of de-centering and their re behaves like de-correlation. In this sense, we bring two seemingly irrelevant frameworks into a unified perspective with disentangled de-centering and de-correlation.
Beyond SimSiam and InfoNCE. In SSL, there is a trend of explicitly manipulating de-centering and de-correlation, for which W-MSE (Ermolov et al., 2021), Barlow-twins (Zbontar et al., 2021), and DINO (Caron et al., 2021) are three representative works. They often achieve performance comparable to methods based on InfoNCE or SimSiam. Towards a unified understanding of recent progress in SSL, our work is most similar to a concurrent work (Bardes et al., 2021). Their work is mainly inspired by Barlow-twins (Zbontar et al., 2021) but decomposes its loss into three explicit components. By contrast, our work is motivated by the question of how SimSiam prevents collapse without negative samples. Their work claims that the variance component (equivalent to de-centering) is an indispensable component for preventing collapse, while we find that de-correlation by itself alleviates collapse. Overall, our work helps understand various frameworks in SSL from a unified perspective, which also inspires an investigation of inter-anchor hardness-awareness (Zhang et al., 2022) for further bridging the gap between CL and non-CL frameworks in SSL.
5 TOWARDS SIMPLIFYING THE PREDICTOR IN SIMSIAM
Based on our understanding of how SimSiam prevents collapse, we demonstrate that simple components (instead of a non-linear MLP in SimSiam) in the predictor are sufficient for preventing collapse. For example, to achieve dimensional de-correlation, a single FC layer might be sufficient because a single FC layer can realize the interaction among various dimensions. On the other hand, to achieve de-centering, a single bias layer might be sufficient because a bias vector can represent the center vector. Attaching an l2-normalization layer at the end of the encoder, i.e. before the predictor, is found to be critical for achieving the above goal.
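The following is a minimal sketch (module names are ours, not the authors' code) of the two simplified predictors discussed above, with the l2-normalization applied to the encoder output before the predictor.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasPredictor(nn.Module):
    """Predictor with a single bias layer: p = z + b."""
    def __init__(self, dim=2048):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(dim))
    def forward(self, z):
        z = F.normalize(z, dim=1)   # l2-normalize the encoder output before the predictor
        return z + self.bias

class TwoFCPredictor(nn.Module):
    """Predictor with two consecutive FC layers (no BN / ReLU in between)."""
    def __init__(self, dim=2048, hidden=512):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
    def forward(self, z):
        z = F.normalize(z, dim=1)
        return self.fc2(self.fc1(z))

z = torch.randn(8, 2048)
print(BiasPredictor()(z).shape, TwoFCPredictor()(z).shape)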
Predictor with FC layers. To learn the dimensional correlation, an FC layer is theoretically sufficient but can be difficult to train in practice. Inspired by the property that multiple FC layers make the training more stable even though they can be mathematically equivalent to a single FC layer (Bell-Kligler et al., 2019), we adopt two consecutive FC layers, which is equivalent to removing the BN and ReLU in the original predictor.
The training can be made more stable if a Tanh layer is applied on the adopted single FC after every iteration. Table 5 shows that they achieve performance comparable to that with a non-linear MLP.
Predictor with a bias layer. A predictor with a single bias layer can be utilized for preventing collapse (see Table 5) and the trained bias vector is found to have a cosine similarity of 0.99 with the center vector (see Table 6). A bias in the MLP predictor also has a high cosine similarity of 0.89, suggesting that it is not a coincidence. A theoretical derivation for justifying such a
high similarity as well as how this single bias layer prevents collapse are discussed in Appendix A.8.
6 CONCLUSION
We point out a hidden flaw in prior works for explaining the success of SimSiam and propose to decompose the representation vector and analyze the decomposed components of the extra gradient. We find that its center vector gradient helps prevent collapse via the de-centering effect and its residual gradient achieves de-correlation, which also alleviates collapse. Our further analysis reveals that InfoNCE achieves the two effects in a similar manner, which bridges the gap between SimSiam and InfoNCE and contributes to a unified understanding of recent progress in SSL. Towards simplifying the predictor, we have also found that a single bias layer is sufficient for preventing collapse.
ACKNOWLEDGEMENT
This work was partly supported by Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under grant No.2019-001396 (Development of framework for analyzing, detecting, mitigating of bias in AI model and training data), No.2021-0-01381 (Development of Causal AI through Video Understanding and Reinforcement Learning, and Its Applications to Real Environments) and No.2021-0-02068 (Artificial Intelligence Innovation Hub). During the rebuttal, multiple anonymous reviewers provided valuable advice that significantly improved the quality of this work. Thank you all.
A APPENDIX
A.1 EXPERIMENTAL SETTINGS
Self-supervised encoder training: Below are the settings for self-supervised encoder training. For simplicity, we mainly use the default settings in a popular open library termed solo-learn (da Costa et al., 2021).
Data augmentation and normalization: We use a series of transformations including RandomResizedCrop with scale [0.2, 1.0] and bicubic interpolation. ColorJitter (brightness 0.4, contrast 0.4, saturation 0.4, hue 0.1) is randomly applied with a probability of 0.8. Random grayscale (RandomGrayscale) is applied with p = 0.2. Horizontal flip is applied with p = 0.5. The images are normalized with mean (0.4914, 0.4822, 0.4465) and std (0.247, 0.243, 0.261).
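A sketch of how the described pipeline could be assembled with torchvision is shown below; the crop size of 32 is our assumption (the normalization statistics suggest a CIFAR-style dataset), and the exact solo-learn implementation may differ in minor details.

from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.2, 1.0),
                                 interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.4914, 0.4822, 0.4465), std=(0.247, 0.243, 0.261)),
])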
Network architecture and initialization: The backbone architecture is ResNet-18. The projection head contains three fully-connected (FC) layers, each followed by Batch Norm (BN) and ReLU, with the ReLU after the final FC layer removed, i.e. FC1+BN+ReLU+FC2+BN+ReLU+FC3+BN. All projection FC layers have 2048 neurons for the input, output, and hidden dimensions. The predictor head includes two FC layers as follows: FC1 + BN + ReLU + FC2. The input and output of the predictor both have a dimension of 2048, while the hidden dimension is 512. All layers of the network are initialized with the PyTorch defaults.
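A sketch (assembled by us from the description above) of the projector and predictor heads; the backbone output would additionally need to match the projector input dimension.

import torch.nn as nn

dim, pred_hidden = 2048, 512

projector = nn.Sequential(                     # FC1+BN+ReLU+FC2+BN+ReLU+FC3+BN
    nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(inplace=True),
    nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(inplace=True),
    nn.Linear(dim, dim), nn.BatchNorm1d(dim),
)

predictor = nn.Sequential(                     # FC1+BN+ReLU+FC2
    nn.Linear(dim, pred_hidden), nn.BatchNorm1d(pred_hidden), nn.ReLU(inplace=True),
    nn.Linear(pred_hidden, dim),
)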
Optimizer: The SGD optimizer is used for the encoder training. The batch size M is 256 and the learning rate is linearly scaled by the formula lr × M/256 with the base learning rate lr set to 0.5. The learning rate schedule adopts cosine decay, as in SimSiam. Momentum 0.9 and weight decay 1.0 × 10−5 are used for SGD. We use one GPU for each pre-training experiment. Following the practice of SimSiam, the learning rate of the predictor is fixed during the training. We use warmup training for the first 10 epochs. If not specified, by default we train the model for 1000 epochs.
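A minimal sketch (ours) of the optimizer setup with the stated linear learning-rate scaling rule; the placeholder model stands in for the encoder and predictor parameters.

import torch

base_lr, batch_size = 0.5, 256
lr = base_lr * batch_size / 256                     # lr x M/256

model = torch.nn.Linear(8, 8)                       # placeholder for encoder + predictor parameters
optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                            momentum=0.9, weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)
# In the actual training, a 10-epoch warmup precedes the cosine decay and the
# predictor's learning rate is kept fixed.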
Online linear evaluation: For the online linear evaluation, we also follow the practice in the solo-learn library (da Costa et al., 2021). The frozen features (2048 dimensions) from the training set are extracted (from the self-supervised pre-trained model) and fed into a linear classifier (1 FC layer with input 2048 and output 100). The test is performed on the validation set. The learning rate for the linear classifier is 0.1. Overall, we report Top-1 accuracy with the online linear evaluation in this work.
A.2 TWO SUB-PROBLEMS IN AO OF SIMSIAM
In the sub-problem ηt ← arg minη L(θt, η), ηt, indicating the latent representation of images at step t, is actually obtained through $\eta_x^t \leftarrow E_T[F_{\theta^t}(T(x))]$, where in practice they ignore $E_T[\cdot]$ and sample only one augmentation T′, i.e. $\eta_x^t \leftarrow F_{\theta^t}(T'(x))$. Conceptually, Chen & He equate the role of the predictor to EOA.
A.3 EXPERIMENTAL DETAILS FOR EXPLICIT EOA IN TABLE 1
In the Moving average experiment, we follow the setting in SimSiam (Chen & He, 2021) without the predictor. In the Same batch experiment, multiple augmentations, 10 for instance, are applied to the same image. With these multiple augmentations, we obtain the corresponding encoded representations, i.e. zi, i ∈ [1, 10]. We minimize the cosine distance between the first representation z1 and the average of the remaining vectors, i.e. $\bar{z} = \frac{1}{9}\sum_{i=2}^{10} z_i$. The gradient stop is put on the averaged vector. We also experimented with letting the gradient propagate backward through more augmentations; however, this consistently led to collapse.
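A minimal sketch (ours, following the description above; function and variable names are hypothetical) of the Same batch explicit-EOA loss, taking pre-computed encoder outputs for the ten views.

import torch
import torch.nn.functional as F

def same_batch_eoa_loss(z_views):
    # z_views: list of encoder outputs for 10 augmentations of the same images, each (M x D)
    z = [F.normalize(v, dim=1) for v in z_views]
    z_bar = F.normalize(torch.stack(z[1:], dim=0).mean(dim=0), dim=1).detach()  # stop gradient
    return -(z[0] * z_bar).sum(dim=1).mean()          # negative cosine similarity

views = [torch.randn(16, 128, requires_grad=True) for _ in range(10)]
print(same_batch_eoa_loss(views))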
A.4 EXPERIMENTAL SETUP AND RESULT TREND FOR TABLE 2.
Mirror SimSiam. Here we provide the pseudocode for Mirror SimSiam, which relates to Fig. 1 (c). Without taking the symmetric loss into account, the pseudocode is shown in Algorithm 1; taking the symmetric loss into account, it is shown in Algorithm 2.
Algorithm 1 Pytorch-like Pseudocode: Mirror SimSiam
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                       # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)          # augmentation
    z_a, z_b = f(x_a), f(x_b)          # projections
    p_b = h(z_b.detach())              # detach z_b but still allow gradient through p_b
    L = D_cosine(z_a, p_b)             # loss
    L.backward()                       # back-propagate
    update(f, h)                       # SGD update

def D_cosine(z, p):                    # negative cosine similarity
    z = normalize(z, dim=1)            # l2-normalize
    p = normalize(p, dim=1)            # l2-normalize
    return -(z * p).sum(dim=1).mean()
Algorithm 2 Pytorch-like Pseudocode: Mirror SimSiam
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                       # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)          # augmentation
    z_a, z_b = f(x_a), f(x_b)          # projections
    p_b = h(z_b.detach())              # detach z_b but still allow gradient through p_b
    p_a = h(z_a.detach())              # detach z_a but still allow gradient through p_a
    L = D_cosine(z_a, p_b) / 2 + D_cosine(z_b, p_a) / 2   # symmetric loss
    L.backward()                       # back-propagate
    update(f, h)                       # SGD update

def D_cosine(z, p):                    # negative cosine similarity
    z = normalize(z, dim=1)            # l2-normalize
    p = normalize(p, dim=1)            # l2-normalize
    return -(z * p).sum(dim=1).mean()
Symmetric Predictor. To implement SimSiam with a Symmetric Predictor as in Fig. 2 (b), we can simply perceive the predictor as part of the new encoder, for which the pseudocode is provided in Algorithm 3. Alternatively, we can additionally train the predictor in the same way as in SimSiam, in which case the training involves two losses, one for training the predictor and another for training the new encoder (the corresponding pseudocode is provided in Algorithm 4). Moreover, for the second implementation, we also experiment with another variant that fixes the predictor while optimizing the new encoder and then trains the predictor alternatingly. All of them lead to collapse with a similar trend as long as the symmetric predictor is used for training the encoder. To avoid redundancy, in Fig. 8 we only report the result of the second implementation.
Result trend. The result trends of SimSiam, Naive Siamese, Mirror SimSiam, and Symmetric Predictor are shown in Fig. 8. We observe that all architectures lead to collapse except for SimSiam. Mirror SimSiam was stopped in the middle because a NaN value was returned from the loss.
A.5 EXPERIMENTAL DETAILS FOR INVERSE PREDICTOR.
In the inverse predictor experiment, which relates to Fig. 2 (c), we introduce a new predictor that has the same structure as the original predictor. The training loss consists of 3 parts: the predictor training loss, the inverse predictor training loss, and the new encoder (old encoder + predictor) training loss. The new
Algorithm 3 Pytorch-like Pseudocode: Symmetric Predictor
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                       # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)          # augmentation
    z_a, z_b = f(x_a), f(x_b)          # projections
    p_a, p_b = h(z_a), h(z_b)          # predictions
    L = D(p_a, p_b) / 2 + D(p_b, p_a) / 2   # symmetric loss
    L.backward()                       # back-propagate
    update(f, h)                       # SGD update

def D(p, z):                           # negative cosine similarity with stop gradient on z
    z = z.detach()                     # stop gradient
    p = normalize(p, dim=1)            # l2-normalize
    z = normalize(z, dim=1)            # l2-normalize
    return -(p * z).sum(dim=1).mean()
Algorithm 4 Pytorch-like Pseudocode: Symmetric Predictor (with additional training on predictor)
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                       # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)          # augmentation
    z_a, z_b = f(x_a), f(x_b)          # projections
    p_a, p_b = h(z_a), h(z_b)          # predictions
    d_p_a, d_p_b = h(z_a.detach()), h(z_b.detach())   # detached predictor output

    # predictor training loss
    L_pred = D(d_p_a, z_b) / 2 + D(d_p_b, z_a) / 2

    # encoder training loss
    L_enc = D(p_a, d_p_b) / 2 + D(p_b, d_p_a) / 2

    L = L_pred + L_enc
    L.backward()                       # back-propagate
    update(f, h)                       # SGD update

def D(p, z):                           # negative cosine similarity with detach on z
    z = z.detach()                     # stop gradient
    p = normalize(p, dim=1)            # l2-normalize
    z = normalize(z, dim=1)            # l2-normalize
    return -(p * z).sum(dim=1).mean()
encoder F consists of the old encoder f + predictor h. The practice of gradient stop needs to be considered in the implementation. We provide the pseudocode in Algorithm 5.
Algorithm 5 Pytorch-like Pseudocode: Trainable Inverse Predictor
# f: encoder (backbone + projector)
# h: predictor
# h_inv: inverse predictor

for x in loader:                       # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)          # augmentation
    z_a, z_b = f(x_a), f(x_b)          # projections
    p_a, p_b = h(z_a), h(z_b)          # predictions
    d_p_a, d_p_b = h(z_a.detach()), h(z_b.detach())   # detached predictor output

    # predictor training loss (to train h)
    L_pred = D(d_p_a, z_b) / 2 + D(d_p_b, z_a) / 2

    inv_p_a, inv_p_b = h_inv(p_a.detach()), h_inv(p_b.detach())   # to train h_inv
    # inverse predictor training loss
    L_inv_pred = D(inv_p_a, z_a) / 2 + D(inv_p_b, z_b) / 2

    # encoder training loss
    L_enc = D(p_a, h_inv(p_b)) / 2 + D(p_b, h_inv(p_a)) / 2

    L = L_pred + L_inv_pred + L_enc
    L.backward()                       # back-propagate
    update(f, h, h_inv)                # SGD update

def D(p, z):                           # negative cosine similarity with detach on z
    z = z.detach()                     # stop gradient
    p = normalize(p, dim=1)            # l2-normalize
    z = normalize(z, dim=1)            # l2-normalize
    return -(p * z).sum(dim=1).mean()
A.6 REGULARIZATION LOSS
Following Zbontar et al. (2021), we compute the covariance regularization loss of the encoder output along the mini-batch dimension. The pseudocode for the de-correlation loss calculation is given in Algorithm 6.
Algorithm 6 Pytorch-like Pseudocode: De-correlation loss
# Z_a: representation vectors (N x D)
# N: batch size
# D: the number of dimensions of the representation vector

Z_a = Z_a - Z_a.mean(dim=0)            # center along the batch dimension
cov = Z_a.T @ Z_a / (N - 1)            # covariance matrix (D x D)
diag = torch.eye(D)
loss = cov[~diag.bool()].pow_(2).sum() / D   # penalize off-diagonal entries
A.7 GRADIENT DERIVATION AND TEMPERATURE ANALYSIS FOR INFONCE
With · indicating the cosine similarity between vectors, the InfoNCE loss can be expressed as
$$L_{InfoNCE} = -\log \frac{\exp(Z_a \cdot Z_b/\tau)}{\exp(Z_a \cdot Z_b/\tau) + \sum_{i=1}^{N} \exp(Z_a \cdot Z_i/\tau)} = -\log \frac{\exp(Z_a \cdot Z_b/\tau)}{\sum_{i=0}^{N} \exp(Z_a \cdot Z_i/\tau)}, \qquad (5)$$
where N indicates the number of negative samples and Z0 = Zb for simplifying the notation. By treating Za · Zi as the logit in a normal CE loss, we have the corresponding probability for each negative sample as $\lambda_i = \frac{\exp(Z_a \cdot Z_i/\tau)}{\sum_{i=0}^{N} \exp(Z_a \cdot Z_i/\tau)}$, where i = 0, 1, 2, ..., N and $\sum_{i=0}^{N} \lambda_i = 1$.
The negative gradient of the InfoNCE on the representation Za is shown as
$$
\begin{aligned}
-\frac{\partial L_{InfoNCE}}{\partial Z_a} &= \frac{1}{\tau}(1-\lambda_0)Z_b - \frac{1}{\tau}\sum_{i=1}^{N}\lambda_i Z_i \\
&= \frac{1}{\tau}\Big(Z_b - \sum_{i=0}^{N}\lambda_i Z_i\Big) \\
&= \frac{1}{\tau}\Big(Z_b - \sum_{i=0}^{N}\lambda_i (o_z + r_i)\Big) \\
&= \frac{1}{\tau}\Big(Z_b + \big(-o_z - \sum_{i=0}^{N}\lambda_i r_i\big)\Big) \\
&\propto Z_b + \Big(-o_z - \sum_{i=0}^{N}\lambda_i r_i\Big), \qquad (6)
\end{aligned}
$$
where $\frac{1}{\tau}$ can be absorbed into the learning rate and is omitted for simplicity of discussion. With Zb as the basic gradient, $G_e = -o_z - \sum_{i=0}^{N}\lambda_i r_i$, for which oe = −oz and $r_e = -\sum_{i=0}^{N}\lambda_i r_i$. When the temperature is set to a large value, $\lambda_i = \frac{\exp(Z_a \cdot Z_i/\tau)}{\sum_{i=0}^{N}\exp(Z_a \cdot Z_i/\tau)}$ approaches $\frac{1}{N+1}$, indicated by a high entropy value (see Fig. 7), and InfoNCE degenerates to a simple contrastive loss, i.e. $L_{simple} = -Z_a \cdot Z_b + \frac{1}{N+1}\sum_{i=0}^{N} Z_a \cdot Z_i$, which repulses every negative sample with an equal force. In contrast, a relatively smaller temperature gives more relative weight, i.e. larger λi, to negative samples that are more similar to the anchor Za.
The influence of the temperature on the covariance and accuracy is shown in Fig. 7 (b) and (c). We observe that a higher temperature tends to decrease the effect of de-correlation, indicated by a higher covariance value, which also leads to a performance drop. This verifies our hypothesis regarding how re in InfoNCE achieves de-correlation, because a large temperature causes more balanced weights λi, which is found to weaken the effect of de-correlation. For the setup, we note that the encoder is trained for 200 epochs with the default setting in solo-learn for the SimCLR framework.
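The sketch below (ours, with toy random representations) illustrates the relationship between the temperature and the entropy of the λi weights discussed above.

import torch
import torch.nn.functional as F

Z_a = F.normalize(torch.randn(128), dim=0)
Z = F.normalize(torch.randn(257, 128), dim=1)       # positive + 256 negatives

for tau in [0.07, 0.1, 0.2, 0.5, 1.0]:
    lam = (Z @ Z_a / tau).softmax(dim=0)
    entropy = -(lam * lam.clamp_min(1e-12).log()).sum()
    print(f"tau={tau}: entropy of lambda = {entropy.item():.3f}")  # larger tau -> more uniform weights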
A.8 THEORETICAL DERIVATION FOR A SINGLE BIAS LAYER
With the cosine similarity defined in Eq 7 and its gradient in Eq 8:
$$\mathrm{cossim}(a, b) = \frac{a \cdot b}{\|a\| \cdot \|b\|}, \qquad (7)$$
for which the derived gradient on the vector a is
$$\frac{\partial}{\partial a} \mathrm{cossim}(a, b) = \frac{b}{\|a\| \cdot \|b\|} - \mathrm{cossim}(a, b) \cdot \frac{a}{\|a\|^2}. \qquad (8)$$
The above equations serve as a prior for the following derivations. As indicated in the main manuscript, the encoder output za is l2-normalized before being fed into the predictor, thus pa = Za + bp, where bp denotes the bias layer in the predictor. The cosine similarity loss (ignoring the symmetry for simplicity) is
$$L_{cosine} = -P_a \cdot Z_b = -\frac{p_a}{\|p_a\|} \cdot \frac{z_b}{\|z_b\|}. \qquad (9)$$
The gradient on pa is derived as
$$
\begin{aligned}
-\frac{\partial L_{cosine}}{\partial p_a} &= \frac{z_b}{\|z_b\| \cdot \|p_a\|} - \mathrm{cossim}(Z_a, Z_b) \cdot \frac{p_a}{\|p_a\|^2} \\
&= \frac{1}{\|p_a\|}\Big(\frac{z_b}{\|z_b\|} - \mathrm{cossim}(Z_a, Z_b) \cdot P_a\Big) \\
&= \frac{1}{\|p_a\|}\Big(Z_b - \mathrm{cossim}(Z_a, Z_b) \cdot \frac{Z_a + b_p}{\|p_a\|}\Big) \\
&= \frac{1}{\|p_a\|}\Big((o_z + r_b) - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|} \cdot (o_z + r_a + b_p)\Big) \\
&= \frac{1}{\|p_a\|}\big((o_z + r_b) - m \cdot (o_z + r_a + b_p)\big) \\
&= \frac{1}{\|p_a\|}\big((1-m)\,o_z - m\,b_p + r_b - m\,r_a\big), \qquad (10)
\end{aligned}
$$
where $m = \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}$.
Given that pa = Za + bp, the negative gradient on bp is the same as that on pa as
$$-\frac{\partial L_{cosine}}{\partial b_p} = -\frac{\partial L_{cosine}}{\partial p_a} = \frac{1}{\|p_a\|}\big((1-m)\,o_z - m\,b_p + r_b - m\,r_a\big). \qquad (11)$$
We assume that the training is stable and the bias layer converges to a certain value when the gradient in Eq 11 vanishes, i.e. $-\frac{\partial L_{cosine}}{\partial b_p} = 0$. Thus, the converged bp satisfies the following constraint:
$$\frac{1}{\|p_a\|}\big((1-m)\,o_z - m\,b_p + r_b - m\,r_a\big) = 0 \quad\Longrightarrow\quad b_p = \frac{1-m}{m}\,o_z + \frac{1}{m}\,r_b - r_a. \qquad (12)$$
With a batch of samples, the averages of $\frac{1}{m} r_b$ and $r_a$ are expected to be close to 0 by the definition of the residual vector. Thus, the bias layer vector is expected to converge to:
$$b_p = \frac{1-m}{m}\,o_z. \qquad (13)$$
Rationale behind the high similarity between bp and oz. The above theoretical derivation shows that the parameters in the bias layer are expected to converge to the vector $\frac{1-m}{m}\,o_z$. This justifies why the empirically observed cosine similarity between bp and oz is as high as 0.99. Ideally, it should be 1; however, such a small deviation is expected once the training dynamics are taken into account.
Rationale behind how a single bias layer prevents collapse. Given that pa = Za + bp, the negative gradient on Za is
$$
\begin{aligned}
-\frac{\partial L_{cosine}}{\partial Z_a} &= -\frac{\partial L_{cosine}}{\partial p_a} \\
&= \frac{1}{\|p_a\|}\Big(Z_b - \mathrm{cossim}(Z_a, Z_b) \cdot \frac{Z_a + b_p}{\|p_a\|}\Big) \\
&= \frac{1}{\|p_a\|} Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2} Z_a - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2} b_p. \qquad (14)
\end{aligned}
$$
Here, we highlight that since the loss $-Z_a \cdot Z_a = -1$ is a constant with zero gradient on the encoder, $-\frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2} Z_a$ can be seen as a dummy term. Considering Eq 13 and $m = \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}$, we have $b_p = \big(\frac{\|p_a\|}{\mathrm{cossim}(Z_a, Z_b)} - 1\big)\,o_z$. The above equation is then equivalent to
$$
\begin{aligned}
-\frac{\partial L_{cosine}}{\partial Z_a} &= \frac{1}{\|p_a\|} Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2} b_p \\
&= \frac{1}{\|p_a\|} Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2}\Big(\frac{\|p_a\|}{\mathrm{cossim}(Z_a, Z_b)} - 1\Big) o_z \\
&= \frac{1}{\|p_a\|} Z_b - \frac{1}{\|p_a\|}\Big(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\Big) o_z \\
&\propto Z_b - \Big(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\Big) o_z. \qquad (15)
\end{aligned}
$$
With Zb as the basic gradient, the extra gradient component is $G_e = -\big(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\big)\,o_z$. Given that pa = Za + bp and ‖Za‖ = 1, ‖pa‖ < 1 only when Za is negatively correlated with bp. In practice, however, Za and bp are often positively correlated to some extent due to their shared center vector component; in other words, ‖pa‖ > 1. Moreover, cossim(Za, Zb) is smaller than 1, thus $-\big(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\big) < 0$, suggesting Ge consists of negative oz with the effect of de-centering. This derivation justifies why a single bias layer can help alleviate collapse.
B DISCUSSION: DOES BN HELP AVOID COLLAPSE?
To our knowledge, our work is the first to revisit and refute the explanatory claims in (Chen & He, 2021). Several works, however, have attempted to demystify the success of BYOL (Grill et al., 2020), a close variant of SimSiam. The success has been ascribed to BN in (Fetterman & Albrecht, 2020); however, (Richemond et al., 2020) refutes their claim. Since the role of intermediate BNs is ascribed to stabilizing training (Richemond et al., 2020; Chen & He, 2021), we only discuss the final BN in the SimSiam encoder. Note that with our Conjecture1, the final BN, which removes the mean of the representation vector, is supposed to have a de-centering effect. By default SimSiam has such a BN at the end of its encoder; however, it still collapses when the predictor and stop gradient are removed. Why would such a BN not prevent collapse in this case? Interestingly, we observe that such a BN can help alleviate collapse with a simple MSE loss (see Fig. 9); however, its performance is inferior to the cosine loss-based SimSiam (with predictor and stop gradient) due to the lack of the de-correlation effect of SimSiam. Note that the cosine loss is in essence equivalent to an MSE loss on the l2-normalized vectors. This phenomenon can be interpreted as the l2-normalization introducing another mean after the BN removes it. Thus, with such l2-normalization in the MSE loss, i.e. adopting the default cosine loss, it is important to remove the oe from the optimization target. The results with the loss −Za · sg(Zb + oe) in Table 3 show that this indeed prevents collapse and verifies the above interpretation. | 1. What is the main contribution of the paper regarding SimSiam's ability to avoid collapse?
2. What are the strengths of the paper, particularly in its empirical evidence and connections to contrastive methods?
3. Do you have any questions or concerns regarding the paper's notation, particularly in the analysis of negative examples and predictors?
4. How does the reviewer assess the significance and importance of the research topic, as well as the paper's potential impact on understanding existing methods?
5. Are there any suggestions for improving the paper's organization, such as making the second part more self-contained and moving the SimSiam explanation revisit part to the appendix?
6. Is there any discussion on the limitations of the paper's findings, specifically regarding the placement of the stop gradient and the potential for collapse? | Summary Of The Paper
Review | Summary Of The Paper
As the title suggests, the paper does a detailed investigation of how SimSiam avoids collapse without negative training examples. The key idea is to decompose the original vector into a center vector component and a residual vector component. The center vector cannot be too large (otherwise it indicates collapse). The high-level idea is that the designs in SimSiam (and contrastive frameworks that have InfoNCE loss) are mainly to prevent the center vector from getting too large. There are conjectures (verified empirically) about the relationship between the gradient w.r.t. the center vector and the gradient w.r.t. the residual vector, and with these the paper finds that for SimSiam, the predictor is important for preventing collapse, particularly by doing de-correlation among features.
Review
The topic of the research is of significance, as it is important to understand why designs like SimSiam/BYOL do not collapse. The paper attempts to get more insights into it with empirical evidence, and also shows the potential of connecting to contrastive methods (via the InfoNCE loss). This is a great step toward better understanding of existing methods.
I like the style of providing conjectures and showing results that empirically verify them. The experimental designs are quite solid and insightful.
Paper writing: I am a bit lost when reading the second part of the paper about the center vector and residual ones. I think it is good to have a clear definition for X in "centering w.r.t. X". Is X the entire image set and all possible augmentations? Is X just for the current image? Or is X for the current batch? Also I think this center vector must be approximate (e.g., taking multiple crops to get the average), not fully exact. So it would be good to have some notational clarifications on this. Particularly when introducing the analysis on negative examples and predictor, the notation becomes quite messy and hard to follow.
About Fig 1 (c): if the stop gradient is applied this way, then the predictor is NOT going to be trained at all and would naturally lead to collapse, to me. I think the better way is to have the stop gradient between the predictor and the encoder (so at the place of z_b). At least this way all the parameters in the framework are being trained. I do think even this placement will lead to collapse.
For the part that revisits SimSiam's explanation: while it is great to have this part, to me it is not the essential part of the paper (the second part is). So I think it would be good to make the second part more self-contained, with richer set of experiments, and move the SimSiam explanation revisit part to appendix.
Minor for the text that leads to Eq (2): I think symmetric loss is not the essential reason why SimSiam does not collapse, so it would be good to just focus on the asymmetric architecture (like SimSiam's teaser figure) and loss for the current analysis. It would also reduce the confusion.
Experimentation-wise, it is good that the current paper provides a good explanation, but performance-wise the theoretical insights have not led to better results. It is less important for a paper like this, but calling the last section "SimSiam++" without numbers/speeds that actually outperform SimSiam is strange to me.
ICLR | Title
How Does SimSiam Avoid Collapse Without Negative Samples? A Unified Understanding with Self-supervised Contrastive Learning
Abstract
To avoid collapse in self-supervised learning (SSL), a contrastive loss is widely used but often requires a large number of negative samples. Without negative samples yet achieving competitive performance, a recent work (Chen & He, 2021) has attracted significant attention for providing a minimalist simple Siamese (SimSiam) method to avoid collapse. However, the reason for how it avoids collapse without negative samples remains not fully clear and our investigation starts by revisiting the explanatory claims in the original SimSiam. After refuting their claims, we introduce vector decomposition for analyzing the collapse based on the gradient analysis of the l2-normalized representation vector. This yields a unified perspective on how negative samples and SimSiam alleviate collapse. Such a unified perspective comes timely for understanding the recent progress in SSL.
1 INTRODUCTION
Beyond the success of NLP (Lan et al., 2020; Radford et al., 2019; Devlin et al., 2019; Su et al., 2020; Nie et al., 2020), self-supervised learning (SSL) has also shown its potential in the field of vision tasks (Li et al., 2021; Chen et al., 2021; El-Nouby et al., 2021). Without the ground-truth label, the core of most SSL methods lies in learning an encoder with augmentation-invariant representation (Bachman et al., 2019; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Grill et al., 2020). Specifically, they often minimize the representation distance between two positive samples, i.e. two augmented views of the same image, based on a Siamese network architecture (Bromley et al., 1993). It is widely known that for such Siamese networks there exists a degenerate solution, i.e. all outputs “collapsing” to an undesired constant (Chen et al., 2020a; Chen & He, 2021). Early works have attributed the collapse to lacking a repulsive component in the optimization goal and adopted contrastive learning (CL) with negative samples, i.e. views of different samples, to alleviate this problem. Introducing momentum into the target encoder, BYOL shows that Siamese architectures can be trained with only positive pairs. More recently, SimSiam (Chen & He, 2021) has caught great attention by further simplifying BYOL by removing the momentum encoder, which has been seen as a major milestone achievement in SSL for providing a minimalist method for achieving competitive performance. However, more investigation is required for the following question:
How does SimSiam avoid collapse without negative samples?
Our investigation starts with revisiting the explanatory claims in the original SimSiam paper (Chen & He, 2021). Notably, two components, i.e. stop gradient and predictor, are essential for the success of SimSiam (Chen & He, 2021). The reason has been mainly attributed to the stop gradient (Chen & He, 2021) by hypothesizing that it implicitly involves two sets of variables and SimSiam behaves like alternating between optimizing each set. Chen & He argue that the predictor h is helpful in SimSiam because h fills the gap to approximate expectation over augmentations (EOA).
Unfortunately, the above explanatory claims are found to be flawed due to reversing the two paths with and without gradient (see Sec. 2.2). This motivates us to find an alternative explanation, for which we introduce a simple yet intuitive framework for facilitating the analysis of collapse in SSL.
∗equal contribution
Specifically, we propose to decompose a representation vector into center and residual components. This decomposition facilitates understanding which gradient component is beneficial for avoiding collapse. Under this framework, we show that a basic Siamese architecture cannot prevent collapse, for which an extra gradient component needs to be introduced. With SimSiam interpreted as processing the optimization target with an inverse predictor, the analysis of its extra gradient shows that (a) its center vector helps prevent collapse via the de-centering effect; (b) its residual vector achieves dimensional de-correlation which also alleviates collapse.
Moreover, under the same gradient decomposition, we find that the extra gradient caused by negative samples in InfoNCE (He et al., 2019; Chen et al., 2020b;a; Tian et al., 2019; Khosla et al., 2020) also achieves de-centering and de-correlation in the same manner. It contributes to a unified understanding of various frameworks in SSL, which also inspires the investigation of hardness-awareness Wang & Liu (2021) from the inter-anchor perspective Zhang et al. (2022) for further bridging the gap between CL and non-CL frameworks in SSL. Finally, simplifying the predictor for a more explainable SimSiam, we show that a single bias layer is sufficient for preventing collapse.
The basic experimental settings for our analysis are detailed in Appendix A.1 with a more specific setup discussed in the context. Overall, our work is the first attempt for performing a comprehensive study on how SimSiam avoids collapse without negative samples. Several works, however, have attempted to demystify the success of BYOL (Grill et al., 2020), a close variant of SimSiam. A technical report (Fetterman & Albrecht, 2020) has suggested the importance of batch normalization (BN) in BYOL for its success, however, a recent work (Richemond et al., 2020) refutes their claim by showing BYOL works without BN, which is discussed in Appendix B.
2 REVISITING SIMSIAM AND ITS EXPLANATORY CLAIMS
l2-normalized vector and optimization goal. SSL trains an encoder f for learning discriminative representation and we denote such representation as a vector z, i.e. f(x) = z where x is a certain input. For the augmentation-invariant representation, a straightforward goal is to minimize the distance between the representations of two positive samples, i.e. augmented views of the same image, for which mean squared error (MSE) is a default choice. To avoid scale ambiguity, the vectors are often l2-normalized, i.e. Z = z/||z|| (Chen & He, 2021), before calculating the MSE:
$$L_{MSE} = (Z_a - Z_b)^2/2 - 1 = -Z_a \cdot Z_b = L_{cosine}, \qquad (1)$$
which shows the equivalence of a normalized MSE loss to the cosine loss (Grill et al., 2020).
Collapse in SSL and solution of SimSiam. Based on a Siamese architecture, the loss in Eq 1 causes the collapse, i.e. f always outputs a constant regardless of the input variance. We refer to this Siamese architecture with loss Eq 1 as Naive Siamese in the remainder of paper. Contrastive loss with negative samples is a widely used solution (Chen et al., 2020a). Without using negative samples, SimSiam solves the collapse problem via predictor and stop gradient, based on which the encoder is optimized with a symmetric loss:
$$L_{SimSiam} = -(P_a \cdot sg(Z_b) + P_b \cdot sg(Z_a)), \qquad (2)$$
where sg(·) is the stop gradient and P is the l2-normalized output of the predictor h, i.e. p = h(z) and P = p/||p||.
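The following is a minimal sketch (ours, mirroring Eq 2; function names are hypothetical) of the SimSiam training loss with predictor and stop gradient, written in the style of the pseudocode used in the appendix.

import torch
import torch.nn.functional as F

def neg_cosine(p, z):
    z = z.detach()                       # stop gradient on the target branch
    p = F.normalize(p, dim=1)
    z = F.normalize(z, dim=1)
    return -(p * z).sum(dim=1).mean()

def simsiam_loss(f, h, x_a, x_b):
    z_a, z_b = f(x_a), f(x_b)            # encoder (backbone + projector) outputs
    p_a, p_b = h(z_a), h(z_b)            # predictor outputs
    return neg_cosine(p_a, z_b) / 2 + neg_cosine(p_b, z_a) / 2   # symmetric loss of Eq 2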
2.1 REVISING EXPLANATORY CLAIMS IN SIMSIAM
Interpreting stop gradient as AO. Chen & He hypothesize that the stop gradient in Eq 2 is an implementation of Alternating between the Optimization of two sub-problems, which is denoted as AO. Specifically, with the loss considered as $L(\theta, \eta) = E_{x,T}\big[\|F_\theta(T(x)) - \eta_x\|^2\big]$, the optimization objective $\min_{\theta,\eta} L(\theta, \eta)$ can be solved by alternating ηt ← arg minη L(θt, η) and θt+1 ← arg minθ L(θ, ηt). It is acknowledged that this hypothesis does not fully explain why the collapse is prevented (Chen & He, 2021). Nonetheless, they mainly attribute SimSiam's success to the stop gradient with the interpretation that AO might make it difficult to approach a constant ∀x. Interpreting predictor as EOA. The AO problem (Chen & He, 2021) is formulated independently of the predictor h, for which they believe that the usage of the predictor h is related to approximating EOA, filling the gap of ignoring ET [·] in a sub-problem of AO. The approximation of ET [·] is summarized
in Appendix A.2. Chen & He support their interpretation by proof-of-concept experiments. Specifically, they show that updating ηx with a moving-average ηtx ← m ∗ ηtx + (1 −m) ∗ Fθt(T ′(x)) can help prevent collapse without predictor (see Fig. 1 (b)). Given that the training completely fails when the predictor and moving average are both removed, at first sight, their reasoning seems valid.
2.2 DOES THE PREDICTOR FILL THE GAP TO APPROXIMATE EOA?
Reasoning flaw. Considering the stop gradient, we divide the framework into two sub-models with different paths and term them Gradient Path (GP) and Stop Gradient Path (SGP). For SimSiam, only the sub-model with GP includes the predictor (see Fig. 1 (a)). We point out that their reasoning flaw of predictor analysis lies in the reverse of GP and SGP. By default, the moving-average sub-model, as shown in Fig. 1 (b), is on the same side as SGP. Note that Fig. 1 (b) is conceptually similar to Fig. 1 (c) instead of Fig. 1 (a). It is worth mentioning that the Mirror SimSiam in Fig. 1 (c) is what stop gradient in the original SimSiam avoids. Therefore, it is problematic to perceive h as EOA.
Explicit EOA does not prevent collapse. (Chen & He, 2021) points out that “in practice, it would be unrealistic to actually compute the expectation ET [·]. But it may be possible for a neural network (e.g., the predictor h) to learn to predict the expectation, while the sampling of T is implicitly distributed across multiple epochs.” If implicitly sampling across multiple epochs is beneficial, explicitly sampling a sufficiently large number N of augmentations in a batch with the latest model would be more beneficial for approximating ET [·]. However, Table 1 shows that the collapse still occurs, suggesting that the equivalence between predictor and EOA does not hold.
2.3 ASYMMETRIC INTERPRETATION OF PREDICTOR WITH STOP GRADIENT IN SIMSIAM
Symmetric Predictor does not prevent collapse. The difference between Naive Siamese and SimSiam lies in whether the gradient in backward propagation flows through a predictor; however, we show that this propagation helps avoid collapse only when the predictor is not included in the SGP path. With h being trained the same as in Eq 2, we optimize the encoder f by replacing the Z in Eq 2 with P. The results in Table 2 show that it still leads to collapse. Actually, this is well expected by perceiving h to be part of the new encoder F, i.e. p = F(x) = h(f(x)). In other words, the symmetric architectures with and without predictor h both lead to collapse.
Predictor with stop gradient is asymmetric. Clearly, how SimSiam avoids collapse lies in its asymmetric architecture, i.e. one path with h and the other without h. Under this asymmetric architecture, the role of stop gradient is to only allow the path with predictor to be optimized with the encoder output as the target, not vice versa. In other words, the SimSiam avoids collapse by excluding Mirror SimSiam (Fig. 1 (c)) which has a loss (mirror-like Eq 2) asLMirror = −(Pa ·Zb+Pb ·Za), where stop gradient is put on the input of h, i.e. pa = h(sg[za]) and pb = h(sg[zb]).
Predictor vs. inverse predictor. We interpret h as a function mapping from z to p, and introduce a conceptual inverse mapping h−1, i.e. z = h−1(p). Here, as shown in Table 2, SimSiam with symmetric predictor (Fig. 2 (b)) leads to collapse, while SimSiam (Fig. 1 (a)) avoids collapse. With the conceptual h−1, we interpret Fig. 1 (a) the same as Fig. 2 (c) which differs from Fig. 2 (b) via changing the optimization target from pb to zb, i.e. zb = h−1(pb). This interpretation
suggests that the collapse can be avoided by processing the optimization target with h−1. By contrast, Fig. 1 (c) and Fig. 2 (a) both lead to collapse, suggesting that processing the optimization target with h is not beneficial for preventing collapse. Overall, asymmetry alone does not guarantee collapse avoidance, which requires the optimization target to be processed by h−1 not h.
Trainable inverse predictor and its implication on EOA. In the above, we propose a conceptual inverse predictor h−1 in Fig. 2 (c), however, it remains yet unknown whether such an inverse predictor is experimentally trainable. A detailed setup for this investigation is reported in Appendix A.5. The results in Fig. 3 show that a learnable h−1 leads to slightly inferior performance, which is expected because h−1 cannot make the trainable inverse predictor output z∗b completely the same as zb. Note that it would be equivalent to SimSiam if z∗b = zb. Despite a slight performance drop, the results confirm that h−1 is trainable. The fact that h−1 is trainable provides additional evidence that the role h plays in SimSiam is not EOA
because theoretically h−1 cannot restore a random augmentation T ′ from an expectation p, where p = h(z) = ET [ Fθt(T (x)) ] .
3 VECTOR DECOMPOSITION FOR UNDERSTANDING COLLAPSE
By default, InfoNCE (Chen et al., 2020a) and SimSiam (Chen & He, 2021) both adopt l2-normalization in their loss for avoiding scale ambiguity. We treat the l2-normalized vector, i.e. Z, as the encoder output, which significantly simplifies gradient derivation and the following analysis.
Vector decomposition. For the purpose of analysis, we propose to decompose Z into two parts, Z = o + r, where o, r denote the center vector and residual vector respectively. Specifically, the center vector o is defined as an average of Z over the whole representation space, oz = E[Z]. However, we approximate it with all vectors in the current mini-batch, i.e. $o_z = \frac{1}{M}\sum_{m=1}^{M} Z_m$, where M is the mini-batch size. We define the residual vector r as the residual part of Z, i.e. r = Z − oz.
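A minimal sketch (ours) of this decomposition with the mini-batch approximation of the center vector is shown below.

import torch
import torch.nn.functional as F

Z = F.normalize(torch.randn(256, 2048), dim=1)   # l2-normalized representations, M x D
o_z = Z.mean(dim=0, keepdim=True)                # center vector, approximated over the mini-batch
r = Z - o_z                                      # residual vectors
print(r.mean(dim=0).abs().max())                 # residuals average to ~0 over the batch by construction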
3.1 COLLAPSE FROM THE VECTOR PERSPECTIVE
Collapse: from result to cause. A Naive Siamese is well expected to collapse since the loss is designed to minimize the distance between positive samples, for which a constant constitutes an optimal solution to minimize such loss. When the collapse occurs, $\forall i,\; Z_i = \frac{1}{M}\sum_{m=1}^{M} Z_m = o_z$, where i denotes a random sample index, which shows the constant vector is oz in this case. This interpretation only suggests a possibility that a dominant o can be one of the viable solutions, while the optimization, such as SimSiam, might still lead to a non-collapse solution. This merely describes o as the consequence of the collapse, and our work investigates the cause of such collapse through analyzing the influence of the individual gradient components, i.e. o and r, during training.
Competition between o and r. Complementary to the Standard Deviation (Std) (Chen & He, 2021) for indicating collapse, we introduce the ratio of o in z, i.e. mo = ||o||/||z||, where || ∗ || is the L2 norm. Similarly, the ratio of r in z is defined as mr = ||r||/||z||. When collapse happens, i.e. all vectors Z are close to the center vector o, mo approaches 1 and mr approaches 0, which is not desirable for SSL. A desirable case would be a relatively small mo and a relatively large mr, suggesting a relatively small (large) contribution of o (r) in each Z. We interpret the cause of collapse as a competition between o and r where o dominates over r, i.e. mo ≫ mr. For Eq 1, the derived negative gradient on Za (ignoring Zb for simplicity due to symmetry) is:
$$G_{cosine} = -\frac{\partial L_{MSE}}{\partial Z_a} = Z_b - Z_a \iff -\frac{\partial L_{cosine}}{\partial Z_a} = Z_b, \qquad (3)$$
where the gradient component Za is a dummy term because the loss −Za · Za = −1 is a constant having zero gradient on the encoder f .
Conjecture1. With Za = oz + ra, we conjecture that the gradient component of oz is expected to update the encoder to boost the center vector thus increasemo, while the gradient component of ra is expected to behave in the opposite direction to increase mr. A random gradient component is expected to have a relatively small influence.
To verify the above conjecture, we revisit the dummy gradient term Za. We design the losses −Za · sg(oz) and −Za · sg(Za − oz) to show the influence of the gradient components oz and ra respectively. The results in Fig. 4 show that the gradient component oz has the effect of increasing mo while decreasing mr. On the
contrary, ra helps increase mr while decreasing mo. Overall, the results verify Conjecture1.
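A minimal sketch (ours) of the two probe losses used in this verification is shown below.

import torch
import torch.nn.functional as F

def center_probe_loss(Z):                     # -Z_a . sg(o_z): expected to increase m_o
    Z_n = F.normalize(Z, dim=1)
    o_z = Z_n.mean(dim=0, keepdim=True).detach()
    return -(Z_n * o_z).sum(dim=1).mean()

def residual_probe_loss(Z):                   # -Z_a . sg(Z_a - o_z): expected to increase m_r
    Z_n = F.normalize(Z, dim=1)
    r = (Z_n - Z_n.mean(dim=0, keepdim=True)).detach()
    return -(Z_n * r).sum(dim=1).mean()

Z = torch.randn(64, 128, requires_grad=True)
print(center_probe_loss(Z), residual_probe_loss(Z))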
3.2 EXTRA GRADIENT COMPONENT FOR ALLEVIATING COLLAPSE
Revisit collapse in a symmetric architecture. Based on Conjecture1, here we provide an intuitive interpretation of why a symmetric Siamese architecture, such as Fig. 2 (a) and (b), cannot be trained without collapse. Take Fig. 2 (a) as an example: the gradient in Eq 3 can be interpreted in two equivalent forms, from which we choose Zb − Za = (oz + rb) − (oz + ra) = rb − ra. Since rb comes from the same positive sample as ra, it is expected that rb also increases mr; however, this effect is expected to be smaller than that of ra, thus causing collapse.
Basic gradient and Extra gradient components. The negative gradient on Za in Fig. 2 (a) is derived as Zb, while that on Pa in Fig. 2 (b) is derived as Pb. We perceive Zb and Pb in these basic Siamese architectures as the Basic Gradient. Our above interpretation shows that such basic components cannot prevent collapse, for which an Extra Gradient component, denoted as Ge, needs to be introduced to break the symmetry. As the term suggests, Ge is defined as a gradient term that is relative to the basic gradient in a basic Siamese architecture. For example, negative samples can be introduced to Naive Siamese (Fig. 2 (a)) for preventing collapse, where the extra gradient caused by negative samples can thus be perceived as Ge with Zb as the basic gradient. Similarly, we can also disentangle the negative gradient on Pa in SimSiam (Fig. 1 (a)), i.e. Zb, into a basic gradient (which is Pb) and Ge which is derived as Zb −Pb (note that Zb = Pb + Ge). We analyze how Ge prevents collapse via studying the independent roles of its center vector oe and residual vector re.
3.3 A TOY EXAMPLE EXPERIMENT WITH NEGATIVE SAMPLE
Which repulsive component helps avoid collapse? Existing works often attribute the collapse in Naive Siamese to lacking a repulsive part during the optimization. This explanation has motivated previous works to adopt contrastive learning, i.e. attracting the positive samples while repulsing the negative samples. We experiment with a simple triplet loss1, Ltri = −Za·sg(Zb − Zn), where Zn indicates the representation of a Negative sample. The derived negative gradient on Za is Zb − Zn, where Zb is the basic gradient component and thus Ge = −Zn in this setup. For a sample representation, what determines it as a positive sample for attracting or a negative sample for repulsing is the residual component, thus it might be tempting to interpret that re is the key component of the repulsive part that avoids the collapse. However, the results in Table 3 show that the component beneficial for preventing collapse inside Ge is oe instead of re. Specifically, to explore the individual influence of oe and re in Ge, we design two experiments by removing one component while keeping the other one. In the first experiment, we remove the re in Ge while keeping the oe. By contrast, the oe is removed while keeping the re in the second experiment. In contrast to what existing explanations may expect, we find that it is the center component oe that prevents collapse. With Conjecture1, a gradient component alleviates collapse if it has a negative center vector. In this setup, oe = −oz, thus oe has the de-centering role for preventing collapse. On the contrary, re does not prevent collapse and keeping re even decreases the performance (36.21% < 47.41%). Since the negative sample is randomly chosen, re just behaves like random noise in the optimization, decreasing performance.
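The following is a minimal sketch (ours, with hypothetical names) of this toy triplet loss and the oe / re ablation just described.

import torch
import torch.nn.functional as F

def triplet_loss(Z_a, Z_b, Z_n, keep_oe=True, keep_re=True):
    # L_tri = -Z_a . sg(Z_b - Z_n); the extra gradient is G_e = -Z_n = -(o_z + r_n)
    o_z = Z_n.mean(dim=0, keepdim=True)
    r_n = Z_n - o_z
    G_e = (-o_z if keep_oe else 0) + (-r_n if keep_re else 0)
    target = (Z_b + G_e).detach()          # stop gradient on the optimization target
    return -(Z_a * target).sum(dim=1).mean()

Z_a = F.normalize(torch.randn(64, 128, requires_grad=True), dim=1)
Z_b = F.normalize(torch.randn(64, 128), dim=1)
Z_n = F.normalize(torch.randn(64, 128), dim=1)
print(triplet_loss(Z_a, Z_b, Z_n, keep_oe=True, keep_re=False))  # de-centering-only variant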
3.4 DECOMPOSED GRADIENT ANALYSIS IN SIMSIAM
It is challenging to derive the gradient on the encoder output in SimSiam due to a nonlinear MLP module in h. The negative gradient on Pa for LSimSiam in Eq 2 can be derived as
$$G_{SimSiam} = -\frac{\partial L_{SimSiam}}{\partial P_a} = Z_b = P_b + (Z_b - P_b) = P_b + G_e, \qquad (4)$$
oe   re   Collapse   Top-1 (%)
X    X    ×          66.62
X    ×    ×          48.08
×    X    ×          66.15
×    ×    X          1

Table 4: Gradient component analysis for SimSiam.
where Ge indicates the aforementioned extra gradient component. To investigate the influence of oe and re on the collapse, similar to the analysis with the toy example experiment in Sec. 3.3, we design the experiment by removing one component while keeping the other. The results are reported in Table 4. As expected, the model collapses when both components in Ge are removed and the best performance is achieved when both components are kept. Interestingly, the model does not collapse when
either oe or re is kept. To start, we analyze how oe affects the collapse based on Conjecture1.
How oe alleviates collapse in SimSiam. Here, op is used to denote the center vector of P to differentiate it from the above-introduced oz denoting that of Z. In this setup Ge = Zb − Pb, thus the center vector component is derived to be oe = oz − op. With Conjecture1, it is well expected that oe helps prevent collapse if oe contains negative op, since the analyzed vector is Pa. To determine the amount of the component op existing in oe, we measure the cosine similarity between oe − ηpop and op for a wide range of ηp. The results in Fig. 5 (a) show that their cosine similarity is zero when ηp is around −0.5, suggesting oe has ≈ −0.5op. With Conjecture1, this negative ηp explains why SimSiam avoids collapse from the perspective of de-centering.
How oe causes collapse in Mirror SimSiam. As mentioned above, the collapse occurs in Mirror SimSiam, which can also be explained by analyzing its oe. Here, oe = op − oz , for which we evaluate the amount of component oz existing in oe via reporting the similarity between oe − ηzoz
1Note that the triplet loss here does not have clipping form as in Schroff et al. (2015) for simplicity.
and oz . The results in Fig. 5 (a) show that their cosine similarity is zero when ηz is set to around 0.2. This positive ηz explains why Fig. 1(c) causes collapse from the perspective of de-centering.
Overall, we find that processing the optimization target with h−1, as in Fig. 2 (c), alleviates collapse (ηp ≈ −0.5), while processing it with h, as in Fig. 1(c), actually strengthens the collapse (ηz ≈ 0.2). In other words, via the analysis of oe, our results help explain how SimSiam avoids collapse as well as how Mirror SimSiam causes collapse from a straightforward de-centering perspective.
Relation to prior works. Motivated from preventing the collapse to a constant, multiple prior works, such as W-MSE (Ermolov et al., 2021), Barlow-twins (Zbontar et al., 2021), DINO (Caron et al., 2021), explicitly adopt de-centering to prevent collapse. Despite various motivations, we find that they all implicitly introduce an oe that contains a negative center vector. The success of their approaches aligns well with our Conjecture1 as well as our above empirical results. Based on our findings, we argue that the effect of de-centering can be perceived as oe having a negative center vector. With this interpretation, we are the first to demonstrate that how SimSiam with predictor and stop gradient avoids collapse can be explained from the perspective of de-centering.
Beyond de-centering for avoiding collapse. In the toy example experiment in Sec. 3.3, re is found to be not beneficial for preventing collapse and keeping re even decreases the performance. Interestingly, as shown in Table 4, we find that re alone is sufficient for preventing collapse and achieves comparable performance as Ge. This can be explained from the perspective of dimensional de-correlation, which will be discussed in Sec. 3.5.
3.5 DIMENSIONAL DE-CORRELATION HELPS PREVENT COLLAPSE
Conjecture2 and motivation. We conjecture that dimensional de-correlation increases mr and thereby prevents collapse. The motivation is straightforward. The dimensional correlation would be minimal if, for every individual class, only a single dimension had a very high value and that dimension changed from class to class. In the other extreme case, when all dimensions have the same values, the representation is equivalent to having a single dimension, which already collapses by itself in the sense of losing representation capacity. Conceptually, re has no direct influence on the center vector, thus we interpret that re prevents collapse by increasing mr.
To verify the above conjecture, we train SimSiam normally with the loss in Eq 2 and then train for several epochs with the loss in Eq 1 to intentionally decrease mr to close to zero. Then, we train with only a correlation regularization term, which is detailed in Appendix A.6. The results in Fig. 5 (b) show that this regularization term increases mr at a very fast rate.
Dimensional de-correlation in SimSiam. Assuming h has only a single FC layer, so as to exclude the influence of oe, the weights in the FC layer are expected to learn the correlation between different dimensions of the encoder output. This interpretation echoes the finding that the eigenspace of the h weight matrix aligns well with that of the correlation matrix (Tian et al., 2021). In essence, h is trained to minimize the cosine distance between h(za) and I(zb), where I is the identity mapping. Thus, an h that learns the correlation is optimized to be close to I, which is conceptually equivalent to optimizing with the goal of de-correlation for Z. As shown in Table 4, for SimSiam, re alone also prevents collapse, which is attributed to the de-correlation effect since re has no de-centering effect. We observe from Fig. 6 that, except in the first few epochs, SimSiam decreases the covariance during the whole training. Fig. 6 also reports the results for InfoNCE, which will be discussed in Sec. 4.
4 TOWARDS A UNIFIED UNDERSTANDING OF RECENT PROGRESS IN SSL
De-centering and de-correlation in InfoNCE. The InfoNCE loss is a default choice in multiple seminal contrastive learning frameworks (Sohn, 2016; Wu et al., 2018; Oord et al., 2018; Wang & Liu, 2021). The derived negative gradient of InfoNCE on Za is proportional to $Z_b + \sum_{i=0}^{N} -\lambda_i Z_i$, where $\lambda_i = \frac{\exp(Z_a \cdot Z_i/\tau)}{\sum_{i=0}^{N} \exp(Z_a \cdot Z_i/\tau)}$, and Z0 = Zb for notational simplicity. See Appendix A.7 for the detailed derivation. The extra gradient component is $G_e = \sum_{i=0}^{N} -\lambda_i Z_i = -o_z - \sum_{i=0}^{N} \lambda_i r_i$, for which oe = −oz and $r_e = -\sum_{i=0}^{N} \lambda_i r_i$. Clearly, oe contains negative oz as de-centering for avoiding collapse, which is equivalent to the toy example in Sec. 3.3 when re is removed. Regarding re, the main difference between Ltri in the toy example and InfoNCE is that the latter exploits a batch of negative samples instead of a random one. λi is proportional to exp(Za·Zi/τ), indicating that a large weight is put on a negative sample when it is more similar to the anchor Za, in which case, intuitively, its dimensional values tend to have a high correlation with those of Za. Thus, re, which contains such negative representations with high weights, tends to decrease dimensional correlation. To verify this intuition, we measure the cosine similarity between re and the gradient on Za induced by a correlation regularization loss. The results in Fig. 5 (c) show that their gradient similarity is high for a wide range of temperature values, especially when τ is around 0.1 or 0.2, suggesting re plays a similar role to an explicit regularization loss for performing de-correlation. Replacing re with oe leads to a low cosine similarity, which is expected because oe has no de-correlation effect.
The results of InfoNCE in Fig. 6 resemble those of SimSiam in terms of the overall trend. For example, InfoNCE also decreases the covariance value during training. Moreover, we also report the results of InfoNCE where re is removed to exclude the de-correlation effect. Removing re from the InfoNCE loss leads to a high covariance value during the whole training. Removing re also leads to a significant performance drop, which echoes the finding in (Bardes et al., 2021) that dimensional de-correlation is essential for competitive performance. Regarding how re in InfoNCE achieves de-correlation, formally, we hypothesize that the de-correlation effect in InfoNCE arises from the biased weights (λi) on negative samples. This hypothesis is corroborated by the temperature analysis in Fig. 7. We find that a higher temperature makes the weight distribution of λi more balanced, as indicated by a higher entropy of λi, which echoes the finding in (Wang & Liu, 2021). Moreover, we observe that a higher temperature also tends to increase the covariance value. Overall, with temperature as the control variable, we find that more balanced weights among negative samples decrease the de-correlation effect, which constitutes evidence for our hypothesis.
Unifying SimSiam and InfoNCE. At first sight, there is no conceptual similarity between SimSiam and InfoNCE, and this is why the community is intrigued by the success of SimSiam without negative samples. Through decomposing the Ge into oe and re, we find that for both, their oe plays the role of de-centering and their re behaves like de-correlation. In this sense, we bring two seemingly irrelevant frameworks into a unified perspective with disentangled de-centering and de-correlation.
Beyond SimSiam and InfoNCE. In SSL, there is a trend of explicitly manipulating de-centering and de-correlation, for which W-MSE (Ermolov et al., 2021), Barlow-twins (Zbontar et al., 2021), and DINO (Caron et al., 2021) are three representative works. They often achieve performance comparable to methods based on InfoNCE or SimSiam. Towards a unified understanding of recent progress in SSL, our work is most similar to a concurrent work (Bardes et al., 2021). Their work is mainly inspired by Barlow-twins (Zbontar et al., 2021) but decomposes its loss into three explicit components. By contrast, our work is motivated by the question of how SimSiam prevents collapse without negative samples. Their work claims that the variance component (equivalent to de-centering) is an indispensable component for preventing collapse, while we find that de-correlation by itself alleviates collapse. Overall, our work helps understand various frameworks in SSL from a unified perspective, which also inspires an investigation of inter-anchor hardness-awareness (Zhang et al., 2022) for further bridging the gap between CL and non-CL frameworks in SSL.
5 TOWARDS SIMPLIFYING THE PREDICTOR IN SIMSIAM
Based on our understanding of how SimSiam prevents collapse, we demonstrate that simple components (instead of a non-linear MLP in SimSiam) in the predictor are sufficient for preventing collapse. For example, to achieve dimensional de-correlation, a single FC layer might be sufficient because a single FC layer can realize the interaction among various dimensions. On the other hand, to achieve de-centering, a single bias layer might be sufficient because a bias vector can represent the center vector. Attaching an l2-normalization layer at the end of the encoder, i.e. before the predictor, is found to be critical for achieving the above goal.
Predictor with FC layers. To learn the dimensional correlation, a single FC layer is theoretically sufficient but can be difficult to train in practice. Inspired by the observation that multiple FC layers make training more stable even though they can be mathematically equivalent to a single FC layer (Bell-Kligler et al., 2019), we adopt two consecutive FC layers, which is equivalent to removing the BN and ReLU in the original predictor.
The training can also be made more stable if a Tanh layer is applied to the adopted single FC layer after every iteration. Table 5 shows that these simplified predictors achieve performance comparable to that of the non-linear MLP.
Predictor with a bias layer. A predictor with a single bias layer can be utilized for preventing collapse (see Table 5) and the trained bias vector is found to have a cosine similarity of 0.99 with the center vector (see Table 6). A bias in the MLP predictor also has a high cosine similarity of 0.89, suggesting that it is not a coincidence. A theoretical derivation for justifying such a
high similarity as well as how this single bias layer prevents collapse are discussed in Appendix A.8.
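For concreteness, below is a minimal Pytorch sketch of the two simplified predictors discussed in this section. It is illustrative only: module and variable names (BiasPredictor, FCPredictor, dim) are our own, and details such as initialization follow the defaults in Appendix A.1 rather than a verified reference implementation. Note the l2-normalization applied to the encoder output before the predictor, which the text identifies as critical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasPredictor(nn.Module):
    # predictor with a single bias layer; the bias is expected to converge towards the center vector
    def __init__(self, dim=2048):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(dim))
    def forward(self, z):
        z = F.normalize(z, dim=1)   # l2-normalize encoder output before the predictor
        return z + self.bias

class FCPredictor(nn.Module):
    # predictor with two consecutive FC layers (no BN / ReLU), used to learn dimensional correlation
    def __init__(self, dim=2048):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
    def forward(self, z):
        z = F.normalize(z, dim=1)
        return self.fc2(self.fc1(z))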
6 CONCLUSION
We point out a hidden flaw in prior works' explanation of the success of SimSiam and propose to decompose the representation vector and analyze the decomposed components of the extra gradient. We find that its center vector gradient helps prevent collapse via the de-centering effect and its residual gradient achieves de-correlation, which also alleviates collapse. Our further analysis reveals that InfoNCE achieves the two effects in a similar manner, which bridges the gap between SimSiam and InfoNCE and contributes to a unified understanding of recent progress in SSL. Towards simplifying the predictor, we have also found that a single bias layer is sufficient for preventing collapse.
ACKNOWLEDGEMENT
This work was partly supported by the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under grant No.2019-001396 (Development of framework for analyzing, detecting, mitigating of bias in AI model and training data), No.2021-0-01381 (Development of Causal AI through Video Understanding and Reinforcement Learning, and Its Applications to Real Environments) and No.2021-0-02068 (Artificial Intelligence Innovation Hub). During the rebuttal, multiple anonymous reviewers provided valuable advice that significantly improved the quality of this work. Thank you all.
A APPENDIX
A.1 EXPERIMENTAL SETTINGS
Self-supervised encoder training: Below are the settings for self-supervised encoder training. For simplicity, we mainly use the default settings in a popular open library termed solo-learn (da Costa et al., 2021).
Data augmentation and normalization: We use a series of transformations including RandomResizedCrop with scale [0.2, 1.0] and bicubic interpolation. ColorJitter (brightness 0.4, contrast 0.4, saturation 0.4, hue 0.1) is randomly applied with probability 0.8. RandomGrayscale is applied with p = 0.2, and horizontal flip with p = 0.5. The images are normalized with mean (0.4914, 0.4822, 0.4465) and std (0.247, 0.243, 0.261).
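The following torchvision sketch illustrates how this pipeline can be assembled. It mirrors the settings listed above (the 32x32 image size is an assumption matching the CIFAR normalization statistics); the exact composition in the solo-learn library may differ slightly.

from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.2, 1.0),
                                 interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.4914, 0.4822, 0.4465), std=(0.247, 0.243, 0.261)),
])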
Network architecture and initialization: The backbone architecture is ResNet-18. The projection head contains three fully-connected (FC) layers, each followed by Batch Norm (BN) and ReLU, except that the ReLU after the final FC layer is removed, i.e. FC1+BN+ReLU+FC2+BN+ReLU+FC3+BN. All projection FC layers have 2048 neurons for the input, output, and hidden dimensions. The predictor head includes two FC layers as follows: FC1 + BN + ReLU + FC2. The input and output of the predictor both have a dimension of 2048, while the hidden dimension is 512. All layers of the network use the default PyTorch initialization.
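A minimal sketch of the projection and prediction heads described above (our own illustrative code; the ResNet-18 backbone and the wiring into solo-learn are omitted):

import torch.nn as nn

# projector: FC1+BN+ReLU+FC2+BN+ReLU+FC3+BN, all widths 2048
# note: in practice the first Linear's input size must match the backbone feature dimension
projector = nn.Sequential(
    nn.Linear(2048, 2048), nn.BatchNorm1d(2048), nn.ReLU(inplace=True),
    nn.Linear(2048, 2048), nn.BatchNorm1d(2048), nn.ReLU(inplace=True),
    nn.Linear(2048, 2048), nn.BatchNorm1d(2048),
)

# predictor: FC1+BN+ReLU+FC2 with a 512-dimensional hidden layer
predictor = nn.Sequential(
    nn.Linear(2048, 512), nn.BatchNorm1d(512), nn.ReLU(inplace=True),
    nn.Linear(512, 2048),
)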
Optimizer: The SGD optimizer is used for the encoder training. The batch size M is 256 and the learning rate is linearly scaled by the formula lr × M/256 with the base learning rate lr set to 0.5. The learning rate schedule follows the cosine decay used in SimSiam. Momentum 0.9 and weight decay 1.0 × 10−5 are used for SGD. We use one GPU for each pre-training experiment. Following the practice of SimSiam, the learning rate of the predictor is fixed during the training. We use warmup training for the first 10 epochs. If not specified, by default we train the model for 1000 epochs.
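A short sketch of the corresponding optimizer setup (illustrative; the 10-epoch warmup and the fixed predictor learning rate are handled by the training library in practice, and the placeholder model stands in for the encoder plus predictor):

import torch
import torch.nn as nn

model = nn.Linear(8, 8)              # placeholder for encoder + predictor
M = 256                              # batch size
base_lr = 0.5
lr = base_lr * M / 256               # linear scaling rule

optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                            momentum=0.9, weight_decay=1e-5)
# cosine decay of the learning rate over 1000 training epochs
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)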
Online linear evaluation: For the online linear evaluation, we also follow the practice in the solo-learn library (da Costa et al., 2021). The frozen features (2048 dimensions) from the training set are extracted from the self-supervised pre-trained model and fed into a linear classifier (one FC layer with input dimension 2048 and output dimension 100). The test is performed on the validation set. The learning rate for the linear classifier is 0.1. Overall, we report Top-1 accuracy with the online linear evaluation in this work.
A.2 TWO SUB-PROBLEMS IN AO OF SIMSIAM
In the sub-problem η^t ← arg min_η L(θ^t, η), η^t, indicating the latent representation of images at step t, is actually obtained through η^t_x ← E_T[F_{θ^t}(T(x))], where, in practice, they ignore E_T[·] and sample only one augmentation T′, i.e. η^t_x ← F_{θ^t}(T′(x)). Conceptually, Chen & He equate the role of the predictor to EOA.
A.3 EXPERIMENTAL DETAILS FOR EXPLICIT EOA IN TABLE 1
In the Moving average experiment, we follow the setting in SimSiam (Chen & He, 2021) without the predictor. In the Same batch experiment, multiple augmentations, for instance 10, are applied to the same image. With these multiple augmentations, we obtain the corresponding encoded representations z_i, i ∈ [1, 10]. We minimize the cosine distance between the first representation z_1 and the average of the remaining vectors, i.e. z̄ = (1/9)∑_{i=2}^{10} z_i. The stop gradient is put on the averaged vector. We also experimented with letting the gradient flow backward through more augmentations; however, this consistently led to collapse.
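The Same batch experiment can be sketched as follows, in the same Pytorch-like pseudocode style as the other algorithm listings (f, aug, loader and update are placeholders consistent with those listings; this is our own illustration of the description above):

import torch
import torch.nn.functional as F

# f: encoder (backbone + projector), aug: random augmentation, loader/update: placeholders
for x in loader:
    views = [f(aug(x)) for _ in range(10)]       # 10 augmented views of the same images
    z1 = views[0]
    z_bar = torch.stack(views[1:]).mean(dim=0)   # explicit approximation of E_T[.]
    z_bar = z_bar.detach()                       # stop gradient on the averaged target
    z1 = F.normalize(z1, dim=1)
    z_bar = F.normalize(z_bar, dim=1)
    loss = -(z1 * z_bar).sum(dim=1).mean()       # cosine loss, no predictor
    loss.backward()
    update(f)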
A.4 EXPERIMENTAL SETUP AND RESULT TREND FOR TABLE 2.
Mirror SimSiam. Here we provide the pseudocode for the Mirror SimSiam experiment, which relates to Fig. 1 (c). Without taking the symmetric loss into account, the pseudocode is shown in Algorithm 1; taking the symmetric loss into account, it is shown in Algorithm 2.
Algorithm 1 Pytorch-like Pseudocode: Mirror SimSiam
# f: encoder (backbone + projector)
# h: predictor

for x in loader:  # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)      # augmentation
    z_a, z_b = f(x_a), f(x_b)      # projections
    p_b = h(z_b.detach())          # detach z_b but still allow gradient through p_b
    L = D_cosine(z_a, p_b)         # loss
    L.backward()                   # back-propagate
    update(f, h)                   # SGD update

def D_cosine(z, p):  # negative cosine similarity
    z = normalize(z, dim=1)  # l2-normalize
    p = normalize(p, dim=1)  # l2-normalize
    return -(z * p).sum(dim=1).mean()
Algorithm 2 Pytorch-like Pseudocode: Mirror SimSiam
# f: encoder (backbone + projector)
# h: predictor

for x in loader:  # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)      # augmentation
    z_a, z_b = f(x_a), f(x_b)      # projections
    p_b = h(z_b.detach())          # detach z_b but still allow gradient through p_b
    p_a = h(z_a.detach())          # detach z_a but still allow gradient through p_a
    L = D_cosine(z_a, p_b) / 2 + D_cosine(z_b, p_a) / 2  # symmetric loss
    L.backward()                   # back-propagate
    update(f, h)                   # SGD update

def D_cosine(z, p):  # negative cosine similarity
    z = normalize(z, dim=1)  # l2-normalize
    p = normalize(p, dim=1)  # l2-normalize
    return -(z * p).sum(dim=1).mean()
Symmetric Predictor. To implement the SimSiam with Symmetric Predictor as in Fig. 2 (b), we can just perceive the predictor as part of the new encoder, for which the pseudocode is provided in Algorithm 3. Alternatively, we can additionally train the predictor similarly as that in SimSiam, for which the training involves two losses, one for training the predictor and another for training the new encoder (the corresponding pseudocode is provided in Algorithm 4). Moreover, for the second implementation, we also experiment with another variant that fixes the predictor while optimizing the new encoder and then train the predictor alternatingly. All of them lead to collapse with a similar trend as long as the symmetric predictor is used for training the encoder. For avoiding redundancy, in Fig. 8 we only report the result of the second implementation.
Result trend. The result trends of SimSiam, Naive Siamese, Mirror SimSiam, and Symmetric Predictor are shown in Fig. 8. We observe that all architectures lead to collapse except for SimSiam. Mirror SimSiam was stopped in the middle of training because a NaN value was returned from the loss.
A.5 EXPERIMENTAL DETAILS FOR INVERSE PREDICTOR.
In the inverse predictor experiment, which relates to Fig. 2 (c), we introduce a new predictor which has the same structure as the original predictor. The training loss consists of three parts: the predictor training loss, the inverse predictor training loss, and the new encoder (old encoder + predictor) training loss. The new
Algorithm 3 Pytorch-like Pseudocode: Symmetric Predictor
# f: encoder (backbone + projector)
# h: predictor

for x in loader:  # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)      # augmentation
    z_a, z_b = f(x_a), f(x_b)      # projections
    p_a, p_b = h(z_a), h(z_b)      # predictions
    L = D(p_a, p_b) / 2 + D(p_b, p_a) / 2  # loss
    L.backward()                   # back-propagate
    update(f, h)                   # SGD update

def D(p, z):  # negative cosine similarity
    z = z.detach()           # stop gradient
    p = normalize(p, dim=1)  # l2-normalize
    z = normalize(z, dim=1)  # l2-normalize
    return -(p * z).sum(dim=1).mean()
Algorithm 4 Pytorch-like Pseudocode: Symmetric Predictor (with additional training on predictor)
# f: encoder (backbone + projector)
# h: predictor

for x in loader:  # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)      # augmentation
    z_a, z_b = f(x_a), f(x_b)      # projections
    p_a, p_b = h(z_a), h(z_b)      # predictions
    d_p_a, d_p_b = h(z_a.detach()), h(z_b.detach())  # detached predictor outputs

    # predictor training loss
    L_pred = D(d_p_a, z_b) / 2 + D(d_p_b, z_a) / 2
    # encoder training loss
    L_enc = D(p_a, d_p_b) / 2 + D(p_b, d_p_a) / 2
    L = L_pred + L_enc

    L.backward()   # back-propagate
    update(f, h)   # SGD update

def D(p, z):  # negative cosine similarity with detach on z
    z = z.detach()           # stop gradient
    p = normalize(p, dim=1)  # l2-normalize
    z = normalize(z, dim=1)  # l2-normalize
    return -(p * z).sum(dim=1).mean()
encoder F consists of the old encoder f + predictor h. The practice of gradient stop needs to be considered in the implementation. We provide the pseudocode in Algorithm 5.
Algorithm 5 Pytorch-like Pseudocode: Trainable Inverse Predictor
# f: encoder (backbone + projector)
# h: predictor
# h_inv: inverse predictor

for x in loader:  # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)      # augmentation
    z_a, z_b = f(x_a), f(x_b)      # projections
    p_a, p_b = h(z_a), h(z_b)      # predictions
    d_p_a, d_p_b = h(z_a.detach()), h(z_b.detach())  # detached predictor outputs

    # predictor training loss (to train h)
    L_pred = D(d_p_a, z_b) / 2 + D(d_p_b, z_a) / 2
    # inverse predictor training loss (to train h_inv)
    inv_p_a, inv_p_b = h_inv(p_a.detach()), h_inv(p_b.detach())
    L_inv_pred = D(inv_p_a, z_a) / 2 + D(inv_p_b, z_b) / 2
    # encoder training loss
    L_enc = D(p_a, h_inv(p_b)) / 2 + D(p_b, h_inv(p_a)) / 2
    L = L_pred + L_inv_pred + L_enc

    L.backward()          # back-propagate
    update(f, h, h_inv)   # SGD update

def D(p, z):  # negative cosine similarity with detach on z
    z = z.detach()           # stop gradient
    p = normalize(p, dim=1)  # l2-normalize
    z = normalize(z, dim=1)  # l2-normalize
    return -(p * z).sum(dim=1).mean()
A.6 REGULARIZATION LOSS
Following Zbontar et al. (2021), we compute the covariance regularization loss of the encoder output along the mini-batch dimension. The pseudocode for the de-correlation loss calculation is given in Algorithm 6.
Algorithm 6 Pytorch-like Pseudocode: De-correlation loss
# Z_a: batch of representation vectors with shape (N, D)
# N: batch size
# D: dimension of the representation vector

Z_a = Z_a - Z_a.mean(dim=0)                  # center each dimension over the batch
cov = Z_a.T @ Z_a / (N - 1)                  # D x D covariance matrix
diag = torch.eye(D)
loss = cov[~diag.bool()].pow_(2).sum() / D   # penalize squared off-diagonal covariance
A.7 GRADIENT DERIVATION AND TEMPERATURE ANALYSIS FOR INFONCE
With · indicating the cosine similarity between (l2-normalized) vectors, the InfoNCE loss can be expressed as

$$\mathcal{L}_{InfoNCE} = -\log\frac{\exp(Z_a \cdot Z_b/\tau)}{\exp(Z_a \cdot Z_b/\tau) + \sum_{i=1}^{N}\exp(Z_a \cdot Z_i/\tau)} = -\log\frac{\exp(Z_a \cdot Z_b/\tau)}{\sum_{i=0}^{N}\exp(Z_a \cdot Z_i/\tau)}, \qquad (5)$$

where N indicates the number of negative samples and Z_0 = Z_b for simplifying the notation. By treating Z_a · Z_i as the logit in a standard CE loss, we have the corresponding probability for each sample as $\lambda_i = \frac{\exp(Z_a \cdot Z_i/\tau)}{\sum_{j=0}^{N}\exp(Z_a \cdot Z_j/\tau)}$, where i = 0, 1, 2, ..., N and $\sum_{i=0}^{N}\lambda_i = 1$.
The negative gradient of the InfoNCE loss on the representation Z_a is

$$-\frac{\partial\mathcal{L}_{InfoNCE}}{\partial Z_a} = \frac{1}{\tau}(1-\lambda_0)Z_b - \frac{1}{\tau}\sum_{i=1}^{N}\lambda_i Z_i = \frac{1}{\tau}\Big(Z_b - \sum_{i=0}^{N}\lambda_i Z_i\Big) = \frac{1}{\tau}\Big(Z_b - \sum_{i=0}^{N}\lambda_i (o_z + r_i)\Big) = \frac{1}{\tau}\Big(Z_b + \big(-o_z - \sum_{i=0}^{N}\lambda_i r_i\big)\Big) \propto Z_b + \Big(-o_z - \sum_{i=0}^{N}\lambda_i r_i\Big), \qquad (6)$$
where the factor 1/τ can be absorbed into the learning rate and is omitted for simplicity of discussion. With Z_b as the basic gradient, G_e = −o_z − ∑_{i=0}^{N} λ_i r_i, for which o_e = −o_z and r_e = −∑_{i=0}^{N} λ_i r_i. When the temperature is set to a large value, λ_i = exp(Z_a·Z_i/τ) / ∑_{j=0}^{N} exp(Z_a·Z_j/τ) approaches 1/(N+1), indicated by a high entropy value (see Fig. 7). InfoNCE then degenerates to a simple contrastive loss, i.e. L_simple = −Z_a·Z_b + (1/(N+1)) ∑_{i=0}^{N} Z_a·Z_i, which repulses every negative sample with an equal force. In contrast, a relatively smaller temperature gives relatively more weight, i.e. a larger λ_i, to negative samples that are more similar to the anchor Z_a.
The influence of the temperature on the covariance and accuracy is shown in Fig. 7 (b) and (c). We observe that a higher temperature tends to decrease the effect of de-correlation, indicated by a higher covariance value, which also leads to a performance drop. This verifies our hypothesis regarding how r_e in InfoNCE achieves de-correlation, because a large temperature causes more balanced weights λ_i, which is found to weaken the de-correlation effect. For the setup, we note that the encoder is trained for 200 epochs with the default setting in solo-learn for the SimCLR framework.
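To make the role of the temperature concrete, the toy computation below (our own illustration, with random vectors standing in for real representations) shows how a larger τ flattens the weights λ_i and raises their entropy:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
Za = F.normalize(torch.randn(256), dim=0)       # anchor representation
Z = F.normalize(torch.randn(65, 256), dim=1)    # Z_0 = Z_b plus 64 negatives

for tau in (0.1, 0.2, 0.5, 1.0):
    logits = Z @ Za / tau
    lam = logits.softmax(dim=0)                 # weights lambda_i
    entropy = -(lam * lam.log()).sum()
    print(f"tau={tau}: max weight={lam.max():.3f}, entropy={entropy:.3f}")
# a larger tau yields more uniform lambda_i (higher entropy),
# i.e. a weaker bias towards negatives that are similar to the anchor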
A.8 THEORETICAL DERIVATION FOR A SINGLE BIAS LAYER
With the cosine similarity defined as

$$\mathrm{cossim}(a, b) = \frac{a \cdot b}{\|a\| \cdot \|b\|}, \qquad (7)$$

the derived gradient on the vector a is

$$\frac{\partial}{\partial a}\mathrm{cossim}(a, b) = \frac{b}{\|a\| \cdot \|b\|} - \mathrm{cossim}(a, b) \cdot \frac{a}{\|a\|^2}. \qquad (8)$$
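The closed form in Eq 8 can be checked numerically against autograd, as in the short sketch below (our own verification code, not part of the original experiments):

import torch

torch.manual_seed(0)
a = torch.randn(16, requires_grad=True)
b = torch.randn(16)

cos = (a * b).sum() / (a.norm() * b.norm())
cos.backward()

# Eq 8: b / (|a||b|) - cossim(a, b) * a / |a|^2
closed_form = b / (a.norm() * b.norm()) - cos.detach() * a.detach() / a.norm() ** 2
print(torch.allclose(a.grad, closed_form, atol=1e-6))  # True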
Eq 8 serves as a prior for the following derivations. As indicated in the main manuscript, the encoder output z_a is l2-normalized before being fed into the predictor, thus p_a = Z_a + b_p, where b_p denotes the bias layer in the predictor. The cosine similarity loss (ignoring the symmetry for simplicity) is

$$\mathcal{L}_{cosine} = -P_a \cdot Z_b = -\frac{p_a}{\|p_a\|} \cdot \frac{z_b}{\|z_b\|}. \qquad (9)$$
The gradient on p_a is derived as

$$-\frac{\partial\mathcal{L}_{cosine}}{\partial p_a} = \frac{z_b}{\|z_b\| \cdot \|p_a\|} - \mathrm{cossim}(Z_a, Z_b) \cdot \frac{p_a}{\|p_a\|^2} = \frac{1}{\|p_a\|}\Big(\frac{z_b}{\|z_b\|} - \mathrm{cossim}(Z_a, Z_b) \cdot P_a\Big) = \frac{1}{\|p_a\|}\Big(Z_b - \mathrm{cossim}(Z_a, Z_b) \cdot \frac{Z_a + b_p}{\|p_a\|}\Big) = \frac{1}{\|p_a\|}\Big((o_z + r_b) - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|} \cdot (o_z + r_a + b_p)\Big) = \frac{1}{\|p_a\|}\big((o_z + r_b) - m \cdot (o_z + r_a + b_p)\big) = \frac{1}{\|p_a\|}\big((1-m)\,o_z - m\,b_p + r_b - m\,r_a\big), \qquad (10)$$

where m = cossim(Z_a, Z_b)/‖p_a‖.
Given that p_a = Z_a + b_p, the negative gradient on b_p is the same as that on p_a:

$$-\frac{\partial\mathcal{L}_{cosine}}{\partial b_p} = -\frac{\partial\mathcal{L}_{cosine}}{\partial p_a} = \frac{1}{\|p_a\|}\big((1-m)\,o_z - m\,b_p + r_b - m\,r_a\big). \qquad (11)$$
We assume that the training is stable and the bias layer converges to a fixed value, i.e. $-\partial\mathcal{L}_{cosine}/\partial b_p = 0$. Thus, the converged b_p satisfies the constraint

$$\frac{1}{\|p_a\|}\big((1-m)\,o_z - m\,b_p + r_b - m\,r_a\big) = 0 \;\;\Longrightarrow\;\; b_p = \frac{1-m}{m}\,o_z + \frac{1}{m}\,r_b - r_a. \qquad (12)$$
With a batch of samples, the averages of (1/m)·r_b and r_a are expected to be close to 0 by the definition of the residual vector. Thus, the bias layer vector is expected to converge to

$$b_p = \frac{1-m}{m}\,o_z. \qquad (13)$$
Rationale behind the high similarity between b_p and o_z. The above theoretical derivation shows that the parameters in the bias layer are expected to converge to the vector ((1−m)/m)·o_z. This justifies why the empirically observed cosine similarity between b_p and o_z is as high as 0.99. Ideally, it should be 1; the small deviation is expected once the training dynamics are taken into account.
Rationale behind how a single bias layer prevents collapse. Given that p_a = Z_a + b_p, the negative gradient on Z_a is

$$-\frac{\partial\mathcal{L}_{cosine}}{\partial Z_a} = -\frac{\partial\mathcal{L}_{cosine}}{\partial p_a} = \frac{1}{\|p_a\|}\Big(Z_b - \mathrm{cossim}(Z_a, Z_b) \cdot \frac{Z_a + b_p}{\|p_a\|}\Big) = \frac{1}{\|p_a\|}Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2}Z_a - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2}b_p. \qquad (14)$$
Here, we highlight that since the loss −Z_a · Z_a = −1 is a constant having zero gradient on the encoder, the term −(cossim(Z_a, Z_b)/‖p_a‖²)·Z_a can be seen as a dummy term. Considering Eq 13 and m = cossim(Z_a, Z_b)/‖p_a‖, we have b_p = (‖p_a‖/cossim(Z_a, Z_b) − 1)·o_z. The above equation is then equivalent to
$$-\frac{\partial\mathcal{L}_{cosine}}{\partial Z_a} = \frac{1}{\|p_a\|}Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2}b_p = \frac{1}{\|p_a\|}Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2}\Big(\frac{\|p_a\|}{\mathrm{cossim}(Z_a, Z_b)} - 1\Big)o_z = \frac{1}{\|p_a\|}Z_b - \frac{1}{\|p_a\|}\Big(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\Big)o_z \propto Z_b - \Big(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\Big)o_z. \qquad (15)$$
With Z_b as the basic gradient, the extra gradient component is G_e = −(1 − cossim(Z_a, Z_b)/‖p_a‖)·o_z. Given that p_a = Z_a + b_p and ‖Z_a‖ = 1, ‖p_a‖ < 1 only when Z_a is negatively correlated with b_p. In practice, however, Z_a and b_p are often positively correlated to some extent due to their shared center vector component; in other words, ‖p_a‖ > 1. Moreover, cossim(Z_a, Z_b) is smaller than 1, thus −(1 − cossim(Z_a, Z_b)/‖p_a‖) < 0, suggesting G_e consists of negative o_z with a de-centering effect. This derivation justifies why a single bias layer can help alleviate collapse.
B DISCUSSION: DOES BN HELP AVOID COLLAPSE?
To our knowledge, our work is the first to revisit and refute the explanatory claims in (Chen & He, 2021). Several works, however, have attempted to demystify the success of BYOL (Grill et al., 2020), a close variant of SimSiam. The success has been ascribed to BN in (Fetterman & Albrecht, 2020); however, (Richemond et al., 2020) refutes their claim. Since the role of the intermediate BNs is ascribed to stabilizing training (Richemond et al., 2020; Chen & He, 2021), we only discuss the final BN in the SimSiam encoder. Note that, under our Conjecture1, the final BN that removes the mean of the representation vector is supposed to have a de-centering effect. By default SimSiam has such a BN at the end of its encoder, yet the model still collapses once the predictor and stop gradient are removed. Why would such a BN not prevent collapse in this case? Interestingly, we observe that such a BN can help alleviate collapse with a simple MSE loss (see Fig. 9); however, its performance is inferior to the cosine loss-based SimSiam (with predictor and stop gradient) due to the lack of the de-correlation effect present in SimSiam. Note that the cosine loss is in essence equivalent to an MSE loss on the l2-normalized vectors. This phenomenon can be interpreted as the l2-normalization re-introducing a mean after the BN removes it. Thus, with such l2-normalization in the MSE loss, i.e. adopting the default cosine loss, it is important to remove the oe from the optimization target. The results with the loss −Za · sg(Zb + oe) in Table 3 show that this indeed prevents collapse and verifies the above interpretation. | 1. What is the main contribution of the paper, and how does it explain the behavior of SimSiam and Info-NCE?
2. What are the strengths and weaknesses of the paper's analysis and experiments?
3. Are there any concerns or confusion regarding the paper's interpretation of SimSiam's explanation?
4. How does the paper's theory unify self-supervised learning with and without negative samples?
5. Can you provide more details about the experimental results in section 2.2 and 2.3?
6. Are there any typos or unclear concepts in the paper that need to be addressed? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes another explanation for why SimSiam can avoid collapse without negative samples. Specifically, the paper decomposes the gradient of learned representation as center vector and residual vector and finds that the center vector gradient has the de-centering effect and the residual gradient vector has the de-correlation effect. Such an explanation can also be applied to Info-NCE, which unifies the theory of self-supervised learning with and without negative samples.
Review
Pros:
The paper investigates the effects of center vectors and residual vectors in detail for both SimSiam and Info-NCE, which provides a unified explanation.
The results of SimSiam++ which shows a simple bias as a predictor can also avoid collapse without negative samples are interesting.
Cons: 0. The writing of the paper needs to be improved. Many concepts are not explained or defined very clearly.
The original SimSiam paper only claimed that "The usage of h may fill this gap (of missing EOA)." I think it's clear that the predictor does not learn to approximate the EOA. So I don't think the paper's interpretation of SimSiam's explanation is correct.
In section 2.2, the paper claims that explicit EOA does not prevent collapse. But the experimental details are not explained very clearly here. I'm wondering whether the paper still uses one or two augmentations as the predictor's outputs or all the augmentations are used without stop-gradient.
In section 2.3, the paper mentions "The results in Fig. 3(b)" show that it still leads to collapse. But I cannot find the collapsed results in Fig. 3(b). Besides, to prove Mirror SimSiam does not work (Fig 1. (c)), the authors should not apply stop-gradient to the predictor, because it's clear in the original SimSiam paper that fixed init does not work for the predictor. One possible way is to apply the gradient on z_a and p_b, and apply stop-gradient on z_b.
In section 3.1, the paper mentions "Note that Z is l2-normalized, thus the trend of mo and mr is expected to be opposite of each other." This does not always hold.
Possible typos:
Section 2.3: Fig1 (c) and Fig2 (a) both lead to success ==> Fig1 (c) and Fig2 (a) both lead to failure
Section 3.1: loss - Z_a * sg(o_z) and loss - Z_a * sg(Z_a - o_z) ==> loss - Z_a \cdot sg(o_z) and loss - Z_a \cdot sg(Z_a - o_z) Figure 3: - Z_a \cdot sg(Z_b - E(Z_b) ==> - Z_a \cdot sg(Z_b - E(Z_b))
Section 3.2: Z_n and r_n are used without definitions, which I guess means the representation for the negative examples. |
ICLR | Title
How Does SimSiam Avoid Collapse Without Negative Samples? A Unified Understanding with Self-supervised Contrastive Learning
Abstract
To avoid collapse in self-supervised learning (SSL), a contrastive loss is widely used but often requires a large number of negative samples. Without negative samples yet achieving competitive performance, a recent work (Chen & He, 2021) has attracted significant attention for providing a minimalist simple Siamese (SimSiam) method to avoid collapse. However, the reason for how it avoids collapse without negative samples remains not fully clear and our investigation starts by revisiting the explanatory claims in the original SimSiam. After refuting their claims, we introduce vector decomposition for analyzing the collapse based on the gradient analysis of the l2-normalized representation vector. This yields a unified perspective on how negative samples and SimSiam alleviate collapse. Such a unified perspective comes timely for understanding the recent progress in SSL.
1 INTRODUCTION
Beyond the success of NLP (Lan et al., 2020; Radford et al., 2019; Devlin et al., 2019; Su et al., 2020; Nie et al., 2020), self-supervised learning (SSL) has also shown its potential in the field of vision tasks (Li et al., 2021; Chen et al., 2021; El-Nouby et al., 2021). Without the ground-truth label, the core of most SSL methods lies in learning an encoder with augmentation-invariant representation (Bachman et al., 2019; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Grill et al., 2020). Specifically, they often minimize the representation distance between two positive samples, i.e. two augmented views of the same image, based on a Siamese network architecture (Bromley et al., 1993). It is widely known that for such Siamese networks there exists a degenerate solution, i.e. all outputs “collapsing” to an undesired constant (Chen et al., 2020a; Chen & He, 2021). Early works have attributed the collapse to lacking a repulsive component in the optimization goal and adopted contrastive learning (CL) with negative samples, i.e. views of different samples, to alleviate this problem. Introducing momentum into the target encoder, BYOL shows that Siamese architectures can be trained with only positive pairs. More recently, SimSiam (Chen & He, 2021) has caught great attention by further simplifying BYOL by removing the momentum encoder, which has been seen as a major milestone achievement in SSL for providing a minimalist method for achieving competitive performance. However, more investigation is required for the following question:
How does SimSiam avoid collapse without negative samples?
Our investigation starts with revisiting the explanatory claims in the original SimSiam paper (Chen & He, 2021). Notably, two components, i.e. stop gradient and predictor, are essential for the success of SimSiam (Chen & He, 2021). The reason has been mainly attributed to the stop gradient (Chen & He, 2021) by hypothesizing that it implicitly involves two sets of variables and SimSiam behaves like alternating between optimizing each set. Chen & He argue that the predictor h is helpful in SimSiam because h fills the gap to approximate expectation over augmentations (EOA).
Unfortunately, the above explanatory claims are found to be flawed due to reversing the two paths with and without gradient (see Sec. 2.2). This motivates us to find an alternative explanation, for which we introduce a simple yet intuitive framework for facilitating the analysis of collapse in SSL.
∗equal contribution
Specifically, we propose to decompose a representation vector into center and residual components. This decomposition facilitates understanding which gradient component is beneficial for avoiding collapse. Under this framework, we show that a basic Siamese architecture cannot prevent collapse, for which an extra gradient component needs to be introduced. With SimSiam interpreted as processing the optimization target with an inverse predictor, the analysis of its extra gradient shows that (a) its center vector helps prevent collapse via the de-centering effect; (b) its residual vector achieves dimensional de-correlation which also alleviates collapse.
Moreover, under the same gradient decomposition, we find that the extra gradient caused by negative samples in InfoNCE (He et al., 2019; Chen et al., 2020b;a; Tian et al., 2019; Khosla et al., 2020) also achieves de-centering and de-correlation in the same manner. It contributes to a unified understanding of various frameworks in SSL, which also inspires the investigation of hardness-awareness (Wang & Liu, 2021) from the inter-anchor perspective (Zhang et al., 2022) for further bridging the gap between CL and non-CL frameworks in SSL. Finally, simplifying the predictor for a more explainable SimSiam, we show that a single bias layer is sufficient for preventing collapse.
The basic experimental settings for our analysis are detailed in Appendix A.1 with a more specific setup discussed in the context. Overall, our work is the first attempt for performing a comprehensive study on how SimSiam avoids collapse without negative samples. Several works, however, have attempted to demystify the success of BYOL (Grill et al., 2020), a close variant of SimSiam. A technical report (Fetterman & Albrecht, 2020) has suggested the importance of batch normalization (BN) in BYOL for its success, however, a recent work (Richemond et al., 2020) refutes their claim by showing BYOL works without BN, which is discussed in Appendix B.
2 REVISITING SIMSIAM AND ITS EXPLANATORY CLAIMS
l2-normalized vector and optimization goal. SSL trains an encoder f for learning discriminative representation and we denote such representation as a vector z, i.e. f(x) = z where x is a certain input. For the augmentation-invariant representation, a straightforward goal is to minimize the distance between the representations of two positive samples, i.e. augmented views of the same image, for which mean squared error (MSE) is a default choice. To avoid scale ambiguity, the vectors are often l2-normalized, i.e. Z = z/||z|| (Chen & He, 2021), before calculating the MSE:
$$\mathcal{L}_{MSE} = (Z_a - Z_b)^2/2 - 1 = -Z_a \cdot Z_b = \mathcal{L}_{cosine}, \qquad (1)$$
which shows the equivalence of a normalized MSE loss to the cosine loss (Grill et al., 2020).
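Eq 1 can be verified numerically with a few lines of PyTorch (our own sanity check, not from the paper): for l2-normalized vectors, the shifted MSE and the negative cosine similarity coincide.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
Za = F.normalize(torch.randn(8, 2048), dim=1)
Zb = F.normalize(torch.randn(8, 2048), dim=1)

mse_loss = ((Za - Zb) ** 2).sum(dim=1) / 2 - 1     # per-sample normalized MSE, shifted by 1
cosine_loss = -(Za * Zb).sum(dim=1)                # per-sample negative cosine similarity
print(torch.allclose(mse_loss, cosine_loss, atol=1e-6))  # True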
Collapse in SSL and solution of SimSiam. Based on a Siamese architecture, the loss in Eq 1 causes the collapse, i.e. f always outputs a constant regardless of the input variance. We refer to this Siamese architecture with loss Eq 1 as Naive Siamese in the remainder of paper. Contrastive loss with negative samples is a widely used solution (Chen et al., 2020a). Without using negative samples, SimSiam solves the collapse problem via predictor and stop gradient, based on which the encoder is optimized with a symmetric loss:
$$\mathcal{L}_{SimSiam} = -(P_a \cdot sg(Z_b) + P_b \cdot sg(Z_a)), \qquad (2)$$
where sg(·) is stop gradient and P is the normalized output of the predictor h, i.e. p = h(z) and P = p/||p||.
2.1 REVISING EXPLANATORY CLAIMS IN SIMSIAM
Interpreting stop gradient as AO. Chen & He hypothesize that the stop gradient in Eq 2 is an implementation of Alternating between the Optimization of two sub-problems, which is denoted as AO. Specifically, with the loss considered as L(θ, η) = E_{x,T}[‖F_θ(T(x)) − η_x‖²], the optimization objective min_{θ,η} L(θ, η) can be solved by alternating η^t ← arg min_η L(θ^t, η) and θ^{t+1} ← arg min_θ L(θ, η^t). It is acknowledged that this hypothesis does not fully explain why the collapse is prevented (Chen & He, 2021). Nonetheless, they mainly attribute the success of SimSiam to the stop gradient, with the interpretation that AO might make it difficult to approach a constant ∀x. Interpreting predictor as EOA. The AO problem (Chen & He, 2021) is formulated independently of the predictor h, for which they believe that the usage of the predictor h is related to approximating EOA to fill the gap of ignoring E_T[·] in a sub-problem of AO. The approximation of E_T[·] is summarized
in Appendix A.2. Chen & He support their interpretation by proof-of-concept experiments. Specifically, they show that updating η_x with a moving average, η^t_x ← m·η^t_x + (1−m)·F_{θ^t}(T′(x)), can help prevent collapse without the predictor (see Fig. 1 (b)). Given that the training completely fails when the predictor and moving average are both removed, at first sight, their reasoning seems valid.
2.2 DOES THE PREDICTOR FILL THE GAP TO APPROXIMATE EOA?
Reasoning flaw. Considering the stop gradient, we divide the framework into two sub-models with different paths and term them Gradient Path (GP) and Stop Gradient Path (SGP). For SimSiam, only the sub-model with GP includes the predictor (see Fig. 1 (a)). We point out that their reasoning flaw of predictor analysis lies in the reverse of GP and SGP. By default, the moving-average sub-model, as shown in Fig. 1 (b), is on the same side as SGP. Note that Fig. 1 (b) is conceptually similar to Fig. 1 (c) instead of Fig. 1 (a). It is worth mentioning that the Mirror SimSiam in Fig. 1 (c) is what stop gradient in the original SimSiam avoids. Therefore, it is problematic to perceive h as EOA.
Explicit EOA does not prevent collapse. (Chen & He, 2021) points out that "in practice, it would be unrealistic to actually compute the expectation ET [·]. But it may be possible for a neural network (e.g., the predictor h) to learn to predict the expectation, while the sampling of T is implicitly distributed across multiple epochs." If implicitly sampling across multiple epochs is beneficial, explicitly sampling a sufficiently large number N of augmentations in a batch with the latest model should be even more beneficial for approximating ET [·]. However, Table 1 shows that the collapse still occurs, which suggests that the equivalence between predictor and EOA does not hold.
2.3 ASYMMETRIC INTERPRETATION OF PREDICTOR WITH STOP GRADIENT IN SIMSIAM
Symmetric Predictor does not prevent collapse. The difference between Naive Siamese and SimSiam lies in whether the gradient in backward propagation flows through a predictor; however, we show that this propagation helps avoid collapse only when the predictor is not included in the SGP path. With h trained the same way as in Eq 2, we optimize the encoder f by replacing the Z in Eq 2 with P. The results in Table 2 show that this still leads to collapse. Actually, this is well expected by perceiving h to be part of the new encoder F, i.e. p = F(x) = h(f(x)). In other words, the symmetric architectures with and without the predictor h both lead to collapse.
Predictor with stop gradient is asymmetric. Clearly, how SimSiam avoids collapse lies in its asymmetric architecture, i.e. one path with h and the other without h. Under this asymmetric architecture, the role of stop gradient is to only allow the path with the predictor to be optimized with the encoder output as the target, not vice versa. In other words, SimSiam avoids collapse by excluding Mirror SimSiam (Fig. 1 (c)), which has a mirror-like version of Eq 2 as its loss, L_Mirror = −(P_a · Z_b + P_b · Z_a), where the stop gradient is put on the input of h, i.e. p_a = h(sg[z_a]) and p_b = h(sg[z_b]).
Predictor vs. inverse predictor. We interpret h as a function mapping from z to p, and introduce a conceptual inverse mapping h−1, i.e. z = h−1(p). Here, as shown in Table 2, SimSiam with symmetric predictor (Fig. 2 (b)) leads to collapse, while SimSiam (Fig. 1 (a)) avoids collapse. With the conceptual h−1, we interpret Fig. 1 (a) the same as Fig. 2 (c) which differs from Fig. 2 (b) via changing the optimization target from pb to zb, i.e. zb = h−1(pb). This interpretation
suggests that the collapse can be avoided by processing the optimization target with h−1. By contrast, Fig. 1 (c) and Fig. 2 (a) both lead to collapse, suggesting that processing the optimization target with h is not beneficial for preventing collapse. Overall, asymmetry alone does not guarantee collapse avoidance, which requires the optimization target to be processed by h−1 not h.
Trainable inverse predictor and its implication on EOA. In the above, we propose a conceptual inverse predictor h−1 in Fig. 2 (c), however, it remains yet unknown whether such an inverse predictor is experimentally trainable. A detailed setup for this investigation is reported in Appendix A.5. The results in Fig. 3 show that a learnable h−1 leads to slightly inferior performance, which is expected because h−1 cannot make the trainable inverse predictor output z∗b completely the same as zb. Note that it would be equivalent to SimSiam if z∗b = zb. Despite a slight performance drop, the results confirm that h−1 is trainable. The fact that h−1 is trainable provides additional evidence that the role h plays in SimSiam is not EOA
because theoretically h−1 cannot restore a random augmentation T ′ from an expectation p, where p = h(z) = ET [ Fθt(T (x)) ] .
3 VECTOR DECOMPOSITION FOR UNDERSTANDING COLLAPSE
By default, InfoNCE (Chen et al., 2020a) and SimSiam (Chen & He, 2021) both adopt l2-normalization in their loss for avoiding scale ambiguity. We treat the l2-normalized vector, i.e. Z, as the encoder output, which significantly simplifies the gradient derivation and the following analysis.
Vector decomposition. For the purpose of analysis, we propose to decompose Z into two parts, Z = o + r, where o and r denote the center vector and residual vector respectively. Specifically, the center vector o is defined as the average of Z over the whole representation space, o_z = E[Z]. However, we approximate it with all vectors in the current mini-batch, i.e. o_z = (1/M)∑_{m=1}^{M} Z_m, where M is the mini-batch size. We define the residual vector r as the residual part of Z, i.e. r = Z − o_z.
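The decomposition, together with the ratios m_o and m_r used later to indicate collapse, can be computed directly from a mini-batch of l2-normalized representations. The sketch below is our own illustration, with random data standing in for encoder outputs.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
Z = F.normalize(torch.randn(256, 2048), dim=1)    # mini-batch of l2-normalized representations

o_z = Z.mean(dim=0)                               # center vector (mini-batch estimate of E[Z])
r = Z - o_z                                       # residual vectors

m_o = o_z.norm() / Z.norm(dim=1).mean()           # ratio of the center component, ||o|| / ||z||
m_r = r.norm(dim=1).mean() / Z.norm(dim=1).mean() # ratio of the residual component, ||r|| / ||z||
print(m_o.item(), m_r.item())                     # collapse corresponds to m_o -> 1 and m_r -> 0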
3.1 COLLAPSE FROM THE VECTOR PERSPECTIVE
Collapse: from result to cause. A Naive Siamese is well expected to collapse since the loss is designed to minimize the distance between positive samples, for which a constant constitutes an optimal solution to minimize such loss. When the collapse occurs, ∀i,Zi = 1M ∑M m=1 Zm = oz , where i denotes a random sample index, which shows the constant vector is oz in this case. This interpretation only suggests a possibility that a dominant o can be one of the viable solutions, while the optimization, such as SimSiam, might still lead to a non-collapse solution. This merely describes o as the consequence of the collapse, and our work investigates the cause of such collapse through analyzing the influence of individual gradient components, i.e. o and r during training.
Competition between o and r. Complementary to the Standard Deviation (Std) (Chen & He, 2021) for indicating collapse, we introduce the ratio of o in z, i.e. m_o = ||o||/||z||, where ||·|| is the L2 norm. Similarly, the ratio of r in z is defined as m_r = ||r||/||z||. When collapse happens, i.e. all vectors Z are close to the center vector o, m_o approaches 1 and m_r approaches 0, which is not desirable for SSL. A desirable case would be a relatively small m_o and a relatively large m_r, suggesting a relatively small (large) contribution of o (r) in each Z. We interpret the cause of collapse as a competition between o and r where o dominates over r, i.e. m_o ≫ m_r. For Eq 1, the derived negative gradient on Z_a (ignoring Z_b for simplicity due to symmetry) is

$$G_{cosine} = -\frac{\partial\mathcal{L}_{MSE}}{\partial Z_a} = Z_b - Z_a \iff -\frac{\partial\mathcal{L}_{cosine}}{\partial Z_a} = Z_b, \qquad (3)$$

where the gradient component Z_a is a dummy term because the loss −Z_a · Z_a = −1 is a constant having zero gradient on the encoder f.
Conjecture1. With Z_a = o_z + r_a, we conjecture that the gradient component o_z is expected to update the encoder to boost the center vector and thus increase m_o, while the gradient component r_a is expected to behave in the opposite direction and increase m_r. A random gradient component is expected to have a relatively small influence.
To verify the above conjecture, we revisit the dummy gradient term Z_a. We design the losses −Z_a · sg(o_z) and −Z_a · sg(Z_a − o_z) to show the influence of the gradient components o_z and r_a respectively. The results in Fig. 4 show that the gradient component o_z has the effect of increasing m_o while decreasing m_r. On the contrary, r_a helps increase m_r while decreasing m_o. Overall, the results verify Conjecture1.
3.2 EXTRA GRADIENT COMPONENT FOR ALLEVIATING COLLAPSE
Revisit collapse in a symmetric architecture. Based on Conjecture1, we here provide an intuitive interpretation of why a symmetric Siamese architecture, such as Fig. 2 (a) and (b), cannot be trained without collapse. Take Fig. 2 (a) as an example: the gradient in Eq 3 can be interpreted in two equivalent forms, from which we choose Z_b − Z_a = (o_z + r_b) − (o_z + r_a) = r_b − r_a. Since r_b comes from the same positive pair as r_a, it is expected that r_b also increases m_r; however, this effect is expected to be smaller than that of r_a, thus causing collapse.
Basic gradient and Extra gradient components. The negative gradient on Za in Fig. 2 (a) is derived as Zb, while that on Pa in Fig. 2 (b) is derived as Pb. We perceive Zb and Pb in these basic Siamese architectures as the Basic Gradient. Our above interpretation shows that such basic components cannot prevent collapse, for which an Extra Gradient component, denoted as Ge, needs to be introduced to break the symmetry. As the term suggests, Ge is defined as a gradient term that is relative to the basic gradient in a basic Siamese architecture. For example, negative samples can be introduced to Naive Siamese (Fig. 2 (a)) for preventing collapse, where the extra gradient caused by negative samples can thus be perceived as Ge with Zb as the basic gradient. Similarly, we can also disentangle the negative gradient on Pa in SimSiam (Fig. 1 (a)), i.e. Zb, into a basic gradient (which is Pb) and Ge which is derived as Zb −Pb (note that Zb = Pb + Ge). We analyze how Ge prevents collapse via studying the independent roles of its center vector oe and residual vector re.
3.3 A TOY EXAMPLE EXPERIMENT WITH NEGATIVE SAMPLE
Which repulsive component helps avoid collapse? Existing works often attribute the collapse in Naive Siamese to lacking a repulsive part in the optimization. This explanation has motivated previous works to adopt contrastive learning, i.e. attracting the positive samples while repulsing the negative samples. We experiment with a simple triplet loss1, L_tri = −Z_a · sg(Z_b − Z_n), where Z_n indicates the representation of a Negative sample. The derived negative gradient on Z_a is Z_b − Z_n, where Z_b is the basic gradient component and thus G_e = −Z_n in this setup. For a sample representation, what determines it as a positive sample for attracting or a negative sample for repulsing is the residual component, so it might be tempting to interpret r_e as the key component of the repulsive part that avoids the collapse. However, the results in Table 3 show that the component of G_e beneficial for preventing collapse is o_e instead of r_e. Specifically, to explore the individual influence of o_e and r_e in G_e, we design two experiments by removing one component while keeping the other one. In the first experiment, we remove r_e in G_e while keeping o_e. By contrast, o_e is removed while keeping r_e in the second experiment. In contrast to what existing explanations may suggest, we find that it is the center component o_e that prevents collapse. With Conjecture1, a gradient component alleviates collapse if it has a negative center vector. In this setup, o_e = −o_z, thus o_e plays a de-centering role for preventing collapse. On the contrary, r_e does not prevent collapse and keeping r_e even decreases the performance (36.21% < 47.41%). Since the negative sample is randomly chosen, r_e just behaves like random noise in the optimization and decreases performance. A toy illustration of this ablation is sketched below.
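The component ablation above can be sketched as follows (Pytorch-like, our own illustration; Z_batch stands for the batch of representations used to estimate the center vector, and the two flags mimic the two ablation experiments):

import torch
import torch.nn.functional as F

def triplet_target(Zb, Zn, Z_batch, keep_oe=True, keep_re=True):
    # G_e = -Z_n = -(o_z + r_n); optionally drop its center or residual part
    o_z = Z_batch.mean(dim=0)                 # center vector estimated from the batch
    r_n = Zn - o_z                            # residual part of the negative sample
    Ge = -(o_z if keep_oe else 0.0) - (r_n if keep_re else 0.0)
    return (Zb + Ge).detach()                 # stop gradient on the target, as in L_tri

torch.manual_seed(0)
Z = F.normalize(torch.randn(256, 128), dim=1)   # stand-in for encoder outputs
Za, Zb, Zn = Z[0], Z[1], Z[2]
target = triplet_target(Zb, Zn, Z, keep_oe=True, keep_re=False)
loss = -(Za * target).sum()   # -Z_a . sg(Z_b - o_z): keeps only the de-centering part of G_e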
3.4 DECOMPOSED GRADIENT ANALYSIS IN SIMSIAM
It is challenging to derive the gradient on the encoder output in SimSiam due to a nonlinear MLP module in h. The negative gradient on Pa for LSimSiam in Eq 2 can be derived as
$$G_{SimSiam} = -\frac{\partial\mathcal{L}_{SimSiam}}{\partial P_a} = Z_b = P_b + (Z_b - P_b) = P_b + G_e, \qquad (4)$$

where G_e indicates the aforementioned extra gradient component.

Table 4: Gradient component analysis for SimSiam.
oe | re | Collapse | Top-1 (%)
X  | X  | ×        | 66.62
X  | ×  | ×        | 48.08
×  | X  | ×        | 66.15
×  | ×  | X        | 1

To investigate the influence of oe and re on the collapse, similar to the analysis with the toy example experiment in Sec. 3.3, we design the experiment by removing one component while keeping the other. The results are reported in Table 4. As expected, the model collapses when both components in Ge are removed and the best performance is achieved when both components are kept. Interestingly, the model does not collapse when
either oe or re is kept. To start, we analyze how oe affects the collapse based on Conjecture1.
How oe alleviates collapse in SimSiam. Here, o_p is used to denote the center vector of P, to differentiate it from the o_z introduced above for denoting that of Z. In this setup G_e = Z_b − P_b, thus the center component of the extra gradient is derived to be o_e = o_z − o_p. With Conjecture1, it is well expected that o_e helps prevent collapse if o_e contains negative o_p, since the analyzed vector is P_a. To determine the amount of the component o_p existing in o_e, we measure the cosine similarity between o_e − η_p·o_p and o_p for a wide range of η_p. The results in Fig. 5 (a) show that their cosine similarity is zero when η_p is around −0.5, suggesting o_e contains approximately −0.5·o_p. With Conjecture1, this negative η_p explains why SimSiam avoids collapse from the perspective of de-centering. The sketch below illustrates this measurement.
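A minimal version of the η_p sweep (our own illustrative code; in the actual experiment o_e and o_p are computed from trained SimSiam representations rather than random vectors):

import torch
import torch.nn.functional as F

def residual_coefficient(o_e, o_p, etas):
    # find eta such that (o_e - eta * o_p) is orthogonal to o_p,
    # i.e. the amount of o_p contained in o_e
    for eta in etas:
        v = o_e - eta * o_p
        c = F.cosine_similarity(v, o_p, dim=0)
        print(f"eta={eta:+.2f}: cos(o_e - eta*o_p, o_p) = {c:+.3f}")

torch.manual_seed(0)
o_p = torch.randn(2048)
o_e = -0.5 * o_p + 0.1 * torch.randn(2048)   # toy o_e containing roughly -0.5 * o_p
residual_coefficient(o_e, o_p, etas=[x / 10 for x in range(-10, 11, 2)])
# the cosine similarity crosses zero near eta = -0.5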
How oe causes collapse in Mirror SimSiam. As mentioned above, the collapse occurs in Mirror SimSiam, which can also be explained by analyzing its o_e. Here, o_e = o_p − o_z, for which we evaluate the amount of the component o_z existing in o_e by reporting the similarity between o_e − η_z·o_z and o_z. The results in Fig. 5 (a) show that their cosine similarity is zero when η_z is set to around 0.2. This positive η_z explains why Fig. 1 (c) causes collapse from the perspective of de-centering.

1 Note that, for simplicity, the triplet loss here does not have the clipping form as in Schroff et al. (2015).
Overall, we find that processing the optimization target with h−1, as in Fig. 2 (c), alleviates collapse (ηp ≈ −0.5), while processing it with h, as in Fig. 1(c), actually strengthens the collapse (ηz ≈ 0.2). In other words, via the analysis of oe, our results help explain how SimSiam avoids collapse as well as how Mirror SimSiam causes collapse from a straightforward de-centering perspective.
Relation to prior works. Motivated from preventing the collapse to a constant, multiple prior works, such as W-MSE (Ermolov et al., 2021), Barlow-twins (Zbontar et al., 2021), DINO (Caron et al., 2021), explicitly adopt de-centering to prevent collapse. Despite various motivations, we find that they all implicitly introduce an oe that contains a negative center vector. The success of their approaches aligns well with our Conjecture1 as well as our above empirical results. Based on our findings, we argue that the effect of de-centering can be perceived as oe having a negative center vector. With this interpretation, we are the first to demonstrate that how SimSiam with predictor and stop gradient avoids collapse can be explained from the perspective of de-centering.
Beyond de-centering for avoiding collapse. In the toy example experiment in Sec. 3.3, re is found to be not beneficial for preventing collapse and keeping re even decreases the performance. Interestingly, as shown in Table 4, we find that re alone is sufficient for preventing collapse and achieves comparable performance as Ge. This can be explained from the perspective of dimensional de-correlation, which will be discussed in Sec. 3.5.
3.5 DIMENSIONAL DE-CORRELATION HELPS PREVENT COLLAPSE
Conjecture2 and motivation. We conjecture that dimensional de-correlation increases m_r and thereby helps prevent collapse. The motivation is straightforward. The dimensional correlation would be minimal if, for every individual class, only a single dimension had a very high value and that dimension changed across classes. In the other extreme case, all dimensions have the same values, which is equivalent to having a single dimension and thus already constitutes a collapse in the sense of losing representation capacity. Conceptually, r_e has no direct influence on the center vector, thus we interpret that r_e prevents collapse through increasing m_r.
To verify the above conjecture, we train SimSiam normally with the loss in Eq 2 and train for several epochs with the loss in Eq 1 for intentionally decreasing the mr to close to zero. Then, we train the loss with only a correlation regularization term, which is detailed in Appendix A.6. The results in Fig. 5 (b) show that this regularization term increases mr at a very fast rate.
Dimensional de-correlation in SimSiam. Assuming h has only a single FC layer, to exclude the influence of o_e, the weights in the FC layer are expected to learn the correlation between different dimensions of the encoder output. This interpretation echoes the finding that the eigenspace of the h weight matrix aligns well with that of the correlation matrix (Tian et al., 2021). In essence, h is trained to minimize the cosine similarity between h(z_a) and I(z_b), where I is the identity mapping. Thus, an h that has learned the correlation is optimized to be close to I, which is conceptually equivalent to optimizing with the goal of de-correlating Z. As shown in Table 4, for SimSiam, r_e alone also prevents collapse, which
is attributed to the de-correlation effect since re has no de-centering effect. We observe from Fig. 6 that except in the first few epochs, SimSiam decreases the covariance during the whole training. Fig. 6 also reports the results for InfoNCE which will be discussed in Sec. 4.
4 TOWARDS A UNIFIED UNDERSTANDING OF RECENT PROGRESS IN SSL
De-centering and de-correlation in InfoNCE. The InfoNCE loss is a default choice in multiple seminal contrastive learning frameworks (Sohn, 2016; Wu et al., 2018; Oord et al., 2018; Wang & Liu, 2021). The derived negative gradient of InfoNCE on Z_a is proportional to Z_b − ∑_{i=0}^{N} λ_i Z_i, where λ_i = exp(Z_a·Z_i/τ) / ∑_{j=0}^{N} exp(Z_a·Z_j/τ), and Z_0 = Z_b for notation simplicity. See Appendix A.7 for the detailed derivation. The extra gradient component G_e = −∑_{i=0}^{N} λ_i Z_i = −o_z − ∑_{i=0}^{N} λ_i r_i, for which
oe = −oz and re = −∑_{i=0}^{N} λi ri. Clearly, oe contains negative oz as de-centering for avoiding collapse, which is equivalent to the toy example in Sec. 3.3 when re is removed. Regarding re, the main difference between Ltri in the toy example and InfoNCE is that the latter exploits a batch of negative samples instead of a random one. λi is proportional to exp(Za·Zi/τ), indicating that a large weight is put on a negative sample when it is more similar to the anchor Za, for which, intuitively, its dimensional values tend to have a high correlation with those of Za. Thus, re, which contains such negative representations with high weights, tends to decrease dimensional correlation. To verify this intuition, we measure the cosine similarity between re and the gradient on Za induced by a correlation regularization loss. The results in Fig. 5 (c) show that their gradient similarity is high for a wide range of temperature values, especially when τ is around 0.1 or 0.2, suggesting re achieves a similar role to an explicit regularization loss for performing de-correlation. Replacing re with oe leads to a low cosine similarity, which is expected because oe has no de-correlation effect.
The results of InfoNCE in Fig. 6 resemble those of SimSiam in terms of the overall trend. For example, InfoNCE also decreases the covariance value during training. Moreover, we also report the results of InfoNCE where re is removed to exclude the de-correlation effect. Removing re from the InfoNCE loss leads to a high covariance value during the whole training. Removing re also leads to a significant performance drop, which echoes the finding in (Bardes et al., 2021) that dimensional de-correlation is essential for competitive performance. Regarding how re in InfoNCE achieves de-correlation, formally, we hypothesize that the de-correlation effect in InfoNCE arises from the biased weights (λi) on negative samples. This hypothesis is corroborated by the temperature analysis in Fig. 7. We find that a higher temperature makes the weight distribution of λi more balanced, indicated by a higher entropy of λi, which echoes the finding in (Wang & Liu, 2021). Moreover, we observe that a higher temperature also tends to increase the covariance value. Overall, with temperature as the control variable, we find that more balanced weights among negative samples decrease the de-correlation effect, which constitutes evidence for our hypothesis.
Unifying SimSiam and InfoNCE. At first sight, there is no conceptual similarity between SimSiam and InfoNCE, and this is why the community is intrigued by the success of SimSiam without negative samples. Through decomposing the Ge into oe and re, we find that for both, their oe plays the role of de-centering and their re behaves like de-correlation. In this sense, we bring two seemingly irrelevant frameworks into a unified perspective with disentangled de-centering and de-correlation.
Beyond SimSiam and InfoNCE. In SSL, there is a trend of performing explicit manipulation of de-centering and de-correlation, for which W-MSE (Ermolov et al., 2021), Barlow-twins (Zbontar et al., 2021), DINO (Caron et al., 2021) are three representative works. They often achieve performance comparable to those with InfoNCE or SimSiam. Towards a unified understanding of recent progress in SSL, our work is most similar to a concurrent work (Bardes et al., 2021). Their work is mainly inspired by Barlow-twins (Zbontar et al., 2021) but decomposes its loss into three explicit components. By contrast, our work is motivated to answer the question of how SimSiam prevents
collapse without negative samples. Their work claims that the variance component (equivalent to de-centering) is an indispensable component for preventing collapse, while we find that de-correlation by itself also alleviates collapse. Overall, our work helps understand various frameworks in SSL from a unified perspective, which also inspires an investigation of inter-anchor hardness-awareness (Zhang et al., 2022) for further bridging the gap between CL and non-CL frameworks in SSL.
5 TOWARDS SIMPLIFYING THE PREDICTOR IN SIMSIAM
Based on our understanding of how SimSiam prevents collapse, we demonstrate that simple components (instead of a non-linear MLP in SimSiam) in the predictor are sufficient for preventing collapse. For example, to achieve dimensional de-correlation, a single FC layer might be sufficient because a single FC layer can realize the interaction among various dimensions. On the other hand, to achieve de-centering, a single bias layer might be sufficient because a bias vector can represent the center vector. Attaching an l2-normalization layer at the end of the encoder, i.e. before the predictor, is found to be critical for achieving the above goal.
Predictor with FC layers. To learn the dimensional correlation, a single FC layer is theoretically sufficient but can be difficult to train in practice. Inspired by the observation that multiple FC layers make training more stable even though they can be mathematically equivalent to a single FC layer (Bell-Kligler et al., 2019), we adopt two consecutive FC layers, which is equivalent to removing the BN and ReLU in the original predictor.
The training can be made more stable if a Tanh layer is applied on the adopted single FC after every iteration. Table 5 shows that they achieve performance comparable to that with a non-linear MLP.
Predictor with a bias layer. A predictor with a single bias layer can be utilized for preventing collapse (see Table 5) and the trained bias vector is found to have a cosine similarity of 0.99 with the center vector (see Table 6). A bias in the MLP predictor also has a high cosine similarity of 0.89, suggesting that it is not a coincidence. A theoretical derivation for justifying such a
high similarity as well as how this single bias layer prevents collapse are discussed in Appendix A.8.
6 CONCLUSION
We point out a hidden flaw in prior works' explanation of the success of SimSiam and propose to decompose the representation vector and analyze the decomposed components of the extra gradient. We find that its center vector gradient helps prevent collapse via the de-centering effect and its residual gradient achieves de-correlation, which also alleviates collapse. Our further analysis reveals that InfoNCE achieves the two effects in a similar manner, which bridges the gap between SimSiam and InfoNCE and contributes to a unified understanding of recent progress in SSL. Towards simplifying the predictor, we have also found that a single bias layer is sufficient for preventing collapse.
ACKNOWLEDGEMENT
This work was partly supported by Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under grant No.2019-001396 (Development of framework for analyzing, detecting, mitigating of bias in AI model and training data), No.2021-0-01381 (Development of Causal AI through Video Understanding and Reinforcement Learning, and Its Applications to Real Environments) and No.2021-0-02068 (Artificial Intelligence Innovation Hub). During the rebuttal, multiple anonymous reviewers provide valuable advice to significantly improve the quality of this work. Thank you all.
A APPENDIX
A.1 EXPERIMENTAL SETTINGS
Self-supervised encoder training: Below are the settings for self-supervised encoder training. For simplicity, we mainly use the default settings in a popular open library termed solo-learn (da Costa et al., 2021).
Data augmentation and normalization: We use a series of transformations including RandomResizedCrop with scale [0.2, 1.0] and bicubic interpolation. ColorJitter (brightness 0.4, contrast 0.4, saturation 0.4, hue 0.1) is randomly applied with probability 0.8. RandomGrayscale is applied with p = 0.2, and horizontal flip with p = 0.5. The images are normalized with mean (0.4914, 0.4822, 0.4465) and std (0.247, 0.243, 0.261).
Network architecture and initialization: The backbone is ResNet-18. The projection head contains three fully-connected (FC) layers, each followed by Batch Norm (BN) and ReLU, with the ReLU after the final FC layer removed, i.e. FC1+BN+ReLU+FC2+BN+ReLU+FC3+BN. All projection FC layers use 2048 neurons for the input, hidden, and output dimensions. The predictor head includes two FC layers: FC1+BN+ReLU+FC2. The input and output of the predictor both have dimension 2048, while the hidden dimension is 512. All layers use the default PyTorch initialization.
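The projection and prediction heads described above can be sketched in PyTorch as follows. We assume the first projector FC maps the 512-d ResNet-18 feature to 2048-d (as in solo-learn); the backbone itself and its CIFAR-style stem are omitted.

import torch.nn as nn

feat_dim, proj_dim, pred_hidden = 512, 2048, 512   # ResNet-18 feature dim is 512 (assumed projector input)

projector = nn.Sequential(                          # FC1+BN+ReLU + FC2+BN+ReLU + FC3+BN
    nn.Linear(feat_dim, proj_dim), nn.BatchNorm1d(proj_dim), nn.ReLU(inplace=True),
    nn.Linear(proj_dim, proj_dim), nn.BatchNorm1d(proj_dim), nn.ReLU(inplace=True),
    nn.Linear(proj_dim, proj_dim), nn.BatchNorm1d(proj_dim),
)

predictor = nn.Sequential(                          # FC1+BN+ReLU+FC2, 2048 -> 512 -> 2048
    nn.Linear(proj_dim, pred_hidden), nn.BatchNorm1d(pred_hidden), nn.ReLU(inplace=True),
    nn.Linear(pred_hidden, proj_dim),
)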
Optimizer: The SGD optimizer is used for encoder training. The batch size M is 256 and the learning rate is linearly scaled as lr × M/256, with the base learning rate lr set to 0.5. The learning rate follows a cosine decay schedule, as in SimSiam. Momentum 0.9 and weight decay 1.0 × 10−5 are used for SGD. We use one GPU for each pre-training experiment. Following SimSiam, the learning rate of the predictor is fixed during training. We warm up the learning rate for the first 10 epochs. Unless otherwise specified, we train the model for 1000 epochs.
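A sketch of this optimizer setup is given below, with `encoder` and `predictor` assumed to be already-constructed modules; the `fix_lr` flag on the predictor's parameter group mirrors the SimSiam practice of keeping the predictor learning rate fixed.

import math
import torch

base_lr, batch_size, max_epochs, warmup_epochs = 0.5, 256, 1000, 10
lr = base_lr * batch_size / 256                      # linear scaling rule

optimizer = torch.optim.SGD(
    [{"params": encoder.parameters()},
     {"params": predictor.parameters(), "fix_lr": True}],   # predictor lr stays fixed
    lr=lr, momentum=0.9, weight_decay=1e-5)

def adjust_learning_rate(optimizer, epoch):
    if epoch < warmup_epochs:                        # linear warmup
        cur_lr = lr * (epoch + 1) / warmup_epochs
    else:                                            # cosine decay
        t = (epoch - warmup_epochs) / (max_epochs - warmup_epochs)
        cur_lr = 0.5 * lr * (1.0 + math.cos(math.pi * t))
    for group in optimizer.param_groups:
        if not group.get("fix_lr", False):
            group["lr"] = cur_lr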
Online linear evaluation: For the online linear evaluation, we also follow the practice in the solo-learn library (da Costa et al., 2021). The frozen features (2048 dimensions) of the training set are extracted from the self-supervised pre-trained model and fed into a linear classifier (one FC layer with 2048 inputs and 100 outputs). Evaluation is performed on the validation set. The learning rate for the linear classifier is 0.1. Throughout this work, we report Top-1 accuracy under this online linear evaluation.
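A minimal sketch of the linear probe is shown below; `probe_loader`, which yields frozen 2048-d features and labels, is assumed to exist.

import torch
import torch.nn as nn

classifier = nn.Linear(2048, 100)                    # 2048-d frozen features -> 100 classes
probe_opt = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for feats, labels in probe_loader:                   # frozen features from the pre-trained encoder
    logits = classifier(feats.detach())              # the pre-trained encoder is not updated
    loss = criterion(logits, labels)
    probe_opt.zero_grad()
    loss.backward()
    probe_opt.step()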
A.2 TWO SUB-PROBLEMS IN AO OF SIMSIAM
In the sub-problem $\eta^t \leftarrow \arg\min_{\eta} \mathcal{L}(\theta^t, \eta)$, $\eta^t$, denoting the latent representation of images at step $t$, is actually obtained through $\eta^t_x \leftarrow \mathbb{E}_T\big[\mathcal{F}_{\theta^t}(T(x))\big]$, where in practice $\mathbb{E}_T[\cdot]$ is ignored and only one augmentation $T'$ is sampled, i.e. $\eta^t_x \leftarrow \mathcal{F}_{\theta^t}(T'(x))$. Conceptually, Chen & He equate the role of the predictor to EOA.
A.3 EXPERIMENTAL DETAILS FOR EXPLICIT EOA IN TABLE 1
In the Moving average experiment, we follow the setting in SimSiam (Chen & He, 2021) without the predictor. In the Same batch experiment, multiple augmentations, e.g. 10, are applied to the same image. With these augmentations, we obtain the corresponding encoded representations $z_i$, $i \in [1, 10]$. We minimize the cosine distance between the first representation $z_1$ and the average of the remaining vectors, i.e. $\bar{z} = \frac{1}{9}\sum_{i=2}^{10} z_i$, with the gradient stop placed on the averaged vector. We also experimented with letting the gradient flow back through more augmentations, but this consistently led to collapse.
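A sketch of the Same batch loss is given below; `f` (the encoder) and `aug` (the augmentation function) are assumed to be defined, and `n_aug = 10` matches the setting above.

import torch
import torch.nn.functional as F

def same_batch_eoa_loss(f, aug, x, n_aug=10):
    """Explicit EOA within one batch: pull z_1 towards the stop-gradient average of the
    remaining augmentations."""
    zs = [f(aug(x)) for _ in range(n_aug)]
    z1 = F.normalize(zs[0], dim=1)
    z_bar = torch.stack(zs[1:]).mean(dim=0)
    z_bar = F.normalize(z_bar, dim=1).detach()       # stop gradient on the averaged vector
    return -(z1 * z_bar).sum(dim=1).mean()           # negative cosine similarity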
A.4 EXPERIMENTAL SETUP AND RESULT TREND FOR TABLE 2.
Mirror SimSiam. Here we provide the pseudocode for the Mirror SimSiam experiment, which relates to Fig. 1 (c). Without the symmetric loss, the pseudocode is shown in Algorithm 1; with the symmetric loss, it is shown in Algorithm 2.
Algorithm 1 Pytorch-like Pseudocode: Mirror SimSiam
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                          # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)             # augmentation
    z_a, z_b = f(x_a), f(x_b)             # projections
    p_b = h(z_b.detach())                 # detach z_b but still allow gradient through p_b
    L = D_cosine(z_a, p_b)                # loss
    L.backward()                          # back-propagate
    update(f, h)                          # SGD update

def D_cosine(z, p):                       # negative cosine similarity
    z = normalize(z, dim=1)               # l2-normalize
    p = normalize(p, dim=1)               # l2-normalize
    return -(z * p).sum(dim=1).mean()
Algorithm 2 Pytorch-like Pseudocode: Mirror SimSiam (with symmetric loss)
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                          # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)             # augmentation
    z_a, z_b = f(x_a), f(x_b)             # projections
    p_b = h(z_b.detach())                 # detach z_b but still allow gradient through p_b
    p_a = h(z_a.detach())                 # detach z_a but still allow gradient through p_a
    L = D_cosine(z_a, p_b)/2 + D_cosine(z_b, p_a)/2   # symmetric loss
    L.backward()                          # back-propagate
    update(f, h)                          # SGD update

def D_cosine(z, p):                       # negative cosine similarity
    z = normalize(z, dim=1)               # l2-normalize
    p = normalize(p, dim=1)               # l2-normalize
    return -(z * p).sum(dim=1).mean()
Symmetric Predictor. To implement SimSiam with a Symmetric Predictor as in Fig. 2 (b), we can simply treat the predictor as part of the new encoder, for which the pseudocode is provided in Algorithm 3. Alternatively, we can additionally train the predictor in the same way as in SimSiam, in which case the training involves two losses, one for training the predictor and another for training the new encoder (the corresponding pseudocode is provided in Algorithm 4). Moreover, for the second implementation, we also experiment with a variant that fixes the predictor while optimizing the new encoder and then trains the predictor in an alternating manner. All of these lead to collapse with a similar trend as long as the symmetric predictor is used for training the encoder. To avoid redundancy, Fig. 8 only reports the result of the second implementation.
Result trend. The result trends of SimSiam, Naive Siamese, Mirror SimSiam, and Symmetric Predictor are shown in Fig. 8. We observe that all architectures lead to collapse except SimSiam. Mirror SimSiam was stopped partway through training because the loss returned a NaN value.
A.5 EXPERIMENTAL DETAILS FOR INVERSE PREDICTOR.
In the inverse predictor experiment, which relates to Fig. 2 (c), we introduce a new predictor with the same structure as the original predictor. The training loss consists of three parts: the predictor training loss, the inverse predictor training loss, and the new encoder (old encoder + predictor) training loss. The new
Algorithm 3 Pytorch-like Pseudocode: Symmetric Predictor
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                          # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)             # augmentation
    z_a, z_b = f(x_a), f(x_b)             # projections
    p_a, p_b = h(z_a), h(z_b)             # predictions
    L = D(p_a, p_b)/2 + D(p_b, p_a)/2     # symmetric loss
    L.backward()                          # back-propagate
    update(f, h)                          # SGD update

def D(p, z):                              # negative cosine similarity
    z = z.detach()                        # stop gradient
    p = normalize(p, dim=1)               # l2-normalize
    z = normalize(z, dim=1)               # l2-normalize
    return -(p * z).sum(dim=1).mean()
Algorithm 4 Pytorch-like Pseudocode: Symmetric Predictor (with additional training on predictor)
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                          # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)             # augmentation
    z_a, z_b = f(x_a), f(x_b)             # projections
    p_a, p_b = h(z_a), h(z_b)             # predictions
    d_p_a, d_p_b = h(z_a.detach()), h(z_b.detach())   # detached predictor output

    # predictor training loss
    L_pred = D(d_p_a, z_b)/2 + D(d_p_b, z_a)/2
    # encoder training loss
    L_enc = D(p_a, d_p_b)/2 + D(p_b, d_p_a)/2

    L = L_pred + L_enc
    L.backward()                          # back-propagate
    update(f, h)                          # SGD update

def D(p, z):                              # negative cosine similarity with detach on z
    z = z.detach()                        # stop gradient
    p = normalize(p, dim=1)               # l2-normalize
    z = normalize(z, dim=1)               # l2-normalize
    return -(p * z).sum(dim=1).mean()
encoder F consists of the old encoder f + predictor h. The practice of gradient stop needs to be considered in the implementation. We provide the pseudocode in Algorithm 5.
Algorithm 5 Pytorch-like Pseudocode: Trainable Inverse Predictor
# f: encoder (backbone + projector)
# h: predictor
# h_inv: inverse predictor

for x in loader:                          # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)             # augmentation
    z_a, z_b = f(x_a), f(x_b)             # projections
    p_a, p_b = h(z_a), h(z_b)             # predictions
    d_p_a, d_p_b = h(z_a.detach()), h(z_b.detach())   # detached predictor output

    # predictor training loss (to train h)
    L_pred = D(d_p_a, z_b)/2 + D(d_p_b, z_a)/2

    # inverse predictor training loss (to train h_inv)
    inv_p_a, inv_p_b = h_inv(p_a.detach()), h_inv(p_b.detach())
    L_inv_pred = D(inv_p_a, z_a)/2 + D(inv_p_b, z_b)/2

    # encoder training loss
    L_enc = D(p_a, h_inv(p_b))/2 + D(p_b, h_inv(p_a))/2

    L = L_pred + L_inv_pred + L_enc
    L.backward()                          # back-propagate
    update(f, h, h_inv)                   # SGD update

def D(p, z):                              # negative cosine similarity with detach on z
    z = z.detach()                        # stop gradient
    p = normalize(p, dim=1)               # l2-normalize
    z = normalize(z, dim=1)               # l2-normalize
    return -(p * z).sum(dim=1).mean()
A.6 REGULARIZATION LOSS
Following Zbontar et al. (2021), we compute the covariance regularization loss of the encoder output over the mini-batch. The pseudocode for the de-correlation loss calculation is given in Algorithm 6.
Algorithm 6 Pytorch-like Pseudocode: De-correlation loss
# Z_a: representation vectors (N x D)
# N: batch size
# D: number of dimensions of the representation vector

Z_a = Z_a - Z_a.mean(dim=0)               # center along the batch
cov = Z_a.T @ Z_a / (N - 1)               # covariance matrix (D x D)
diag = torch.eye(D)
loss = cov[~diag.bool()].pow_(2).sum() / D   # penalize off-diagonal covariance
A.7 GRADIENT DERIVATION AND TEMPERATURE ANALYSIS FOR INFONCE
With · indicating the cosine similarity between vectors, the InfoNCE loss can be expressed as
$$\mathcal{L}_{InfoNCE} = -\log\frac{\exp(Z_a \cdot Z_b/\tau)}{\exp(Z_a \cdot Z_b/\tau) + \sum_{i=1}^{N}\exp(Z_a \cdot Z_i/\tau)} = -\log\frac{\exp(Z_a \cdot Z_b/\tau)}{\sum_{i=0}^{N}\exp(Z_a \cdot Z_i/\tau)}, \tag{5}$$
where $N$ indicates the number of negative samples and $Z_0 = Z_b$ to simplify the notation. By treating $Z_a \cdot Z_i$ as the logit in a standard CE loss, the corresponding probability for each sample is $\lambda_i = \frac{\exp(Z_a \cdot Z_i/\tau)}{\sum_{i=0}^{N}\exp(Z_a \cdot Z_i/\tau)}$, where $i = 0, 1, 2, \ldots, N$ and $\sum_{i=0}^{N}\lambda_i = 1$.
The negative gradient of InfoNCE on the representation $Z_a$ is
$$
\begin{aligned}
-\frac{\partial \mathcal{L}_{InfoNCE}}{\partial Z_a}
&= \frac{1}{\tau}(1-\lambda_0)Z_b - \frac{1}{\tau}\sum_{i=1}^{N}\lambda_i Z_i \\
&= \frac{1}{\tau}\Big(Z_b - \sum_{i=0}^{N}\lambda_i Z_i\Big) \\
&= \frac{1}{\tau}\Big(Z_b - \sum_{i=0}^{N}\lambda_i (o_z + r_i)\Big) \\
&= \frac{1}{\tau}\Big(Z_b + \big(-o_z - \sum_{i=0}^{N}\lambda_i r_i\big)\Big) \\
&\propto Z_b + \big(-o_z - \sum_{i=0}^{N}\lambda_i r_i\big),
\end{aligned} \tag{6}
$$
where $\frac{1}{\tau}$ can be absorbed into the learning rate and is omitted to simplify the discussion. With $Z_b$ as the basic gradient, $G_e = -o_z - \sum_{i=0}^{N}\lambda_i r_i$, for which $o_e = -o_z$ and $r_e = -\sum_{i=0}^{N}\lambda_i r_i$. When the temperature is set to a large value, $\lambda_i = \frac{\exp(Z_a \cdot Z_i/\tau)}{\sum_{i=0}^{N}\exp(Z_a \cdot Z_i/\tau)}$ approaches $\frac{1}{N+1}$, indicated by a high entropy value (see Fig. 7). InfoNCE then degenerates to a simple contrastive loss, i.e. $\mathcal{L}_{simple} = -Z_a \cdot Z_b + \frac{1}{N+1}\sum_{i=0}^{N} Z_a \cdot Z_i$, which repulses every negative sample with equal force. In contrast, a relatively smaller temperature gives more relative weight, i.e. larger $\lambda_i$, to negative samples that are more similar to the anchor $Z_a$.
The influence of the temperature on the covariance and accuracy is shown in Fig. 7 (b) and (c). We observe that a higher temperature tends to decrease the effect of de-correlation, indicated by a higher covariance value, which also leads to a performance drop. This verifies our hypothesis about how re in InfoNCE achieves de-correlation, because a large temperature causes more balanced weights λi, which weakens the de-correlation effect. For the setup, we note that the encoder is trained for 200 epochs with the default setting in solo-learn for the SimCLR framework.
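As a concrete illustration of the decomposition in Eq 6, the sketch below computes the weights λi and the two extra gradient components oe and re for a single anchor; the function name and the use of the positive-plus-negatives batch to estimate the center vector oz are our simplifications.

import torch
import torch.nn.functional as F

def infonce_extra_gradient(Z_a, Z_b, Z_neg, tau=0.1):
    """Decompose the InfoNCE extra gradient on one anchor into o_e and r_e (Eq 6).
    Z_a, Z_b: (D,) l2-normalized anchor / positive; Z_neg: (N, D) l2-normalized negatives."""
    Z_all = torch.cat([Z_b.unsqueeze(0), Z_neg], dim=0)   # Z_0 = Z_b, Z_1..Z_N are negatives
    logits = Z_all @ Z_a / tau
    lam = F.softmax(logits, dim=0)                         # lambda_i, sums to 1
    o_z = Z_all.mean(dim=0)                                # center vector (batch estimate)
    r = Z_all - o_z                                        # residual vectors
    o_e = -o_z                                             # de-centering component
    r_e = -(lam.unsqueeze(1) * r).sum(dim=0)               # de-correlation component
    return o_e, r_e, lam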
A.8 THEORETICAL DERIVATION FOR A SINGLE BIAS LAYER
With the cosine similarity defined as
$$\mathrm{cossim}(a, b) = \frac{a \cdot b}{\|a\|\,\|b\|}, \tag{7}$$
the derived gradient on the vector $a$ is
$$\frac{\partial}{\partial a}\,\mathrm{cossim}(a, b) = \frac{b}{\|a\|\,\|b\|} - \mathrm{cossim}(a, b)\cdot\frac{a}{\|a\|^{2}}. \tag{8}$$
The above equation serves as the basis for the following derivations. As indicated in the main manuscript, the encoder output $z_a$ is l2-normalized before being fed into the predictor, thus $p_a = Z_a + b_p$, where $b_p$ denotes the bias layer in the predictor. The cosine similarity loss (ignoring the symmetric term for simplicity) is
$$\mathcal{L}_{cosine} = -P_a \cdot Z_b = -\frac{p_a}{\|p_a\|} \cdot \frac{z_b}{\|z_b\|}. \tag{9}$$
The gradient on $p_a$ is derived as
$$
\begin{aligned}
-\frac{\partial \mathcal{L}_{cosine}}{\partial p_a}
&= \frac{z_b}{\|z_b\|\,\|p_a\|} - \mathrm{cossim}(Z_a, Z_b)\cdot\frac{p_a}{\|p_a\|^{2}} \\
&= \frac{1}{\|p_a\|}\Big(Z_b - \mathrm{cossim}(Z_a, Z_b)\cdot P_a\Big) \\
&= \frac{1}{\|p_a\|}\Big(Z_b - \mathrm{cossim}(Z_a, Z_b)\cdot\frac{Z_a + b_p}{\|p_a\|}\Big) \\
&= \frac{1}{\|p_a\|}\Big((o_z + r_b) - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\,(o_z + r_a + b_p)\Big) \\
&= \frac{1}{\|p_a\|}\Big((o_z + r_b) - m\,(o_z + r_a + b_p)\Big) \\
&= \frac{1}{\|p_a\|}\Big((1-m)\,o_z - m\,b_p + r_b - m\,r_a\Big),
\end{aligned} \tag{10}
$$
where $m = \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}$.
Given that $p_a = Z_a + b_p$, the negative gradient on $b_p$ is the same as that on $p_a$:
$$-\frac{\partial \mathcal{L}_{cosine}}{\partial b_p} = -\frac{\partial \mathcal{L}_{cosine}}{\partial p_a} = \frac{1}{\|p_a\|}\big((1-m)\,o_z - m\,b_p + r_b - m\,r_a\big). \tag{11}$$
We assume that the training is stable and the bias layer converges to a certain value when $-\frac{\partial\,\mathrm{cossim}(Z_a, Z_b)}{\partial b_p} = 0$. Thus, the converged $b_p$ satisfies the following constraint:
$$\frac{1}{\|p_a\|}\big((1-m)\,o_z - m\,b_p + r_b - m\,r_a\big) = 0 \;\Longrightarrow\; b_p = \frac{1-m}{m}\,o_z + \frac{1}{m}\,r_b - r_a. \tag{12}$$
With a batch of samples, the averages of $\frac{1}{m}r_b$ and $r_a$ are expected to be close to 0 by the definition of the residual vector. Thus, the bias layer vector is expected to converge to
$$b_p = \frac{1-m}{m}\,o_z. \tag{13}$$
Rationale behind the high similarity between $b_p$ and $o_z$. The above derivation shows that the parameters of the bias layer are expected to converge to the vector $\frac{1-m}{m}o_z$. This justifies why the empirically observed cosine similarity between $b_p$ and $o_z$ is as high as 0.99. Ideally it should be 1; such a small deviation is expected once the training dynamics are taken into account.
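The similarity reported in Table 6 can be measured as sketched below: estimate the center vector oz of the l2-normalized encoder outputs over the data and compare it with the trained bias vector. `encoder` and `loader` are assumed to be defined, and the function name is ours.

import torch
import torch.nn.functional as F

@torch.no_grad()
def bias_center_similarity(predictor_bias, encoder, loader):
    """Cosine similarity between the trained bias vector b_p and the center vector o_z."""
    zs = []
    for x, _ in loader:
        zs.append(F.normalize(encoder(x), dim=1))    # l2-normalized encoder outputs
    o_z = torch.cat(zs).mean(dim=0)                  # estimate of the center vector
    return F.cosine_similarity(predictor_bias, o_z, dim=0).item()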
Rationale behind how a single bias layer prevents collapse. Given that $p_a = Z_a + b_p$, the negative gradient on $Z_a$ is
$$
\begin{aligned}
-\frac{\partial \mathcal{L}_{cosine}}{\partial Z_a} = -\frac{\partial \mathcal{L}_{cosine}}{\partial p_a}
&= \frac{1}{\|p_a\|}\Big(Z_b - \mathrm{cossim}(Z_a, Z_b)\cdot\frac{Z_a + b_p}{\|p_a\|}\Big) \\
&= \frac{1}{\|p_a\|}\,Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^{2}}\,Z_a - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^{2}}\,b_p.
\end{aligned} \tag{14}
$$
Here, we highlight that since the loss $-Z_a \cdot Z_a = -1$ is a constant with zero gradient on the encoder, the term $-\frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^{2}}Z_a$ can be seen as a dummy term. Considering Eq 13 and $m = \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}$, we have $b_p = \big(\frac{\|p_a\|}{\mathrm{cossim}(Z_a, Z_b)} - 1\big)o_z$. The above equation is then equivalent to
$$
\begin{aligned}
-\frac{\partial \mathcal{L}_{cosine}}{\partial Z_a}
&= \frac{1}{\|p_a\|}\,Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^{2}}\,b_p \\
&= \frac{1}{\|p_a\|}\,Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^{2}}\Big(\frac{\|p_a\|}{\mathrm{cossim}(Z_a, Z_b)} - 1\Big)o_z \\
&= \frac{1}{\|p_a\|}\,Z_b - \frac{1}{\|p_a\|}\Big(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\Big)o_z \\
&\propto Z_b - \Big(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\Big)o_z.
\end{aligned} \tag{15}
$$
With $Z_b$ as the basic gradient, the extra gradient component is $G_e = -\big(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\big)o_z$. Given that $p_a = Z_a + b_p$ and $\|Z_a\| = 1$, $\|p_a\| < 1$ only when $Z_a$ is negatively correlated with $b_p$. In practice, however, $Z_a$ and $b_p$ are often positively correlated to some extent due to their shared center vector component; in other words, $\|p_a\| > 1$. Moreover, $\mathrm{cossim}(Z_a, Z_b)$ is smaller than 1, thus $-\big(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\big) < 0$, suggesting that $G_e$ contains a negative multiple of $o_z$ and hence has a de-centering effect. The above derivation justifies why a single bias layer can help alleviate collapse.
B DISCUSSION: DOES BN HELP AVOID COLLAPSE?
To our knowledge, our work is the first to revisit and refute the explanatory claims in (Chen & He, 2021). Several works, however, have attempted to demystify the success of BYOL (Grill et al., 2020), a close variant of SimSiam. Its success has been ascribed to BN in (Fetterman & Albrecht, 2020); however, (Richemond et al., 2020) refutes this claim. Since the role of the intermediate BNs is ascribed to stabilizing training (Richemond et al., 2020; Chen & He, 2021), we only discuss the final BN in the SimSiam encoder. Note that, under our Conjecture1, the final BN, which removes the mean of the representation vector, is supposed to have a de-centering effect. By default, SimSiam has such a BN at the end of its encoder, yet the model still collapses when the predictor and stop gradient are removed. Why does such a BN not prevent collapse in this case? Interestingly, we observe that this BN can help alleviate collapse with a simple MSE loss (see Fig. 9); however, its performance is inferior to the cosine loss-based SimSiam (with predictor and stop gradient) due to the lack of the de-correlation effect present in SimSiam. Note that the cosine loss is in essence equivalent to an MSE loss on the l2-normalized vectors. This phenomenon can be interpreted as the l2-normalization introducing another mean after the BN has removed it. Thus, with such l2-normalization in the MSE loss, i.e. adopting the default cosine loss, it is important to remove oe from the optimization target. The results with the loss −Za · sg(Zb + oe) in Table 3 show that this indeed prevents collapse, verifying the above interpretation.
1. What is the main contribution of the paper regarding SimSiam and SSLs?
2. What are the strengths and weaknesses of the proposed framework?
3. Do you have any concerns or questions about the hidden flaw in AO of SimSiam?
4. How does the author explain the de-centering and de-correlation effects?
5. Are there any experimental results missing to support the statements made in the paper? If so, which ones?
6. How does the inverse predictor differ from the new predictor?
7. What is the significance of processing the optimized target with h−1?
8. Can you provide more details or explanations for some parts of the paper, such as subsections 3.1, 3.3, and 3.4?
9. Why do mo and mr need to be opposite to each other?
Summary Of The Paper
This paper proposes a framework to understand why SimSiam avoids collapse without negative samples. It identifies a hidden flaw in the Alternating Optimization explanation of why SimSiam works, and the authors claim that the center vector gradient has the de-centering effect while the residual gradient vector has the de-correlation effect.
Review
It's interesting to see a framework for a unified understanding of SSL methods such as SimSiam, MoCo, SimCLR, etc.
The hidden flaw identified in the AO explanation of SimSiam seems to be correct, which is interesting.
The paper lacks experimental results/details to support its statements.
a. In subsec. "Symmetric Predictor does not prevent collapse", the authors state "The results in Fig. 3 (b) show that it still leads to collapse" which is related to symmetric predictors in SimSiam. However, Fig. 3 (b) is actually about the basic SimSiam and SimSiam + Inverse predictor.
b. In subsec. "Predictor with stop gradient is asymmetric", the authors state "the SimSiam avoids collapse by excluding Mirror SimSiam (Fig1 (c)) which has a loss (mirror-like Eq 2) as shown as eq. 2". There is no experimental evidence to show whether Mirror SimSiam leads to collapse. If Mirror SimSiam experimentally works, the statement does not hold.
c. How did you design the inverse predictor? Can we just see it as the new predictor while the previous predictor is included in the projector part?
d. In subsec. "Predictor vs. inverse predictor", "we interpret Fig2 (b) differs from Fig1 (a) as changing the optimized target from p to z, i.e. h−1(p), suggesting processing the optimized target with h−1 helps prevent collapse." Again, no experimental evidence.
e. in subsec. "Trainable h−1 and its implication on EOA", "we optimize h−1 by optimizing the pa approaching z∗b while simultaneously optimizing z∗b to zb via cosine loss, where z∗b is the h−1 output. The results proves that the model with h−1 (Fig2 (c)) is equivalent to SimSiam since it achieves comparable performance as the original SimSiam that directly optimizes pa approaching zb.". Where are the "results"?
The paper might have flaws in its proposed math, and Sections 3.1, 3.3, and 3.4 are sometimes hard to follow.
In subsec "Competition between o and r.", why "mo and mr is expected to be opposite of each other". The denominators of the two terms are both ||z||. So only ||o_z|| and ||r_a|| define the values. However, r_a = Z_a − o_z. Thus r_a and o_z can have the same norms but with different directions? Why they need to be on the opposite directions? |
Based on our understanding of how SimSiam prevents collapse, we demonstrate that simple components (instead of a non-linear MLP in SimSiam) in the predictor are sufficient for preventing collapse. For example, to achieve dimensional de-correlation, a single FC layer might be sufficient because a single FC layer can realize the interaction among various dimensions. On the other hand, to achieve de-centering, a single bias layer might be sufficient because a bias vector can represent the center vector. Attaching an l2-normalization layer at the end of the encoder, i.e. before the predictor, is found to be critical for achieving the above goal.
Pridictor with FC layers. To learn the dimensional correlation, an FC layer is sufficient theoretically but can be difficult to train in practice. Inspired by the property that Multiple FC layers make the training more stable even though they can be mathematically equivalent to a single FC layer (Bell-Kligler et al., 2019), we adopt two consecutive FC layers which are equivalent to removing the BN and ReLU in the original predictor.
The training can be made more stable if a Tanh layer is applied on the adopted single FC after every iteration. Table 5 shows that they achieve performance comparable to that with a non-linear MLP.
Predictor with a bias layer. A predictor with a single bias layer can be utilized for preventing collapse (see Table 5) and the trained bias vector is found to have a cosine similarity of 0.99 with the center vector (see Table 6). A bias in the MLP predictor also has a high cosine similarity of 0.89, suggesting that it is not a coincidence. A theoretical derivation for justifying such a
high similarity as well as how this single bias layer prevents collapse are discussed in Appendix A.8.
6 CONCLUSION
We point out a hidden flaw in prior works for explaining the success of SimSiam and propose to decompose the representation vector and analyze the decomposed components of extra gradient. We find that its center vector gradient helps prevent collapse via the de-centering effect and its residual gradient achieves de-correlation which also alleviates collapse. Our further analysis reveals that InfoNCE achieve the two effects in a similar manner, which bridges the gap between SimSiam and InfoNCE and contributes to a unified understanding of recent progress in SSL. Towards simplifying the predictor we have also found that a single bias layer is sufficient for preventing collapse.
ACKNOWLEDGEMENT
This work was partly supported by Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under grant No.2019-001396 (Development of framework for analyzing, detecting, mitigating of bias in AI model and training data), No.2021-0-01381 (Development of Causal AI through Video Understanding and Reinforcement Learning, and Its Applications to Real Environments) and No.2021-0-02068 (Artificial Intelligence Innovation Hub). During the rebuttal, multiple anonymous reviewers provide valuable advice to significantly improve the quality of this work. Thank you all.
A APPENDIX
A.1 EXPERIMENTAL SETTINGS
Self-supervised encoder training: Below are the settings for self-supervised encoder training. For simplicity, we mainly use the default settings in a popular open library termed solo-learn (da Costa et al., 2021).
Data augmentation and normalization: We use a series of transformations including RandomResizedCrop with scale [0.2, 1.0], bicubic interpolation. ColorJitter (brightness (0.4), contrast (0.4), saturation (0.4), hue (0.1)) is randomly applied with the probability of 0.8. Random gray scale RandomGrayscale is applied with p = 0.2 Horizontal flip is applied with p = 0.5 The images are normalized with the mean (0.4914, 0.4822, 0.4465) and Std (0.247, 0.243, 0.261).
Network architecture and initialization: The backbone architecture is ResNet-18. The projection head contains three fully-connected (FC) layers followed by Batch Norm (BN) and ReLU, for which ReLU in the final FC layer is removed, i.e. FC1+BN+ReLU+FC2+BN+ReLU+FC3+BN . All projection FC layers have 2048 neurons for input, output as well as the hidden dimensions. The predictor head includes two FC layers as follows: FC1 + BN + ReLU + FC2. Input and output of the predictor both have the dimension of 2048, while the hidden dimension is 512. All layers of the network are by default initialized in Pytorch.
Optimizer: SGD optimizer is used for the encoder training. The batch size M is 256 and the learning rate is linearly scaled by the formula lr × M/256 with the base learning rate lr set to 0.5. The schedule for learning rate adopts the cosine decay as SimSiam. Momentum 0.9 and weight decay 1.0 × 10−5 are used for SGD. We use one GPU for each pre-training experiment. Following the practice of SimSiam, the learning rate of the predictor is fixed during the training. We use warmup training for the first 10 epochs. If not specified, by default we train the model for 1000 epochs.
Online linear evaluation: For the online linear revaluation, we also follow the practice in the solo-learn library (da Costa et al., 2021). The frozen features (2048 dimensions) from the training set are extracted (from the self-supervised pre-trained model) to feed into a linear classifier (1 FC layer with the input 2048 and output of 100). The test is performed on the validation set. The learning rate for the linear classifier is 0.1. Overall, we report Top-1 accuracy with the online linear evaluation in this work.
A.2 TWO SUB-PROBLEMS IN AO OF SIMSIAM
In the sub-problem ηt ← arg minη L(θt, η), ηt indicating latent representation of images at step t is actually obtained through ηtx ← ET [ Fθt(T (x)) ] , where they in practice ignore ET [·] and sample only one augmentation T ′, i.e. ηtx ← Fθt(T ′(x)). Conceptually, Chen & He equate the role of predictor to EOA.
A.3 EXPERIMENTAL DETAILS FOR EXPLICIT EOA IN TABLE 1
In the Moving average experiment, we follow the setting in SimSiam (Chen & He, 2021) without predictor. In the Same batch experiment, multiple augmentations, 10 augmentations for instance, are applied on the same image. With multi augmentations, we get the corresponding encoded representation, i.e. zi, i ∈ [1, 10]. We minimize the cosine distance between the first representation z1 and the average of the remaining vectors, i.e. z̄ = 19 ∑10 i=2 zi. The gradient stop is put on the averaged vector. We also experimented with letting the gradient backward through more augmentations, however, they consistently led to collapse.
A.4 EXPERIMENTAL SETUP AND RESULT TREND FOR TABLE 2.
Mirror SimSiam. Here we provide the pseudocode for Mirror SimSiam. In the Mirror SimSiam experiment which relates to Fig. 1 (c). Without taking symmetric loss into account, the pseudocode is shown in Algorithm 1. Taking symmetric loss into account, the pseudocode is shown in Algorithm 2.
Algorithm 1 Pytorch-like Pseudocode: Mirror SimSiam
# f: encoder (backbone + projector) # h: predictor
for x in loader: # load a minibatch x with n samples x_a, x_b = aug(x), aug(x) # augmentation z_a, z_b = f(x_a), f(x_b) # projections
p_b = h(z_b.detach()) # detach z_b but still allowing gradient p_b
L = D_cosine(z_a, p_b) # loss
L.backward() # back-propagate update(f, h) # SGD update
def D_cosine(z, p): # negative cosine similarity z = normalize(z, dim=1) # l2-normalize p = normalize(p, dim=1) # l2-normalize return -(z*p).sum(dim=1).mean()
Algorithm 2 Pytorch-like Pseudocode: Mirror SimSiam
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                       # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)          # augmentation
    z_a, z_b = f(x_a), f(x_b)          # projections
    p_b = h(z_b.detach())              # detach z_b but still allow gradient through p_b
    p_a = h(z_a.detach())              # detach z_a but still allow gradient through p_a
    L = D_cosine(z_a, p_b)/2 + D_cosine(z_b, p_a)/2   # symmetric loss
    L.backward()                       # back-propagate
    update(f, h)                       # SGD update

def D_cosine(z, p):                    # negative cosine similarity
    z = normalize(z, dim=1)            # l2-normalize
    p = normalize(p, dim=1)            # l2-normalize
    return -(z * p).sum(dim=1).mean()
Symmetric Predictor. To implement the SimSiam with Symmetric Predictor as in Fig. 2 (b), we can just perceive the predictor as part of the new encoder, for which the pseudocode is provided in Algorithm 3. Alternatively, we can additionally train the predictor similarly as that in SimSiam, for which the training involves two losses, one for training the predictor and another for training the new encoder (the corresponding pseudocode is provided in Algorithm 4). Moreover, for the second implementation, we also experiment with another variant that fixes the predictor while optimizing the new encoder and then train the predictor alternatingly. All of them lead to collapse with a similar trend as long as the symmetric predictor is used for training the encoder. For avoiding redundancy, in Fig. 8 we only report the result of the second implementation.
Result trend. The result trends of SimSiam, Naive Siamese, Mirror SimSiam, and Symmetric Predictor are shown in Fig. 8. We observe that all architectures lead to collapse except for SimSiam. Mirror SimSiam was stopped in the middle of training because the loss returned a NaN value.
A.5 EXPERIMENTAL DETAILS FOR INVERSE PREDICTOR.
In the inverse predictor experiment, which relates to Fig. 2 (c), we introduce a new predictor with the same structure as the original predictor. The training loss consists of three parts: the predictor training loss, the inverse predictor training loss, and the new encoder (old encoder + predictor) training loss. The new
Algorithm 3 Pytorch-like Pseudocode: Symmetric Predictor
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                       # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)          # augmentation
    z_a, z_b = f(x_a), f(x_b)          # projections
    p_a, p_b = h(z_a), h(z_b)          # predictions
    L = D(p_a, p_b)/2 + D(p_b, p_a)/2  # loss
    L.backward()                       # back-propagate
    update(f, h)                       # SGD update

def D(p, z):                           # negative cosine similarity
    z = z.detach()                     # stop gradient
    p = normalize(p, dim=1)            # l2-normalize
    z = normalize(z, dim=1)            # l2-normalize
    return -(p * z).sum(dim=1).mean()
Algorithm 4 Pytorch-like Pseudocode: Symmetric Predictor (with additional training on predictor)
# f: encoder (backbone + projector)
# h: predictor

for x in loader:                       # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)          # augmentation
    z_a, z_b = f(x_a), f(x_b)          # projections
    p_a, p_b = h(z_a), h(z_b)          # predictions
    d_p_a, d_p_b = h(z_a.detach()), h(z_b.detach())   # detached predictor output

    # predictor training loss
    L_pred = D(d_p_a, z_b)/2 + D(d_p_b, z_a)/2
    # encoder training loss
    L_enc = D(p_a, d_p_b)/2 + D(p_b, d_p_a)/2
    L = L_pred + L_enc

    L.backward()                       # back-propagate
    update(f, h)                       # SGD update

def D(p, z):                           # negative cosine similarity with detach on z
    z = z.detach()                     # stop gradient
    p = normalize(p, dim=1)            # l2-normalize
    z = normalize(z, dim=1)            # l2-normalize
    return -(p * z).sum(dim=1).mean()
encoder F consists of the old encoder f + predictor h. The practice of gradient stop needs to be considered in the implementation. We provide the pseudocode in Algorithm 5.
Algorithm 5 Pytorch-like Pseudocode: Trainable Inverse Predictor
# f: encoder (backbone + projector)
# h: predictor
# h_inv: inverse predictor

for x in loader:                       # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)          # augmentation
    z_a, z_b = f(x_a), f(x_b)          # projections
    p_a, p_b = h(z_a), h(z_b)          # predictions
    d_p_a, d_p_b = h(z_a.detach()), h(z_b.detach())   # detached predictor output

    # predictor training loss (to train h)
    L_pred = D(d_p_a, z_b)/2 + D(d_p_b, z_a)/2

    # inverse predictor training loss (to train h_inv)
    inv_p_a, inv_p_b = h_inv(p_a.detach()), h_inv(p_b.detach())
    L_inv_pred = D(inv_p_a, z_a)/2 + D(inv_p_b, z_b)/2

    # encoder training loss
    L_enc = D(p_a, h_inv(p_b))/2 + D(p_b, h_inv(p_a))/2
    L = L_pred + L_inv_pred + L_enc

    L.backward()                       # back-propagate
    update(f, h, h_inv)                # SGD update

def D(p, z):                           # negative cosine similarity with detach on z
    z = z.detach()                     # stop gradient
    p = normalize(p, dim=1)            # l2-normalize
    z = normalize(z, dim=1)            # l2-normalize
    return -(p * z).sum(dim=1).mean()
A.6 REGULARIZATION LOSS
Following Zbontar et al. (2021), we compute a covariance regularization loss of the encoder output along the mini-batch dimension. The pseudocode for the de-correlation loss calculation is given in Algorithm 6.
Algorithm 6 Pytorch-like Pseudocode: De-correlation loss
# Z_a: representation vector (batch)
# N: batch size
# D: the number of dimensions of the representation vector

Z_a = Z_a - Z_a.mean(dim=0)
cov = Z_a.T @ Z_a / (N - 1)
diag = torch.eye(D)
loss = cov[~diag.bool()].pow_(2).sum() / D
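A small usage sketch of Algorithm 6, assuming Z_a is an (N, D) batch of representations (the toy sizes are ours):

import torch

N, D = 256, 2048
Z_a = torch.randn(N, D)                        # stand-in for a batch of encoder outputs
Z_a = Z_a - Z_a.mean(dim=0)                    # remove the per-dimension mean
cov = Z_a.T @ Z_a / (N - 1)                    # D x D covariance matrix
diag = torch.eye(D, dtype=torch.bool)
loss = cov[~diag].pow(2).sum() / D             # penalize only the off-diagonal entries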
A.7 GRADIENT DERIVATION AND TEMPERATURE ANALYSIS FOR INFONCE
With · indicating the cosine similarity between vectors, the InfoNCE loss can be expressed as
L_{InfoNCE} = -\log \frac{\exp(Z_a \cdot Z_b / \tau)}{\exp(Z_a \cdot Z_b / \tau) + \sum_{i=1}^{N} \exp(Z_a \cdot Z_i / \tau)} = -\log \frac{\exp(Z_a \cdot Z_b / \tau)}{\sum_{i=0}^{N} \exp(Z_a \cdot Z_i / \tau)},   (5)

where N indicates the number of negative samples and Z_0 = Z_b for simplifying the notation. By treating Z_a \cdot Z_i as the logit in a normal CE loss, we have the corresponding probability for each sample as \lambda_i = \frac{\exp(Z_a \cdot Z_i / \tau)}{\sum_{j=0}^{N} \exp(Z_a \cdot Z_j / \tau)}, where i = 0, 1, 2, ..., N and \sum_{i=0}^{N} \lambda_i = 1.
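For concreteness, Eq 5 is simply a cross-entropy over similarity logits, with \lambda_i being the corresponding softmax probabilities; a minimal sketch (the tensor shapes and names are our assumptions):

import torch
import torch.nn.functional as F

def info_nce(Z_a, Z_pos, Z_neg, tau=0.1):
    # Z_a, Z_pos: (B, D) l2-normalized anchors and positives; Z_neg: (B, N, D) negatives
    pos = (Z_a * Z_pos).sum(dim=1, keepdim=True)        # (B, 1) logit of the positive (Z_0 = Z_b)
    neg = torch.einsum('bd,bnd->bn', Z_a, Z_neg)         # (B, N) logits of the negatives
    logits = torch.cat([pos, neg], dim=1) / tau
    labels = torch.zeros(Z_a.size(0), dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, labels)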
The negative gradient of the InfoNCE on the representation Za is shown as
-\frac{\partial L_{InfoNCE}}{\partial Z_a} = \frac{1}{\tau}(1 - \lambda_0) Z_b - \frac{1}{\tau} \sum_{i=1}^{N} \lambda_i Z_i
= \frac{1}{\tau} \Big( Z_b - \sum_{i=0}^{N} \lambda_i Z_i \Big)
= \frac{1}{\tau} \Big( Z_b - \sum_{i=0}^{N} \lambda_i (o_z + r_i) \Big)
= \frac{1}{\tau} \Big( Z_b + \big( -o_z - \sum_{i=0}^{N} \lambda_i r_i \big) \Big)
\propto Z_b + \Big( -o_z - \sum_{i=0}^{N} \lambda_i r_i \Big),   (6)
where \frac{1}{\tau} can be adjusted through the learning rate and is omitted for simplicity. With Z_b as the basic gradient, G_e = -o_z - \sum_{i=0}^{N} \lambda_i r_i, for which o_e = -o_z and r_e = -\sum_{i=0}^{N} \lambda_i r_i. When the temperature is set to a large value, \lambda_i = \frac{\exp(Z_a \cdot Z_i / \tau)}{\sum_{j=0}^{N} \exp(Z_a \cdot Z_j / \tau)} approaches \frac{1}{N+1}, indicated by a high entropy value (see Fig. 7). InfoNCE then degenerates to a simple contrastive loss, i.e. L_{simple} = -Z_a \cdot Z_b + \frac{1}{N+1} \sum_{i=0}^{N} Z_a \cdot Z_i, which repulses every negative sample with an equal force. In contrast, a relatively smaller temperature gives more relative weight, i.e. a larger \lambda_i, to negative samples that are more similar to the anchor Z_a.
The influence of the temperature on the covariance and accuracy is shown in Fig. 7 (b) and (c). We observe that a higher temperature tends to decrease the effect of de-correlation, indicated by a higher covariance value, which also leads to a performance drop. This verifies our hypothesis regarding how r_e in InfoNCE achieves de-correlation, because a large temperature causes more balanced weights \lambda_i, which is found to weaken the de-correlation effect. For this setup, we note that the encoder is trained for 200 epochs with the default settings in solo-learn for the SimCLR framework.
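The effect of the temperature on the weights \lambda_i can be reproduced in a few lines; a toy sketch (the similarity values are arbitrary) showing that a larger \tau flattens \lambda_i and raises its entropy:

import torch

sims = torch.tensor([0.9, 0.6, 0.1, -0.2, -0.5])   # toy values of Z_a · Z_i
for tau in (0.1, 1.0):
    lam = torch.softmax(sims / tau, dim=0)           # the weights lambda_i
    entropy = -(lam * lam.log()).sum()
    print(tau, lam, entropy)                         # larger tau -> flatter lambda_i, higher entropy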
A.8 THEORETICAL DERIVATION FOR A SINGLE BIAS LAYER
With the cosine similarity between two vectors a and b defined as

cossim(a, b) = \frac{a \cdot b}{\|a\| \, \|b\|},   (7)

the derived gradient on the vector a is

\frac{\partial}{\partial a} cossim(a, b) = \frac{b}{\|a\| \, \|b\|} - cossim(a, b) \cdot \frac{a}{\|a\|^2}.   (8)
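As a quick sanity check, Eq 8 can be verified numerically against autograd; a minimal sketch with random vectors (the variable names are ours):

import torch

a = torch.randn(5, requires_grad=True)
b = torch.randn(5)
cos = (a * b).sum() / (a.norm() * b.norm())
cos.backward()
with torch.no_grad():
    analytic = b / (a.norm() * b.norm()) - cos * a / a.norm() ** 2
print(torch.allclose(a.grad, analytic, atol=1e-6))   # expected: True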
Eq 8 is used as a prior for the following derivations. As indicated in the main manuscript, the encoder output z_a is l2-normalized before being fed into the predictor, thus p_a = Z_a + b_p, where b_p denotes the bias vector of the single bias layer in the predictor. The cosine similarity loss (ignoring the symmetry for simplicity) is
L_{cosine} = -P_a \cdot Z_b = -\frac{p_a}{\|p_a\|} \cdot \frac{z_b}{\|z_b\|}.   (9)
The negative gradient on p_a is derived as

-\frac{\partial L_{cosine}}{\partial p_a} = \frac{z_b}{\|z_b\| \, \|p_a\|} - cossim(Z_a, Z_b) \cdot \frac{p_a}{\|p_a\|^2}
= \frac{1}{\|p_a\|} \Big( Z_b - cossim(Z_a, Z_b) \cdot P_a \Big)
= \frac{1}{\|p_a\|} \Big( Z_b - \frac{cossim(Z_a, Z_b)}{\|p_a\|} \cdot (Z_a + b_p) \Big)
= \frac{1}{\|p_a\|} \big( (o_z + r_b) - m \cdot (o_z + r_a + b_p) \big)
= \frac{1}{\|p_a\|} \big( (1 - m) o_z - m b_p + r_b - m \cdot r_a \big),   (10)

where m = \frac{cossim(Z_a, Z_b)}{\|p_a\|}.
Given that pa = Za + bp, the negative gradient on bp is the same as that on pa as
-\frac{\partial L_{cosine}}{\partial b_p} = -\frac{\partial L_{cosine}}{\partial p_a} = \frac{1}{\|p_a\|} \big( (1 - m) o_z - m b_p + r_b - m \cdot r_a \big).   (11)
We assume that the training is stable and the bias layer converges to a certain value when -\frac{\partial\, cossim(Z_a, Z_b)}{\partial b_p} = 0. Thus, the converged b_p satisfies the following constraint:
\frac{1}{\|p_a\|} \big( (1 - m) o_z - m b_p + r_b - m r_a \big) = 0
\;\Rightarrow\;
b_p = \frac{1 - m}{m} o_z + \frac{1}{m} r_b - r_a.   (12)
With a batch of samples, the average of \frac{1}{m} r_b and r_a is expected to be close to 0 by the definition of the residual vector. Thus, the bias layer vector is expected to converge to:
b_p = \frac{1 - m}{m} o_z.   (13)
Rationale behind the high similarity between b_p and o_z. The above theoretical derivation shows that the parameters in the bias layer are expected to converge to the vector \frac{1 - m}{m} o_z. This justifies why the empirically observed cosine similarity between b_p and o_z is as high as 0.99. Ideally, it should be 1; however, such a small deviation is expected once the training dynamics are taken into account.
Rationale behind how a single bias layer prevents collapse. Given that p_a = Z_a + b_p, the negative gradient on Z_a is
-\frac{\partial L_{cosine}}{\partial Z_a} = -\frac{\partial L_{cosine}}{\partial p_a}
= \frac{1}{\|p_a\|} \Big( Z_b - cossim(Z_a, Z_b) \cdot \frac{Z_a + b_p}{\|p_a\|} \Big)
= \frac{1}{\|p_a\|} Z_b - \frac{cossim(Z_a, Z_b)}{\|p_a\|^2} Z_a - \frac{cossim(Z_a, Z_b)}{\|p_a\|^2} b_p.   (14)
Here, we highlight that since the loss -Z_a \cdot Z_a = -1 is a constant having zero gradient on the encoder, -\frac{cossim(Z_a, Z_b)}{\|p_a\|^2} Z_a can be seen as a dummy term. Considering Eq 13 and m = \frac{cossim(Z_a, Z_b)}{\|p_a\|}, we have b_p = \big( \frac{\|p_a\|}{cossim(Z_a, Z_b)} - 1 \big) o_z. The above equation is then equivalent to
-\frac{\partial L_{cosine}}{\partial Z_a} = \frac{1}{\|p_a\|} Z_b - \frac{cossim(Z_a, Z_b)}{\|p_a\|^2} b_p
= \frac{1}{\|p_a\|} Z_b - \frac{cossim(Z_a, Z_b)}{\|p_a\|^2} \Big( \frac{\|p_a\|}{cossim(Z_a, Z_b)} - 1 \Big) o_z
= \frac{1}{\|p_a\|} Z_b - \frac{1}{\|p_a\|} \Big( 1 - \frac{cossim(Z_a, Z_b)}{\|p_a\|} \Big) o_z
\propto Z_b - \Big( 1 - \frac{cossim(Z_a, Z_b)}{\|p_a\|} \Big) o_z.   (15)
With Z_b as the basic gradient, the extra gradient component is G_e = -\big(1 - \frac{cossim(Z_a, Z_b)}{\|p_a\|}\big) o_z. Given that p_a = Z_a + b_p and \|Z_a\| = 1, \|p_a\| < 1 only when Z_a is negatively correlated with b_p. In practice, however, Z_a and b_p are often positively correlated to some extent due to their shared center vector component; in other words, \|p_a\| > 1. Moreover, cossim(Z_a, Z_b) is smaller than 1, thus -\big(1 - \frac{cossim(Z_a, Z_b)}{\|p_a\|}\big) < 0, suggesting G_e consists of negative o_z with a de-centering effect. The above derivation explains why a single bias layer can help alleviate collapse.
B DISCUSSION: DOES BN HELP AVOID COLLAPSE?
To our knowledge, our work is the first to revisit and refute the explanatory claims in (Chen & He, 2021). Several works, however, have attempted to demystify the success of BYOL (Grill et al., 2020), a close variant of SimSiam. The success has been ascribed to BN in (Fetterman & Albrecht, 2020), however, (Richemond et al., 2020) refutes their claim. Since the role of intermediate BNs is ascribed to stabilize training (Richemond et al., 2020; Chen & He, 2021), we only discuss the final BN in the SimSiam encoder. Note that with our Conjecture1, the final BN that removes the mean of representation vector is supposed to have de-centering effect. BY default SimSiam has such a BN at the end of its encoder, however, it still collapses with the predictor and stop gradient. Why would such a BN not prevent collapse in this case? Interestingly, we observe that such BN can help alleviate collapse with a simple MSE loss (see Fig. 9), however, its performance is is inferior to the cosine loss-based SimSiam (with predictor and stop gradient) due to the lack of the de-correlation effect in SimSiam. Note that the cosine loss is in essence equivalent to a MSE loss on the l2normalized vectors. This phenomenon can be interpreted as that the l2-normalization causes another mean after the BN removes it. Thus, with such l2-normalization in the MSE loss, i.e. adopting the default cosine loss, it is important to remove the oe from the optimization target. The results with the loss of −Za · sg(Zb + oe) in Table 3 show that this indeed prevents collapse and verifies the above interpretation. | 1. What is the focus of the review on the SimSiam approach in self-supervised learning?
2. What are the strengths of the analysis in section 2 of the review?
3. What are the weaknesses of the presentation and empirical discussions in sections 3.3, 3.4, and 3.5 of the review?
4. Is there any question regarding the need for 'de-centeralization' in gradient signals of SSL for stable learning?
5. Does the paper provide sufficient theoretical explanations and proof for its conjectures?
6. Are there any questions regarding the conceptual explanation of de-correlation in Info-NCE?
7. Do you have any further questions or concerns about the content of the review? | Summary Of The Paper
Review | Summary Of The Paper
The paper analyzes how the self-supervised learning (SSL) approach SimSiam avoids collapsed representations without explicit formulation of repulsive sample relations. To this end, first flaws in the original reasoning of the SimSiam paper are revealed. Next, based on center-residual vector decomposition, the role of the prediction head for preventing representation collapse in SimSiam is analyzed. Results indicate the importance of de-centralization and de-correlation as driving concepts for stable SSL.
Review
Strengths:
Recent successful approaches to SSL indicate that stable learning can be achieved without explicit incorporation of repulsive, negative image relations. Indeed, the underlying reasons are theoretically poorly understood and typically only an intuitive understanding is presented. Hence, given the importance of contrastive learning today, research in this direction is important.
The analysis in section 2 seems sound and raises questions about the original insights of the SimSiam work regarding representation collapse.
Weaknesses:
The presentation and outline of arguments, as well as the empirical discussions, especially in sections 3.3, 3.4 and 3.5, are sometimes hard to follow.
The need for ‘de-centralization’ (i.e. pushing samples apart in the embedding space) in gradient signals of SSL for stable learning seems straightforward and intuitively trivial, and does not seem novel.
The paper only empirically shows that some kind of repulsion (i.e. de-centralization) is implicitly happening in the SimSiam framework. However, no theoretical and clear explanation of why and how is presented. The presented conjectures are insufficiently backed up and proven beyond the intuition of implicit repulsion. The paper e.g. states “Since the original predictor involves a nonlinear MLP, it is hard to understand how the predictor actually achieves it” (Sec. 4). Thus, no real, novel insights about the learning mechanisms of SSL without negatives are provided, which are required for a more complete understanding of the addressed problem.
The conceptual explanation of de-correlation in Info-NCE is fuzzy and does not sound convincing. A more solid derivation and formulation of the hypothesis is needed.
Questions:
In the paragraph ‘Predictor with stop-gradient is asymmetric’ the paper states that the model setup shown in Fig. 2 (a) acutally results in successful, stable learning (“Similarly, […] Fig2 (a) […] leads to success, [...]”) It seems no prediction head is used in this case. Does the stop-gradient operation itself already prevent representation collapse? |
ICLR | Title
How Does SimSiam Avoid Collapse Without Negative Samples? A Unified Understanding with Self-supervised Contrastive Learning
Abstract
To avoid collapse in self-supervised learning (SSL), a contrastive loss is widely used but often requires a large number of negative samples. Without negative samples yet achieving competitive performance, a recent work (Chen & He, 2021) has attracted significant attention for providing a minimalist simple Siamese (SimSiam) method to avoid collapse. However, the reason for how it avoids collapse without negative samples remains not fully clear and our investigation starts by revisiting the explanatory claims in the original SimSiam. After refuting their claims, we introduce vector decomposition for analyzing the collapse based on the gradient analysis of the l2-normalized representation vector. This yields a unified perspective on how negative samples and SimSiam alleviate collapse. Such a unified perspective comes timely for understanding the recent progress in SSL.
1 INTRODUCTION
Beyond the success of NLP (Lan et al., 2020; Radford et al., 2019; Devlin et al., 2019; Su et al., 2020; Nie et al., 2020), self-supervised learning (SSL) has also shown its potential in the field of vision tasks (Li et al., 2021; Chen et al., 2021; El-Nouby et al., 2021). Without the ground-truth label, the core of most SSL methods lies in learning an encoder with augmentation-invariant representation (Bachman et al., 2019; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Grill et al., 2020). Specifically, they often minimize the representation distance between two positive samples, i.e. two augmented views of the same image, based on a Siamese network architecture (Bromley et al., 1993). It is widely known that for such Siamese networks there exists a degenerate solution, i.e. all outputs “collapsing” to an undesired constant (Chen et al., 2020a; Chen & He, 2021). Early works have attributed the collapse to lacking a repulsive component in the optimization goal and adopted contrastive learning (CL) with negative samples, i.e. views of different samples, to alleviate this problem. Introducing momentum into the target encoder, BYOL shows that Siamese architectures can be trained with only positive pairs. More recently, SimSiam (Chen & He, 2021) has caught great attention by further simplifying BYOL by removing the momentum encoder, which has been seen as a major milestone achievement in SSL for providing a minimalist method for achieving competitive performance. However, more investigation is required for the following question:
How does SimSiam avoid collapse without negative samples?
Our investigation starts with revisiting the explanatory claims in the original SimSiam paper (Chen & He, 2021). Notably, two components, i.e. stop gradient and predictor, are essential for the success of SimSiam (Chen & He, 2021). The reason has been mainly attributed to the stop gradient (Chen & He, 2021) by hypothesizing that it implicitly involves two sets of variables and SimSiam behaves like alternating between optimizing each set. Chen & He argue that the predictor h is helpful in SimSiam because h fills the gap to approximate expectation over augmentations (EOA).
Unfortunately, the above explanatory claims are found to be flawed due to reversing the two paths with and without gradient (see Sec. 2.2). This motivates us to find an alternative explanation, for which we introduce a simple yet intuitive framework for facilitating the analysis of collapse in SSL.
Specifically, we propose to decompose a representation vector into center and residual components. This decomposition facilitates understanding which gradient component is beneficial for avoiding collapse. Under this framework, we show that a basic Siamese architecture cannot prevent collapse, for which an extra gradient component needs to be introduced. With SimSiam interpreted as processing the optimization target with an inverse predictor, the analysis of its extra gradient shows that (a) its center vector helps prevent collapse via the de-centering effect; (b) its residual vector achieves dimensional de-correlation which also alleviates collapse.
Moreover, under the same gradient decomposition, we find that the extra gradient caused by negative samples in InfoNCE (He et al., 2019; Chen et al., 2020b;a; Tian et al., 2019; Khosla et al., 2020) also achieves de-centering and de-correlation in the same manner. It contributes to a unified understanding on various frameworks in SSL, which also inspires the investigation of hardnessawareness Wang & Liu (2021) from the inter-anchor perspective Zhang et al. (2022) for further bridging the gap between CL and non-CL frameworks in SSL. Finally, simplifying the predictor for more explainable SimSiam, we show that a single bias layer is sufficient for preventing collapse.
The basic experimental settings for our analysis are detailed in Appendix A.1 with a more specific setup discussed in the context. Overall, our work is the first attempt for performing a comprehensive study on how SimSiam avoids collapse without negative samples. Several works, however, have attempted to demystify the success of BYOL (Grill et al., 2020), a close variant of SimSiam. A technical report (Fetterman & Albrecht, 2020) has suggested the importance of batch normalization (BN) in BYOL for its success, however, a recent work (Richemond et al., 2020) refutes their claim by showing BYOL works without BN, which is discussed in Appendix B.
2 REVISITING SIMSIAM AND ITS EXPLANATORY CLAIMS
l2-normalized vector and optimization goal. SSL trains an encoder f for learning discriminative representation and we denote such representation as a vector z, i.e. f(x) = z where x is a certain input. For the augmentation-invariant representation, a straightforward goal is to minimize the distance between the representations of two positive samples, i.e. augmented views of the same image, for which mean squared error (MSE) is a default choice. To avoid scale ambiguity, the vectors are often l2-normalized, i.e. Z = z/||z|| (Chen & He, 2021), before calculating the MSE:
L_{MSE} = (Z_a - Z_b)^2 / 2 - 1 = -Z_a \cdot Z_b = L_{cosine},   (1)
which shows the equivalence of a normalized MSE loss to the cosine loss (Grill et al., 2020).
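Eq 1 can be checked numerically on l2-normalized vectors; a minimal sketch with random tensors (the names are ours):

import torch
import torch.nn.functional as F

Z_a = F.normalize(torch.randn(8, 128), dim=1)
Z_b = F.normalize(torch.randn(8, 128), dim=1)
mse = ((Z_a - Z_b) ** 2).sum(dim=1) / 2 - 1
cosine = -(Z_a * Z_b).sum(dim=1)
print(torch.allclose(mse, cosine, atol=1e-6))   # expected: True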
Collapse in SSL and solution of SimSiam. Based on a Siamese architecture, the loss in Eq 1 causes the collapse, i.e. f always outputs a constant regardless of the input variance. We refer to this Siamese architecture with loss Eq 1 as Naive Siamese in the remainder of paper. Contrastive loss with negative samples is a widely used solution (Chen et al., 2020a). Without using negative samples, SimSiam solves the collapse problem via predictor and stop gradient, based on which the encoder is optimized with a symmetric loss:
L_{SimSiam} = -(P_a \cdot sg(Z_b) + P_b \cdot sg(Z_a)),   (2)
where sg(·) is stop gradient and P is the output of the predictor h, i.e. p = h(z) and P = p / \|p\|.
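Written as PyTorch-like pseudocode in the style of the algorithms in the appendix (f, h, aug and D denote the encoder, predictor, augmentation, and the negative cosine similarity with stop gradient on its second argument), Eq 2 corresponds roughly to:

z_a, z_b = f(aug(x)), f(aug(x))          # two augmented views
p_a, p_b = h(z_a), h(z_b)                # predictor outputs
L = D(p_a, z_b) / 2 + D(p_b, z_a) / 2    # symmetric loss with sg(.) applied inside D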
2.1 REVISITING EXPLANATORY CLAIMS IN SIMSIAM
Interpreting stop gradient as AO. Chen & He hypothesize that the stop gradient in Eq 2 is an implementation of Alternating between the Optimization of two sub-problems, which is denoted as AO. Specifically, with the loss considered as L(\theta, \eta) = E_{x,T}\big[ \|F_\theta(T(x)) - \eta_x\|^2 \big], the optimization objective \min_{\theta,\eta} L(\theta, \eta) can be solved by alternating \eta^t \leftarrow \arg\min_\eta L(\theta^t, \eta) and \theta^{t+1} \leftarrow \arg\min_\theta L(\theta, \eta^t). It is acknowledged that this hypothesis does not fully explain why the collapse is prevented (Chen & He, 2021). Nonetheless, they mainly attribute SimSiam's success to the stop gradient with the interpretation that AO might make it difficult to approach a constant \forall x. Interpreting predictor as EOA. The AO problem (Chen & He, 2021) is formulated independent of the predictor h, for which they believe that the usage of the predictor h is related to approximating EOA for filling the gap of ignoring E_T[\cdot] in a sub-problem of AO. The approximation of E_T[\cdot] is summarized
in Appendix A.2. Chen & He support their interpretation by proof-of-concept experiments. Specifically, they show that updating ηx with a moving-average ηtx ← m ∗ ηtx + (1 −m) ∗ Fθt(T ′(x)) can help prevent collapse without predictor (see Fig. 1 (b)). Given that the training completely fails when the predictor and moving average are both removed, at first sight, their reasoning seems valid.
2.2 DOES THE PREDICTOR FILL THE GAP TO APPROXIMATE EOA?
Reasoning flaw. Considering the stop gradient, we divide the framework into two sub-models with different paths and term them Gradient Path (GP) and Stop Gradient Path (SGP). For SimSiam, only the sub-model with GP includes the predictor (see Fig. 1 (a)). We point out that their reasoning flaw of predictor analysis lies in the reverse of GP and SGP. By default, the moving-average sub-model, as shown in Fig. 1 (b), is on the same side as SGP. Note that Fig. 1 (b) is conceptually similar to Fig. 1 (c) instead of Fig. 1 (a). It is worth mentioning that the Mirror SimSiam in Fig. 1 (c) is what stop gradient in the original SimSiam avoids. Therefore, it is problematic to perceive h as EOA.
[Figure 1: (a) SimSiam, with the predictor on the gradient path; (b) the moving-average variant without predictor; (c) Mirror SimSiam.]
Explicit EOA does not prevent collapse. (Chen & He, 2021) points out that “in practice, it would be unrealistic to actually compute the expectation E_T[·]. But it may be possible for a neural network (e.g., the predictor h) to learn to predict the expectation, while the sampling of T is implicitly distributed across multiple epochs.” If implicitly sampling across multiple epochs is beneficial, explicitly sampling a sufficiently large number N of augmentations in a batch with the latest model should be even more beneficial for approximating E_T[·]. However, Table 1 shows that the collapse still occurs, suggesting that the equivalence between the predictor and EOA does not hold.
2.3 ASYMMETRIC INTERPRETATION OF PREDICTOR WITH STOP GRADIENT IN SIMSIAM
Symmetric Predictor does not prevent collapse. The difference between Naive Siamese and SimSiam lies in whether the gradient in backward propagation flows through a predictor; however, we show that this propagation helps avoid collapse only when the predictor is not included in the SGP path. With h being trained the same as in Eq 2, we optimize the encoder f by replacing the Z in Eq 2 with P. The results in Table 2 show that it still leads to collapse. Actually, this is well expected by perceiving h to be part of the new encoder F, i.e. p = F(x) = h(f(x)). In other words, the symmetric architectures with and without the predictor h both lead to collapse.
Predictor with stop gradient is asymmetric. Clearly, how SimSiam avoids collapse lies in its asymmetric architecture, i.e. one path with h and the other without h. Under this asymmetric architecture, the role of stop gradient is to only allow the path with the predictor to be optimized with the encoder output as the target, not vice versa. In other words, SimSiam avoids collapse by excluding Mirror SimSiam (Fig. 1 (c)), which has a mirror-like version of the loss in Eq 2, L_{Mirror} = -(P_a \cdot Z_b + P_b \cdot Z_a), where the stop gradient is put on the input of h, i.e. p_a = h(sg[z_a]) and p_b = h(sg[z_b]).
Predictor vs. inverse predictor. We interpret h as a function mapping from z to p, and introduce a conceptual inverse mapping h−1, i.e. z = h−1(p). Here, as shown in Table 2, SimSiam with symmetric predictor (Fig. 2 (b)) leads to collapse, while SimSiam (Fig. 1 (a)) avoids collapse. With the conceptual h−1, we interpret Fig. 1 (a) the same as Fig. 2 (c) which differs from Fig. 2 (b) via changing the optimization target from pb to zb, i.e. zb = h−1(pb). This interpretation
suggests that the collapse can be avoided by processing the optimization target with h−1. By contrast, Fig. 1 (c) and Fig. 2 (a) both lead to collapse, suggesting that processing the optimization target with h is not beneficial for preventing collapse. Overall, asymmetry alone does not guarantee collapse avoidance, which requires the optimization target to be processed by h−1 not h.
Trainable inverse predictor and its implication on EOA. In the above, we propose a conceptual inverse predictor h−1 in Fig. 2 (c), however, it remains yet unknown whether such an inverse predictor is experimentally trainable. A detailed setup for this investigation is reported in Appendix A.5. The results in Fig. 3 show that a learnable h−1 leads to slightly inferior performance, which is expected because h−1 cannot make the trainable inverse predictor output z∗b completely the same as zb. Note that it would be equivalent to SimSiam if z∗b = zb. Despite a slight performance drop, the results confirm that h−1 is trainable. The fact that h−1 is trainable provides additional evidence that the role h plays in SimSiam is not EOA
because theoretically h^{-1} cannot restore a random augmentation T' from an expectation p, where p = h(z) = E_T\big[F_{\theta^t}(T(x))\big].
3 VECTOR DECOMPOSITION FOR UNDERSTANDING COLLAPSE
By default, InfoNCE (Chen et al., 2020a) and SimSiam (Chen & He, 2021) both adopt l2-normalization in their loss for avoiding scale ambiguity. We treat the l2-normalized vector, i.e. Z, as the encoder output, which significantly simplifies the gradient derivation and the following analysis.
Vector decomposition. For the purpose of analysis, we propose to decompose Z into two parts, Z = o + r, where o and r denote the center vector and residual vector respectively. Specifically, the center vector o is defined as an average of Z over the whole representation space, o_z = E[Z]. However, we approximate it with all vectors in the current mini-batch, i.e. o_z = \frac{1}{M} \sum_{m=1}^{M} Z_m, where M is the mini-batch size. We define the residual vector r as the residual part of Z, i.e. r = Z - o_z.
3.1 COLLAPSE FROM THE VECTOR PERSPECTIVE
Collapse: from result to cause. A Naive Siamese is well expected to collapse since the loss is designed to minimize the distance between positive samples, for which a constant constitutes an optimal solution. When the collapse occurs, \forall i, Z_i = \frac{1}{M} \sum_{m=1}^{M} Z_m = o_z, where i denotes a random sample index, which shows the constant vector is o_z in this case. This interpretation only suggests the possibility that a dominant o can be one of the viable solutions, while the optimization, such as SimSiam, might still lead to a non-collapse solution. This merely describes o as the consequence of the collapse, and our work investigates the cause of such collapse through analyzing the influence of the individual gradient components, i.e. o and r, during training.
Competition between o and r. Complementary to the Standard Deviation (Std) (Chen & He, 2021) for indicating collapse, we introduce the ratio of o in z, i.e. m_o = \|o\| / \|z\|, where \|\cdot\| is the L2 norm. Similarly, the ratio of r in z is defined as m_r = \|r\| / \|z\|. When collapse happens, i.e. all vectors Z are close to the center vector o, m_o approaches 1 and m_r approaches 0, which is not desirable for SSL. A desirable case would be a relatively small m_o and a relatively large m_r, suggesting a relatively small (large) contribution of o (r) in each Z. We interpret the cause of collapse as a competition between o and r where o dominates over r, i.e. m_o \gg m_r. For Eq 1, the derived negative gradient on Z_a (ignoring Z_b for simplicity due to symmetry) is shown as:
G_{cosine} = -\frac{\partial L_{MSE}}{\partial Z_a} = Z_b - Z_a \iff -\frac{\partial L_{cosine}}{\partial Z_a} = Z_b,   (3)
where the gradient component Za is a dummy term because the loss −Za · Za = −1 is a constant having zero gradient on the encoder f .
Conjecture1. With Z_a = o_z + r_a, we conjecture that the gradient component of o_z is expected to update the encoder to boost the center vector and thus increase m_o, while the gradient component of r_a is expected to behave in the opposite direction and increase m_r. A random gradient component is expected to have a relatively small influence.
To verify the above conjecture, we revisit the dummy gradient term Z_a. We design the losses -Z_a \cdot sg(o_z) and -Z_a \cdot sg(Z_a - o_z) to show the influence of the gradient components o_z and r_a respectively. The results in Fig. 4 show that the gradient component o_z has the effect of increasing m_o while decreasing m_r. On the
contrary, ra helps increase mr while decreasing mo. Overall, the results verify Conjecture1.
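The decomposition and the two probe losses used above can be sketched as follows; random tensors stand in for the encoder output, and m_o, m_r are computed as batch quantities:

import torch
import torch.nn.functional as F

Z = F.normalize(torch.randn(256, 2048), dim=1)   # a batch of l2-normalized representations
o_z = Z.mean(dim=0, keepdim=True)                # center vector, approximated over the mini-batch
r = Z - o_z                                      # residual vectors
m_o = o_z.norm()                                 # ratio of the center component (||Z|| = 1 here)
m_r = r.norm(dim=1).mean()                       # average ratio of the residual component

loss_center   = -(Z * o_z.detach()).sum(dim=1).mean()   # -Z_a · sg(o_z)
loss_residual = -(Z * r.detach()).sum(dim=1).mean()     # -Z_a · sg(Z_a - o_z)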
3.2 EXTRA GRADIENT COMPONENT FOR ALLEVIATING COLLAPSE
Revisit collapse in a symmetric architecture. Based on Conjecture1, here, we provide an intuitive interpretation on why a symmetric Siamese architecture, such as Fig. 2 (a) and (b), cannot be trained without collapse. Take Fig. 2 (a) as example, the gradient in Eq 3 can be interpreted as two equivalent forms, from which we choose Zb−Za = (oz+rb)−(oz+ra) = rb−ra. Since rb comes from the same positive sample as ra, it is expected that rb also increases mr, however, this effect is expected to be smaller than that of ra, thus causing collapse.
Basic gradient and Extra gradient components. The negative gradient on Za in Fig. 2 (a) is derived as Zb, while that on Pa in Fig. 2 (b) is derived as Pb. We perceive Zb and Pb in these basic Siamese architectures as the Basic Gradient. Our above interpretation shows that such basic components cannot prevent collapse, for which an Extra Gradient component, denoted as Ge, needs to be introduced to break the symmetry. As the term suggests, Ge is defined as a gradient term that is relative to the basic gradient in a basic Siamese architecture. For example, negative samples can be introduced to Naive Siamese (Fig. 2 (a)) for preventing collapse, where the extra gradient caused by negative samples can thus be perceived as Ge with Zb as the basic gradient. Similarly, we can also disentangle the negative gradient on Pa in SimSiam (Fig. 1 (a)), i.e. Zb, into a basic gradient (which is Pb) and Ge which is derived as Zb −Pb (note that Zb = Pb + Ge). We analyze how Ge prevents collapse via studying the independent roles of its center vector oe and residual vector re.
3.3 A TOY EXAMPLE EXPERIMENT WITH NEGATIVE SAMPLE
Which repulsive component helps avoid collapse? Existing works often attribute the collapse in Naive Siamese to lacking a repulsive part during the optimization. This explanation has motivated previous works to adopt contrastive learning, i.e. attracting the positive samples while repulsing the negative samples. We experiment with a simple triplet loss¹, L_{tri} = -Z_a \cdot sg(Z_b - Z_n), where Z_n indicates the representation of a Negative sample. The derived negative gradient on Z_a is Z_b - Z_n, where Z_b is the basic gradient component and thus G_e = -Z_n in this setup. For a sample representation, what determines it as a positive sample for attracting or a negative sample for repulsing is the residual component, thus it might be tempting to interpret that r_e is the key component of the repulsive part that avoids the collapse. However, the results in Table 3 show that the component beneficial for preventing collapse inside G_e is o_e instead of r_e. Specifically, to explore the individual influence of o_e and r_e in G_e, we design two experiments by removing one component while keeping the other one. In the first experiment, we remove r_e in G_e while keeping o_e. By contrast, o_e is removed while keeping r_e in the second experiment. In contrast to what existing explanations may expect, we find that it is the center component o_e that prevents collapse. With Conjecture1, a gradient component alleviates collapse if it has a negative center vector. In this setup, o_e = -o_z, thus o_e has the de-centering role for preventing collapse. On the contrary, r_e does not prevent collapse and keeping r_e even decreases the performance (36.21% < 47.41%). Since the negative sample is randomly chosen, r_e just behaves like random noise on the optimization and decreases performance.
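A minimal sketch of the toy triplet loss and its two ablations (keeping only o_e or only r_e); random tensors stand in for the encoder outputs, so this only illustrates the loss construction:

import torch
import torch.nn.functional as F

B, D = 256, 128
Z_a = F.normalize(torch.randn(B, D), dim=1)   # anchor views
Z_b = F.normalize(torch.randn(B, D), dim=1)   # positive views
Z_n = F.normalize(torch.randn(B, D), dim=1)   # random negative views
o_z = torch.cat([Z_a, Z_b, Z_n]).mean(dim=0, keepdim=True)   # batch center vector

L_tri     = -(Z_a * (Z_b - Z_n).detach()).sum(dim=1).mean()            # full G_e = -Z_n
L_only_oe = -(Z_a * (Z_b - o_z).detach()).sum(dim=1).mean()            # keep only o_e = -o_z
L_only_re = -(Z_a * (Z_b - (Z_n - o_z)).detach()).sum(dim=1).mean()    # keep only r_e = -(Z_n - o_z)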
3.4 DECOMPOSED GRADIENT ANALYSIS IN SIMSIAM
It is challenging to derive the gradient on the encoder output in SimSiam due to a nonlinear MLP module in h. The negative gradient on Pa for LSimSiam in Eq 2 can be derived as
G_{SimSiam} = -\frac{\partial L_{SimSiam}}{\partial P_a} = Z_b = P_b + (Z_b - P_b) = P_b + G_e,   (4)
o_e   r_e   Collapse   Top-1 (%)
X     X     ×          66.62
X     ×     ×          48.08
×     X     ×          66.15
×     ×     X          1
Table 4: Gradient component analysis for SimSiam (X indicates the component is kept, or that collapse occurs; × indicates the opposite).
where Ge indicates the aforementioned extra gradient component. To investigate the influence of oe and re on the collapse, similar to the analysis with the toy example experiment in Sec. 3.3, we design the experiment by removing one component while keeping the other. The results are reported in Table 4. As expected, the model collapses when both components in Ge are removed and the best performance is achieved when both components are kept. Interestingly, the model does not collapse when
either oe or re is kept. To start, we analyze how oe affects the collapse based on Conjecture1.
How o_e alleviates collapse in SimSiam. Here, o_p is used to denote the center vector of P, to differentiate it from the above introduced o_z denoting that of Z. In this setup G_e = Z_b - P_b, thus the center component of the extra gradient is derived to be o_e = o_z - o_p. With Conjecture1, it is well expected that o_e helps prevent collapse if o_e contains negative o_p, since the analyzed vector is P_a. To determine the amount of the component o_p existing in o_e, we measure the cosine similarity between o_e - \eta_p o_p and o_p for a wide range of \eta_p. The results in Fig. 5 (a) show that their cosine similarity is zero when \eta_p is around -0.5, suggesting o_e contains \approx -0.5 o_p. With Conjecture1, this negative \eta_p explains why SimSiam avoids collapse from the perspective of de-centering.
How o_e causes collapse in Mirror SimSiam. As mentioned above, the collapse occurs in Mirror SimSiam, which can also be explained by analyzing its o_e. Here, o_e = o_p - o_z, for which we evaluate the amount of the component o_z existing in o_e via reporting the similarity between o_e - \eta_z o_z and o_z. The results in Fig. 5 (a) show that their cosine similarity is zero when \eta_z is set to around 0.2. This positive \eta_z explains why Fig. 1 (c) causes collapse from the perspective of de-centering.
¹Note that the triplet loss here does not have the clipping form as in Schroff et al. (2015), for simplicity.
Overall, we find that processing the optimization target with h−1, as in Fig. 2 (c), alleviates collapse (ηp ≈ −0.5), while processing it with h, as in Fig. 1(c), actually strengthens the collapse (ηz ≈ 0.2). In other words, via the analysis of oe, our results help explain how SimSiam avoids collapse as well as how Mirror SimSiam causes collapse from a straightforward de-centering perspective.
Relation to prior works. Motivated from preventing the collapse to a constant, multiple prior works, such as W-MSE (Ermolov et al., 2021), Barlow-twins (Zbontar et al., 2021), DINO (Caron et al., 2021), explicitly adopt de-centering to prevent collapse. Despite various motivations, we find that they all implicitly introduce an oe that contains a negative center vector. The success of their approaches aligns well with our Conjecture1 as well as our above empirical results. Based on our findings, we argue that the effect of de-centering can be perceived as oe having a negative center vector. With this interpretation, we are the first to demonstrate that how SimSiam with predictor and stop gradient avoids collapse can be explained from the perspective of de-centering.
Beyond de-centering for avoiding collapse. In the toy example experiment in Sec. 3.3, re is found to be not beneficial for preventing collapse and keeping re even decreases the performance. Interestingly, as shown in Table 4, we find that re alone is sufficient for preventing collapse and achieves comparable performance as Ge. This can be explained from the perspective of dimensional de-correlation, which will be discussed in Sec. 3.5.
3.5 DIMENSIONAL DE-CORRELATION HELPS PREVENT COLLAPSE
Conjecture2 and motivation. We conjecture that dimensional de-correlation increases m_r for preventing collapse. The motivation is straightforward. The dimensional correlation would be minimal if only a single dimension has a very high value for every individual class and that dimension changes for different classes. In the other extreme case, all the dimensions have the same values, which is equivalent to having a single dimension and already constitutes a collapse by itself in the sense of losing representation capacity. Conceptually, r_e has no direct influence on the center vector, thus we interpret that r_e prevents collapse through increasing m_r.
To verify the above conjecture, we first train SimSiam normally with the loss in Eq 2 and then train for several epochs with the loss in Eq 1 to intentionally decrease m_r to close to zero. Then, we train with only a correlation regularization term as the loss, which is detailed in Appendix A.6. The results in Fig. 5 (b) show that this regularization term increases m_r at a very fast rate.
Dimensional de-correlation in SimSiam. Assuming h has only a single FC layer (to exclude the influence of o_e), the weights in the FC layer are expected to learn the correlation between different dimensions of the encoder output. This interpretation echoes well with the finding that the eigenspace of the h weight matrix aligns well with that of the correlation matrix (Tian et al., 2021). In essence, h is trained to minimize the cosine distance between h(z_a) and I(z_b), where I is the identity mapping. Thus, an h that learns the correlation is optimized to be close to I, which is conceptually equivalent to optimizing with the goal of de-correlation for Z. As shown in Table 4, for SimSiam, r_e alone also prevents collapse, which
is attributed to the de-correlation effect since re has no de-centering effect. We observe from Fig. 6 that except in the first few epochs, SimSiam decreases the covariance during the whole training. Fig. 6 also reports the results for InfoNCE which will be discussed in Sec. 4.
4 TOWARDS A UNIFIED UNDERSTANDING OF RECENT PROGRESS IN SSL
De-centering and de-correlation in InfoNCE. The InfoNCE loss is a default choice in multiple seminal contrastive learning frameworks (Sohn, 2016; Wu et al., 2018; Oord et al., 2018; Wang & Liu, 2021). The derived negative gradient of InfoNCE on Z_a is proportional to Z_b + \sum_{i=0}^{N} -\lambda_i Z_i, where \lambda_i = \frac{\exp(Z_a \cdot Z_i / \tau)}{\sum_{j=0}^{N} \exp(Z_a \cdot Z_j / \tau)} and Z_0 = Z_b for notation simplicity. See Appendix A.7 for the detailed derivation. The extra gradient component is G_e = \sum_{i=0}^{N} -\lambda_i Z_i = -o_z - \sum_{i=0}^{N} \lambda_i r_i, for which o_e = -o_z and r_e = -\sum_{i=0}^{N} \lambda_i r_i. Clearly, o_e contains negative o_z as de-centering for avoiding collapse, which is equivalent to the toy example in Sec. 3.3 when r_e is removed. Regarding r_e, the main difference between L_{tri} in the toy example and InfoNCE is that the latter exploits a batch of negative samples instead of a random one. \lambda_i is proportional to \exp(Z_a \cdot Z_i), indicating that a large weight is put on a negative sample when it is more similar to the anchor Z_a, for which, intuitively, its dimensional values tend to have a high correlation with Z_a. Thus, r_e containing such negative representations with high weights tends to decrease dimensional correlation. To verify this intuition, we measure the cosine similarity between r_e and the gradient on Z_a induced by a correlation regularization loss. The results in Fig. 5 (c) show that their gradient similarity is high for a wide range of temperature values, especially when \tau is around 0.1 or 0.2, suggesting r_e plays a similar role to an explicit regularization loss for performing de-correlation. Replacing r_e with o_e leads to a low cosine similarity, which is expected because o_e has no de-correlation effect.
The results of InfoNCE in Fig. 6 resemble those of SimSiam in terms of the overall trend. For example, InfoNCE also decreases the covariance value during training. Moreover, we also report the results of InfoNCE where r_e is removed, to exclude the de-correlation effect. Removing r_e from the InfoNCE loss leads to a high covariance value during the whole training. Removing r_e also leads to a significant performance drop, which echoes the finding in (Bardes et al., 2021) that dimensional de-correlation is essential for competitive performance. Regarding how r_e in InfoNCE achieves de-correlation, formally, we hypothesize that the de-correlation effect in InfoNCE arises from the biased weights (\lambda_i) on negative samples. This hypothesis is corroborated by the temperature analysis in Fig. 7. We find that a higher temperature makes the weight distribution of \lambda_i more balanced, indicated by a higher entropy of \lambda_i, which echoes the finding in (Wang & Liu, 2021). Moreover, we observe that a higher temperature also tends to increase the covariance value. Overall, with the temperature as the control variable, we find that more balanced weights among negative samples decrease the de-correlation effect, which constitutes evidence for our hypothesis.
Unifying SimSiam and InfoNCE. At first sight, there is no conceptual similarity between SimSiam and InfoNCE, and this is why the community is intrigued by the success of SimSiam without negative samples. Through decomposing the Ge into oe and re, we find that for both, their oe plays the role of de-centering and their re behaves like de-correlation. In this sense, we bring two seemingly irrelevant frameworks into a unified perspective with disentangled de-centering and de-correlation.
Beyond SimSiam and InfoNCE. In SSL, there is a trend of performing explicit manipulation of de-centering and de-correlation, for which W-MSE (Ermolov et al., 2021), Barlow-twins (Zbontar et al., 2021), DINO (Caron et al., 2021) are three representative works. They often achieve performance comparable to those with InfoNCE or SimSiam. Towards a unified understanding of recent progress in SSL, our work is most similar to a concurrent work (Bardes et al., 2021). Their work is mainly inspired by Barlow-twins (Zbontar et al., 2021) but decomposes its loss into three explicit components. By contrast, our work is motivated to answer the question of how SimSiam prevents
collapse without negative samples. Their work claims that the variance component (equivalent to de-centering) is an indispensable component for preventing collapse, while we find that de-correlation itself alleviates collapse. Overall, our work helps understand various frameworks in SSL from a unified perspective, which also inspires an investigation of inter-anchor hardness-awareness Zhang et al. (2022) for further bridging the gap between CL and non-CL frameworks in SSL.
5 TOWARDS SIMPLIFYING THE PREDICTOR IN SIMSIAM
Based on our understanding of how SimSiam prevents collapse, we demonstrate that simple components (instead of a non-linear MLP in SimSiam) in the predictor are sufficient for preventing collapse. For example, to achieve dimensional de-correlation, a single FC layer might be sufficient because a single FC layer can realize the interaction among various dimensions. On the other hand, to achieve de-centering, a single bias layer might be sufficient because a bias vector can represent the center vector. Attaching an l2-normalization layer at the end of the encoder, i.e. before the predictor, is found to be critical for achieving the above goal.
Predictor with FC layers. To learn the dimensional correlation, an FC layer is theoretically sufficient but can be difficult to train in practice. Inspired by the property that multiple FC layers make the training more stable even though they can be mathematically equivalent to a single FC layer (Bell-Kligler et al., 2019), we adopt two consecutive FC layers, which is equivalent to removing the BN and ReLU in the original predictor.
The training can be made more stable if a Tanh layer is applied on the adopted single FC after every iteration. Table 5 shows that they achieve performance comparable to that with a non-linear MLP.
Predictor with a bias layer. A predictor with a single bias layer can be utilized for preventing collapse (see Table 5) and the trained bias vector is found to have a cosine similarity of 0.99 with the center vector (see Table 6). A bias in the MLP predictor also has a high cosine similarity of 0.89, suggesting that it is not a coincidence. A theoretical derivation for justifying such a
high similarity as well as how this single bias layer prevents collapse are discussed in Appendix A.8.
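A predictor reduced to a single bias layer can be sketched as below (the class name is ours; following Sec. 5 and Appendix A.8, the encoder output is l2-normalized before the bias is added):

import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasPredictor(nn.Module):
    def __init__(self, dim=2048):
        super().__init__()
        self.b_p = nn.Parameter(torch.zeros(dim))   # the bias vector b_p

    def forward(self, z):
        Z = F.normalize(z, dim=1)                   # l2-normalize the encoder output
        return Z + self.b_p                         # p = Z + b_p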
6 CONCLUSION
We point out a hidden flaw in prior works for explaining the success of SimSiam and propose to decompose the representation vector and analyze the decomposed components of the extra gradient. We find that its center vector gradient helps prevent collapse via the de-centering effect and its residual gradient achieves de-correlation, which also alleviates collapse. Our further analysis reveals that InfoNCE achieves the two effects in a similar manner, which bridges the gap between SimSiam and InfoNCE and contributes to a unified understanding of recent progress in SSL. Towards simplifying the predictor, we have also found that a single bias layer is sufficient for preventing collapse.
ACKNOWLEDGEMENT
This work was partly supported by Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under grant No.2019-001396 (Development of framework for analyzing, detecting, mitigating of bias in AI model and training data), No.2021-0-01381 (Development of Causal AI through Video Understanding and Reinforcement Learning, and Its Applications to Real Environments) and No.2021-0-02068 (Artificial Intelligence Innovation Hub). During the rebuttal, multiple anonymous reviewers provide valuable advice to significantly improve the quality of this work. Thank you all.
Network architecture and initialization: The backbone architecture is ResNet-18. The projection head contains three fully-connected (FC) layers followed by Batch Norm (BN) and ReLU, for which ReLU in the final FC layer is removed, i.e. FC1+BN+ReLU+FC2+BN+ReLU+FC3+BN . All projection FC layers have 2048 neurons for input, output as well as the hidden dimensions. The predictor head includes two FC layers as follows: FC1 + BN + ReLU + FC2. Input and output of the predictor both have the dimension of 2048, while the hidden dimension is 512. All layers of the network are by default initialized in Pytorch.
Optimizer: SGD optimizer is used for the encoder training. The batch size M is 256 and the learning rate is linearly scaled by the formula lr × M/256 with the base learning rate lr set to 0.5. The schedule for learning rate adopts the cosine decay as SimSiam. Momentum 0.9 and weight decay 1.0 × 10−5 are used for SGD. We use one GPU for each pre-training experiment. Following the practice of SimSiam, the learning rate of the predictor is fixed during the training. We use warmup training for the first 10 epochs. If not specified, by default we train the model for 1000 epochs.
Online linear evaluation: For the online linear revaluation, we also follow the practice in the solo-learn library (da Costa et al., 2021). The frozen features (2048 dimensions) from the training set are extracted (from the self-supervised pre-trained model) to feed into a linear classifier (1 FC layer with the input 2048 and output of 100). The test is performed on the validation set. The learning rate for the linear classifier is 0.1. Overall, we report Top-1 accuracy with the online linear evaluation in this work.
A.2 TWO SUB-PROBLEMS IN AO OF SIMSIAM
In the sub-problem ηt ← arg minη L(θt, η), ηt indicating latent representation of images at step t is actually obtained through ηtx ← ET [ Fθt(T (x)) ] , where they in practice ignore ET [·] and sample only one augmentation T ′, i.e. ηtx ← Fθt(T ′(x)). Conceptually, Chen & He equate the role of predictor to EOA.
A.3 EXPERIMENTAL DETAILS FOR EXPLICIT EOA IN TABLE 1
In the Moving average experiment, we follow the setting in SimSiam (Chen & He, 2021) without predictor. In the Same batch experiment, multiple augmentations, 10 augmentations for instance, are applied on the same image. With multi augmentations, we get the corresponding encoded representation, i.e. zi, i ∈ [1, 10]. We minimize the cosine distance between the first representation z1 and the average of the remaining vectors, i.e. z̄ = 19 ∑10 i=2 zi. The gradient stop is put on the averaged vector. We also experimented with letting the gradient backward through more augmentations, however, they consistently led to collapse.
A.4 EXPERIMENTAL SETUP AND RESULT TREND FOR TABLE 2.
Mirror SimSiam. Here we provide the pseudocode for Mirror SimSiam. In the Mirror SimSiam experiment which relates to Fig. 1 (c). Without taking symmetric loss into account, the pseudocode is shown in Algorithm 1. Taking symmetric loss into account, the pseudocode is shown in Algorithm 2.
Algorithm 1 Pytorch-like Pseudocode: Mirror SimSiam
# f: encoder (backbone + projector) # h: predictor
for x in loader: # load a minibatch x with n samples x_a, x_b = aug(x), aug(x) # augmentation z_a, z_b = f(x_a), f(x_b) # projections
p_b = h(z_b.detach()) # detach z_b but still allowing gradient p_b
L = D_cosine(z_a, p_b) # loss
L.backward() # back-propagate update(f, h) # SGD update
def D_cosine(z, p): # negative cosine similarity z = normalize(z, dim=1) # l2-normalize p = normalize(p, dim=1) # l2-normalize return -(z*p).sum(dim=1).mean()
Algorithm 2 Pytorch-like Pseudocode: Mirror SimSiam
# f: encoder (backbone + projector) # h: predictor
for x in loader: # load a minibatch x with n samples x_a, x_b = aug(x), aug(x) # augmentation z_a, z_b = f(x_a), f(x_b) # projections
p_b = h(z_b.detach()) # detach z_b but still allowing gradient p_b p_a = h(z_a.detach()) # detach z_a but still allowing gradient p_a
L = D_cosine(z_a, p_b)/2 + D_cosine(z_b, p_a)/2 # loss
L.backward() # back-propagate update(f, h) # SGD update
def D_cosine(z, p): # negative cosine similarity z = normalize(z, dim=1) # l2-normalize p = normalize(p, dim=1) # l2-normalize return -(z*p).sum(dim=1).mean()
Symmetric Predictor. To implement the SimSiam with Symmetric Predictor as in Fig. 2 (b), we can just perceive the predictor as part of the new encoder, for which the pseudocode is provided in Algorithm 3. Alternatively, we can additionally train the predictor similarly as that in SimSiam, for which the training involves two losses, one for training the predictor and another for training the new encoder (the corresponding pseudocode is provided in Algorithm 4). Moreover, for the second implementation, we also experiment with another variant that fixes the predictor while optimizing the new encoder and then train the predictor alternatingly. All of them lead to collapse with a similar trend as long as the symmetric predictor is used for training the encoder. For avoiding redundancy, in Fig. 8 we only report the result of the second implementation.
Result trend. The result trend of SimSiam, Naive Siamese, Mirror SimSiam, Symmetric Predictor are shown in Fig. 8. We observe that all architectures lead to collapse except for SimSiam. Mirroe SimSiam was stopped in the middle because a NaN value was returned from the loss.
A.5 EXPERIMENTAL DETAILS FOR INVERSE PREDICTOR.
In the inverse predictor experiment which relates to Fig. 2 (c), we introduce a new predictor which has the same structure as that of the original predictor. The training loss consists of 3 parts: predictor training loss, inverse predictor training and new encoder (old encoder+predictor) training. The new
Algorithm 3 Pytorch-like Pseudocode: Symmetric Predictor
# f: encoder (backbone + projector) # h: predictor
for x in loader: # load a minibatch x with n samples x_a, x_b = aug(x), aug(x) # augmentation z_a, z_b = f(x_a), f(x_b) # projections p_a, p_b = h(z_a), h(z_b) # predictions
L = D(p_a, p_b)/2 + D(p_b, p_a)/2 # loss
L.backward() # back-propagate update(f, h) # SGD update
def D(p, z): # negative cosine similarity z = z.detach() # stop gradient p = normalize(p, dim=1) # l2-normalize z = normalize(z, dim=1) # l2-normalize return -(p*z).sum(dim=1).mean()
Algorithm 4 Pytorch-like Pseudocode: Symmetric Predictor (with additional training on predictor)
# f: encoder (backbone + projector) # h: predictor
for x in loader: # load a minibatch x with n samples x_a, x_b = aug(x), aug(x) # augmentation z_a, z_b = f(x_a), f(x_b) # projections p_a, p_b = h(z_a), h(z_b) # predictions
d_p_a, d_p_b = h(z_a.detach()), h(z_b.detach()) # detached predictor output
# predictor training loss L_pred = D(d_p_a, z_b)/2 + D(d_p_b, z_a)/2
# encoder training loss L_enc = D(p_a, d_p_b)/2 + D(p_b, d_p_a)/2
L = L_pred + L_enc
L.backward() # back-propagate update(f, h) # SGD update
def D(p, z): # negative cosine similarity with detach on z z = z.detach() # stop gradient p = normalize(p, dim=1) # l2-normalize z = normalize(z, dim=1) # l2-normalize return -(p*z).sum(dim=1).mean()
encoder F consists of the old encoder f + predictor h. The practice of gradient stop needs to be considered in the implementation. We provide the pseudocode in Algorithm 5.
Algorithm 5 Pytorch-like Pseudocode: Trainable Inverse Predictor
# f: encoder (backbone + projector)
# h: predictor
# h_inv: inverse predictor

for x in loader:                          # load a minibatch x with n samples
    x_a, x_b = aug(x), aug(x)             # augmentation
    z_a, z_b = f(x_a), f(x_b)             # projections
    p_a, p_b = h(z_a), h(z_b)             # predictions
    d_p_a, d_p_b = h(z_a.detach()), h(z_b.detach())   # predictor outputs on detached projections

    # predictor training loss (to train h)
    L_pred = D(d_p_a, z_b)/2 + D(d_p_b, z_a)/2
    # inverse predictor training loss (to train h_inv)
    inv_p_a, inv_p_b = h_inv(p_a.detach()), h_inv(p_b.detach())
    L_inv_pred = D(inv_p_a, z_a)/2 + D(inv_p_b, z_b)/2
    # encoder training loss
    L_enc = D(p_a, h_inv(p_b))/2 + D(p_b, h_inv(p_a))/2
    L = L_pred + L_inv_pred + L_enc

    L.backward()                          # back-propagate
    update(f, h, h_inv)                   # SGD update

def D(p, z):                              # negative cosine similarity with detach on z
    z = z.detach()                        # stop gradient
    p = normalize(p, dim=1)               # l2-normalize
    z = normalize(z, dim=1)               # l2-normalize
    return -(p*z).sum(dim=1).mean()
A.6 REGULARIZATION LOSS
Following Zbontar et al. (2021), we compute the covariance regularization loss of the encoder output over the mini-batch. The pseudocode for the de-correlation loss is given in Algorithm 6.
Algorithm 6 Pytorch-like Pseudocode: De-correlation loss
# Z_a: representation matrix of shape (N, D)
# N: batch size
# D: dimension of the representation vector

Z_a = Z_a - Z_a.mean(dim=0)               # center each dimension over the batch
cov = Z_a.T @ Z_a / (N - 1)               # covariance matrix of shape (D, D)
diag = torch.eye(D)
loss = cov[~diag.bool()].pow_(2).sum() / D   # penalize the off-diagonal entries
A.7 GRADIENT DERIVATION AND TEMPERATURE ANALYSIS FOR INFONCE
With · indicating the cosine similarity between vectors, the InfoNCE loss can be expressed as
L_{InfoNCE} = -\log \frac{\exp(Z_a \cdot Z_b/\tau)}{\exp(Z_a \cdot Z_b/\tau) + \sum_{i=1}^{N} \exp(Z_a \cdot Z_i/\tau)} = -\log \frac{\exp(Z_a \cdot Z_b/\tau)}{\sum_{i=0}^{N} \exp(Z_a \cdot Z_i/\tau)},   (5)
where N indicates the number of negative samples and Z_0 = Z_b to simplify the notation. By treating Z_a \cdot Z_i as the logit in a standard cross-entropy loss, the corresponding probability for each sample is \lambda_i = \frac{\exp(Z_a \cdot Z_i/\tau)}{\sum_{j=0}^{N} \exp(Z_a \cdot Z_j/\tau)}, where i = 0, 1, 2, \ldots, N, and \sum_{i=0}^{N} \lambda_i = 1.
The negative gradient of the InfoNCE loss on the representation Z_a is

-\frac{\partial L_{InfoNCE}}{\partial Z_a} = \frac{1}{\tau}(1-\lambda_0) Z_b - \frac{1}{\tau}\sum_{i=1}^{N} \lambda_i Z_i
= \frac{1}{\tau}\Big(Z_b - \sum_{i=0}^{N} \lambda_i Z_i\Big)
= \frac{1}{\tau}\Big(Z_b - \sum_{i=0}^{N} \lambda_i (o_z + r_i)\Big)
= \frac{1}{\tau}\Big(Z_b + \big(-o_z - \sum_{i=0}^{N} \lambda_i r_i\big)\Big)
\propto Z_b + \big(-o_z - \sum_{i=0}^{N} \lambda_i r_i\big),   (6)
where the factor \frac{1}{\tau} can be absorbed into the learning rate and is omitted to simplify the discussion. With Z_b as the basic gradient, the extra gradient component is G_e = -o_z - \sum_{i=0}^{N} \lambda_i r_i, for which o_e = -o_z and r_e = -\sum_{i=0}^{N} \lambda_i r_i. When the temperature is set to a large value, \lambda_i approaches \frac{1}{N+1}, indicated by a high entropy value (see Fig. 7), and InfoNCE degenerates to a simple contrastive loss, i.e., L_{simple} = -Z_a \cdot Z_b + \frac{1}{N+1}\sum_{i=0}^{N} Z_a \cdot Z_i, which repulses every negative sample with equal force. In contrast, a relatively smaller temperature gives more relative weight, i.e., a larger \lambda_i, to negative samples that are more similar to the anchor Z_a.
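To make the effect of the temperature on the weights concrete, the following self-contained PyTorch sketch (our illustration, not part of the original experiments; the dimensionality, number of negatives, and temperature values are arbitrary assumptions) computes \lambda_i for random l2-normalized vectors and reports how close the weights are to uniform:

import torch

torch.manual_seed(0)
N = 256                                                          # number of negatives (hypothetical)
z_a = torch.nn.functional.normalize(torch.randn(128), dim=0)     # anchor representation
z_all = torch.nn.functional.normalize(torch.randn(N + 1, 128), dim=1)   # Z_0 ... Z_N

sims = z_all @ z_a                                               # cosine similarities Z_a . Z_i
for tau in [0.07, 0.2, 0.5, 1.0, 5.0]:
    lam = torch.softmax(sims / tau, dim=0)                       # weights lambda_i
    entropy = -(lam * lam.log()).sum()                           # high entropy -> near-uniform weights
    uniform_entropy = torch.log(torch.tensor(float(N + 1)))
    print(f"tau={tau:4.2f}  max weight={lam.max():.4f}  "
          f"entropy={entropy:.3f} / uniform={uniform_entropy:.3f}")

As the temperature grows, the printed entropy approaches the uniform value and the largest weight approaches 1/(N+1), matching the degenerate simple contrastive loss described above.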
The influence of the temperature on the covariance and accuracy is shown in Fig. 7 (b) and (c). We observe that a higher temperature tends to weaken the de-correlation effect, indicated by a higher covariance value, which in turn leads to a performance drop. This verifies our hypothesis regarding how r_e in InfoNCE achieves de-correlation, because a large temperature causes more balanced weights \lambda_i, which is found to weaken the de-correlation effect. For the setup, we note that the encoder is trained for 200 epochs with the default setting in Solo-learn for the SimCLR framework.
A.8 THEORETICAL DERIVATION FOR A SINGLE BIAS LAYER
With the cosine similarity defined in Eq. 7,

\mathrm{cossim}(a, b) = \frac{a \cdot b}{\|a\| \, \|b\|},   (7)

the derived gradient on the vector a is

\frac{\partial}{\partial a}\mathrm{cossim}(a, b) = \frac{b}{\|a\| \, \|b\|} - \mathrm{cossim}(a, b) \cdot \frac{a}{\|a\|^2}.   (8)
The above equation is used as a building block for the following derivations. As indicated in the main manuscript, the encoder output z_a is l2-normalized (denoted Z_a) before being fed into the predictor; thus p_a = Z_a + b_p, where b_p denotes the bias layer in the predictor. The cosine similarity loss (ignoring the symmetrization for simplicity) is
L_{cosine} = -P_a \cdot Z_b = -\frac{p_a}{\|p_a\|} \cdot \frac{z_b}{\|z_b\|}.   (9)
The gradient on pa is derived as
-\frac{\partial L_{cosine}}{\partial p_a} = \frac{z_b}{\|z_b\| \, \|p_a\|} - \mathrm{cossim}(Z_a, Z_b) \cdot \frac{p_a}{\|p_a\|^2}
= \frac{1}{\|p_a\|}\left(\frac{z_b}{\|z_b\|} - \mathrm{cossim}(Z_a, Z_b) \cdot P_a\right)
= \frac{1}{\|p_a\|}\left(Z_b - \mathrm{cossim}(Z_a, Z_b) \cdot \frac{Z_a + b_p}{\|p_a\|}\right)
= \frac{1}{\|p_a\|}\left((o_z + r_b) - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|} \cdot (o_z + r_a + b_p)\right)
= \frac{1}{\|p_a\|}\big((o_z + r_b) - m \cdot (o_z + r_a + b_p)\big)
= \frac{1}{\|p_a\|}\big((1-m)\, o_z - m\, b_p + r_b - m \cdot r_a\big),   (10)
where m = \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}.
Given that p_a = Z_a + b_p, the negative gradient on b_p is the same as that on p_a:

-\frac{\partial L_{cosine}}{\partial b_p} = -\frac{\partial L_{cosine}}{\partial p_a} = \frac{1}{\|p_a\|}\big((1-m)\, o_z - m\, b_p + r_b - m \cdot r_a\big).   (11)
We assume that the training is stable and the bias layer converges to a certain value when -\frac{\partial\, \mathrm{cossim}(Z_a, Z_b)}{\partial b_p} = 0. Thus, the converged b_p satisfies the following constraint:

\frac{1}{\|p_a\|}\big((1-m)\, o_z - m\, b_p + r_b - m\, r_a\big) = 0
\;\Longrightarrow\; b_p = \frac{1-m}{m}\, o_z + \frac{1}{m}\, r_b - r_a.   (12)
With a batch of samples, the averages of \frac{1}{m} r_b and r_a are expected to be close to 0 by the definition of the residual vector. Thus, the bias layer vector is expected to converge to

b_p = \frac{1-m}{m}\, o_z.   (13)
Rationale behind the high similarity between b_p and o_z. The above theoretical derivation shows that the parameters in the bias layer are expected to converge to a vector \frac{1-m}{m} o_z. This justifies why the empirically observed cosine similarity between b_p and o_z is as high as 0.99. Ideally it would be exactly 1; such a small deviation is expected once the training dynamics are taken into account.
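As an illustrative sanity check of this prediction (our addition, not an experiment from the paper), one can optimize only a single bias vector b_p with the cosine loss on synthetic representations that share a fixed center o_z, and then inspect the cosine similarity between the learned b_p and o_z. The dimensionality, residual scale, batch size, and learning rate below are arbitrary assumptions:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 64
o_z = F.normalize(torch.randn(d), dim=0)                # shared center vector (assumed fixed)
b_p = torch.zeros(d, requires_grad=True)                # the single bias layer
opt = torch.optim.SGD([b_p], lr=0.5)

for step in range(2000):
    # two views share the center o_z plus independent residuals, then are l2-normalized
    r_a, r_b = 0.5 * torch.randn(256, d), 0.5 * torch.randn(256, d)
    Z_a = F.normalize(o_z + r_a, dim=1)
    Z_b = F.normalize(o_z + r_b, dim=1)
    p_a = Z_a + b_p                                      # "predictor" is just the bias layer
    loss = -(F.normalize(p_a, dim=1) * Z_b.detach()).sum(dim=1).mean()   # cosine loss with stop gradient
    opt.zero_grad(); loss.backward(); opt.step()

print("cos(b_p, o_z) =", F.cosine_similarity(b_p.detach(), o_z, dim=0).item())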
Rationale behind how a single bias layer prevents collapse. Given that p_a = Z_a + b_p, the negative gradient on Z_a is
-\frac{\partial L_{cosine}}{\partial Z_a} = -\frac{\partial L_{cosine}}{\partial p_a}
= \frac{1}{\|p_a\|}\left(Z_b - \mathrm{cossim}(Z_a, Z_b) \cdot \frac{Z_a + b_p}{\|p_a\|}\right)
= \frac{1}{\|p_a\|} Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2} Z_a - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2} b_p.   (14)
Here, we highlight that since the loss -Z_a \cdot Z_a = -1 is a constant with zero gradient on the encoder, the term -\frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2} Z_a can be seen as a dummy term. Considering Eq. 13 and m = \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}, we have b_p = \left(\frac{\|p_a\|}{\mathrm{cossim}(Z_a, Z_b)} - 1\right) o_z. The above equation is thus equivalent to
-\frac{\partial L_{cosine}}{\partial Z_a} = \frac{1}{\|p_a\|} Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2} b_p
= \frac{1}{\|p_a\|} Z_b - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|^2} \left(\frac{\|p_a\|}{\mathrm{cossim}(Z_a, Z_b)} - 1\right) o_z
= \frac{1}{\|p_a\|} Z_b - \frac{1}{\|p_a\|}\left(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\right) o_z
\propto Z_b - \left(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\right) o_z.   (15)
With Z_b as the basic gradient, the extra gradient component is G_e = -\left(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\right) o_z. Given that p_a = Z_a + b_p and \|Z_a\| = 1, we have \|p_a\| < 1 only when Z_a is negatively correlated with b_p. In practice, however, Z_a and b_p are typically positively correlated to some extent due to their shared center-vector component; in other words, \|p_a\| > 1. Moreover, \mathrm{cossim}(Z_a, Z_b) is smaller than 1, thus -\left(1 - \frac{\mathrm{cossim}(Z_a, Z_b)}{\|p_a\|}\right) < 0, suggesting that G_e consists of a negative multiple of o_z and therefore has a de-centering effect. This derivation justifies why a single bias layer can help alleviate collapse.
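As a closing sanity check for this section (our addition), the analytic gradient of the cosine similarity in Eq. 8 can be compared against PyTorch autograd on random vectors:

import torch

torch.manual_seed(0)
a = torch.randn(32, requires_grad=True)
b = torch.randn(32)

cos = (a * b).sum() / (a.norm() * b.norm())     # cossim(a, b)
cos.backward()                                   # autograd gradient w.r.t. a

# analytic gradient from Eq. 8: b/(|a||b|) - cossim(a, b) * a/|a|^2
with torch.no_grad():
    analytic = b / (a.norm() * b.norm()) - cos * a / a.norm() ** 2

print("max abs difference:", (a.grad - analytic).abs().max().item())   # should be ~1e-7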
B DISCUSSION: DOES BN HELP AVOID COLLAPSE?
To our knowledge, our work is the first to revisit and refute the explanatory claims in (Chen & He, 2021). Several works, however, have attempted to demystify the success of BYOL (Grill et al., 2020), a close variant of SimSiam. The success has been ascribed to BN in (Fetterman & Albrecht, 2020); however, (Richemond et al., 2020) refutes their claim. Since the role of intermediate BNs is ascribed to stabilizing training (Richemond et al., 2020; Chen & He, 2021), we only discuss the final BN in the SimSiam encoder. Note that, under our Conjecture 1, the final BN, which removes the mean of the representation vector, is supposed to have a de-centering effect. By default SimSiam has such a BN at the end of its encoder, yet it still collapses without the predictor and stop gradient. Why does such a BN not prevent collapse in this case? Interestingly, we observe that this BN can help alleviate collapse with a simple MSE loss (see Fig. 9); however, its performance is inferior to the cosine-loss-based SimSiam (with predictor and stop gradient) due to the lack of the de-correlation effect present in SimSiam. Note that the cosine loss is in essence equivalent to an MSE loss on the l2-normalized vectors. This phenomenon can be interpreted as the l2-normalization introducing another mean after the BN removes it. Thus, with such l2-normalization in the MSE loss, i.e., when adopting the default cosine loss, it is important to remove o_e from the optimization target. The results with the loss -Z_a \cdot \mathrm{sg}(Z_b + o_e) in Table 3 show that this indeed prevents collapse and verifies the above interpretation.

1. What is the focus of the paper in terms of its contribution to the field of study?
2. What are the strengths of the proposed approach, particularly in comparison to other methods?
3. Are there any concerns or limitations regarding the method's ability to avoid collapse?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Can the reviewer think of any potential applications or future directions related to the research presented in the paper?

Summary Of The Paper
This paper aims to explore why the minimalist simple Siamese (SimSiam) method can avoid collapse. After refuting the existing explanatory claims, the authors introduce a vector decomposition for analyzing collapse based on the gradient analysis of the l2-normalized vector, yielding a unified perspective on how negative samples and the SimSiam predictor alleviate collapse.
Review
This paper is generally well-written and well-structured. It is important and inspiring to explore the reason why SimSiam can avoid collapse. The authors provide a convincing explanation and reach a unified conclusion for the recent progress in SSL, which is very insightful. The theoretical analysis and experimental results are solid. |
ICLR | Title
Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network for Multi-Modality Brain Tumor Translation
Abstract
Multi-modality brain tumor images are widely used for clinical diagnosis since they can provide complementary information. Yet, due to considerations such as time, cost, and artifacts, it is difficult to get fully paired multi-modality images. Therefore, most of the brain tumor images are modality-missing in practice and only a few are labeled, due to a large amount of expert knowledge required. To tackle this problem, multi-modality brain tumor image translation has been extensively studied. However, existing works often lead to tumor deformation or distortion because they only focus on the whole image. In this paper, we propose a semi-supervised segmentation-guided tumor-aware generative adversarial network called S3TAGAN , which utilizes unpaired brain tumor images with few paired and labeled ones to learn an end-to-end mapping from source modality to target modality. Specifically, we train a semi-supervised segmentation network to get pseudo labels, which aims to help the model focus on the local brain tumor areas. The model can synthesize more realistic images using pseudo tumor labels as additional information to help the global translation. Experiments show that our model achieves competitive results on both quantitative and qualitative evaluations. We also verify the effectiveness of the generated images via the downstream segmentation tasks.
1 INTRODUCTION
Multi-modality medical images are widely used in various tasks such as clinical detection. There are different kinds of imaging technologies in practice. For example, magnetic resonance imaging (MRI) is a common and noninvasive imaging technique. With the help of an additional magnetic field, MRI can determine the nucleus types of a certain part of the human body and then generate structural images with high resolution.
MRI is further divided into several modalities, such as T1-weighted (T1), T1-with-contrast-enhanced (T1ce), T2-weighted (T2), and T2-fluid-attenuated inversion recovery (Flair). Each imaging modality can show complementary lesion information from a different angle. In Flair images, the cerebrospinal fluid shows hypointense signals while the lesions containing water appear as hyperintense signals. In T1 images, the cerebrospinal fluid is hypointense and tends to be black, while the gray matter is gray and the white matter is bright. Therefore, T1 images can present the anatomical structure, which is convenient for diagnoses. T1ce can show the structures and the edges of the tumors, which is convenient for observing the morphology of different types of tumors. T2 can better display the lesions because the brightness in the edema site is higher. Obviously, fully paired multi-modality images help doctors to make diagnoses more accurately.
The benefits of using multi-modality images to assist medical analysis have been widely recognized. However, due to the consideration of time, cost, artifacts, and other practical factors, physicians often get some of the modalities for examination in practice. In other words, most of the images are modality-missing, which has an adverse impact on the accuracy of physicians’ diagnoses. If we can generate the corresponding missing modalities of given images by image translation, physicians can get more comprehensive information for diagnoses.
Many existing methods for multi-modality image translation based on deep learning have achieved good results on natural images. However, when applied to medical images, especially brain tumor images, the results are often unsatisfactory. Compared with two-dimensional natural images, three-dimensional medical images contain more structural information. Moreover, due to patient privacy, a large number of medical images collected by different institutions are private, which increases the difficulty of model training. In addition, the hierarchical structures of brain tumors are complex and irregular, which leads to blur or deformation in image translation. Therefore, the translation of brain tumor images has always been a challenge in the field of medical image translation.
To solve the problem of local distortion or blur in brain tumor image translation, we propose to use pseudo-labels generated by a segmentation network to guide the translation. The model contains a global branch and a local branch. For a given source image of arbitrary modality, we first put it into the segmentation network to get pseudo labels of three kinds of tumors: whole tumor, tumor core, and enhancing tumor. Then the source image is fed into the global branch, and the element-wise products of the image with the three pseudo labels are fed into the local branch. In this way, the translation network can focus on the different parts of the tumors. Since the training data are mostly unpaired and only a few of them are labeled and paired, we train the segmentation network with the semi-supervised method proposed in CPS (Chen et al., 2021b). For paired images with ground truth, we use an L1 loss as a further constraint. The segmentation network and the translation network are trained at the same time to promote each other. Furthermore, in order to make our model applicable to images of arbitrary modality, similar to StarGAN (Choi et al., 2018), the discriminator tries not only to distinguish whether the images are real or fake, but also to judge the modality to which they belong. In this way, we do not need to train a separate network for each modality, but only a unified model that covers all cases.
In this way, we achieve an end-to-end translation: given a brain tumor image of arbitrary modality together with the source and target modality vectors, the model directly outputs the final target image without any other manual intervention. We name our model the Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network (S3TAGAN).
In summary, the main contributions of this paper are as follows:
• We propose a Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network, named S3TAGAN , which is guided by different parts of tumors and improves the translation effectiveness using unpaired brain tumor images with few paired and labeled ones. We also propose a local consistency loss to preserve the anatomical structure of the tumors.
• We show qualitative and quantitative results in the multi-modality translation task on the BRATS 2020 dataset. Our model achieves better results compared with the state-of-the-art methods. We also verify the quality of the generated images through downstream segmentation tasks.
2 RELATED WORKS
Cross-modality image translation has been intensively studied in recent years. For instance, Pix2pix (Isola et al., 2017) provides a solution for generating images from a given source modality to a given target modality based on cGAN (Mirza & Osindero, 2014). However, it requires paired data for training, which is hard to obtain. Therefore, how to achieve unsupervised image translation by utilizing unpaired data has attracted the interest of many researchers. CycleGAN (Zhu et al., 2017) and DiscoGAN (Kim et al., 2017) propose a cycle consistency loss, which attempts to preserve the crucial information of the images. By constraining the reconstructed image to match the source image, the model is able to translate images between the two given modalities with unpaired data. UNIT (Liu et al., 2017) argues that the essence of image translation lies in estimating the joint distribution from the marginal distributions of images in the two known domains. Since there may be infinitely many joint distributions corresponding to two marginal distributions, some additional assumptions must be added. UNIT assumes that the two modalities share the same latent space, and proposes to combine VAE and GAN to form a more robust generative model. The encoder maps the images of different domains to the same distribution to obtain the latent code, and then the decoder maps the
latent code back to the image domain. However, the images generated by the above models do not have style diversity. For a given image, the generated target modal image is unique. In order to solve this problem, MUNIT(Huang et al., 2018) and DRIT(Lee et al., 2018) disentangle the latent code into the content code which is shared by different modalities and the style code which is unique for different modalities and restricted to normal distribution. In this way, the style code can be obtained by style encoder or sampling, so that the image of the target modalities can be various.
However, the above models can only translate images between two modalities. If we want to translate images among n modalities, we need to train the model n(n − 1)/2 times. In order to translate multi-modality images with a unified model, StarGAN (Choi et al., 2018) proposed a single generator to learn the mapping between any two given modalities. The source images and mask vectors are input to the generator, which then outputs the generated target images. The discriminator needs not only to distinguish whether the images are real or fake, but also to classify the domain to which they belong. Since every mask vector corresponds to a given condition, the generated images are simplex without style diversity. StarGAN v2 (Choi et al., 2020) uses a variable style code to replace the mask vectors on this basis, and the generated target images of each modality have different styles. DRIT++ (Lee et al., 2020) also adds a domain code for translation so that images of any target modality can be generated by a unified generator.
Although the above models can achieve multi-modality translation, they cannot focus on local targets but only on the whole image. Zhang et al. (2018b) point out that, for unsupervised learning, the cycle consistency loss easily leads to local deformation of the image if there are no other constraints. InstaGAN (Mo et al., 2018) proposed to add segmentation labels of local instances as additional input information so that the network pays more attention to the shape of the local instances during training, which reduces the deformation. DUNIT (Bhattacharjee et al., 2020) and INIT (Shen et al., 2019) respectively propose to use object detection and segmentation to assist translation. EaGANs (Yu et al., 2019) proposes to integrate edge maps that contain critical textural information to boost synthesis quality. TC-MGAN (Xin et al., 2020) introduces a multi-modality tumor consistency loss to preserve the critical tumor information in the generated target images, but it can only translate images from the T2 modality to the other MR modalities. TarGAN (Chen et al., 2021a) can focus on the target area by using a segmentation network, but it obtains unsatisfactory results on brain tumor datasets. While these models can translate images more effectively, they also require more supervised information.
Some of the above methods can only translate images between two given modalities, and some require paired and labeled data for training, which is inconsistent with practical application scenarios in which most data are unpaired. We propose S3TAGAN to learn an end-to-end mapping from an arbitrary source modality to a given target modality, which can focus on the local tumor areas and translate better by using unpaired images with only a few paired and labeled ones.
3 METHOD
In this section, we first describe our framework and the pipeline of our approach, then we define the training objective functions.
3.1 FRAMEWORK AND PIPELINE
Given an image I_s from the source modality, we first put it into the segmentation network to obtain three pseudo labels: whole tumor, tumor core, and enhancing tumor. Then we multiply the source image with the three pseudo labels respectively to get the source tumor images T_s, which contain only the tumor areas. Given the source modality vector s and an arbitrary target modality vector t, we aim to train a generator that can translate the source whole image I_s and the source tumor images T_s into the target whole image I_t and the target tumor images T_t. The mapping is denoted as (I_t, T_t) = G(I_s, T_s, s, t). Note that the segmentation network is only required during the training process; only I_s, s, and t are used during inference, which is denoted as I_t = G(I_s, s, t). The framework of the model is shown in Figure 1.
Generator. The generator is comprised of two encoder-decoder pairs, one for the global branch and the other for the local branch. The global decoder receives the feature encoded by the global encoder and generates the target whole image It while the local decoder receives the features from both the global encoder and the local encoder to generate the target tumor images Tt. The generator translates the target whole image It and its corresponding tumor images T ′t to the reconstructed whole image I ′s and tumor images T ′ s. In this way, a cycle training process is accomplished.
Discriminator. We use two discriminators to distinguish the reality of the images and the modality to which they belong. The discriminator D_g is responsible for the whole images in the global branch, and the discriminator D_l is responsible for the tumor images in the local branch.

Segmentation network. Given an image and its corresponding modality vector, the segmentation network generates three pseudo labels for the three kinds of tumors, which are binary masks representing the foreground and background of the tumors. We then compute the tumor images as the element-wise product of the whole image and the three pseudo labels. Taking the source image I_s and its modality vector s as an example, the mapping is denoted as T_s = I_s ∗ S(I_s, s). Because only a small proportion of the data is labeled, we train the segmentation network with the semi-supervised method proposed in CPS (Chen et al., 2021b). The generated target image I_t is also put through the segmentation network in the same way as I_s, which is denoted as T'_t = I_t ∗ S(I_t, t).
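To make the data flow concrete, the following sketch (our illustration; the shapes, one-hot modality encoding, and the random mask stub are assumptions, not the authors' released code) shows how the tumor images are obtained from the segmentation masks and what the generator call signatures look like:

import torch
import torch.nn.functional as F

B, H, W = 4, 128, 128                       # batch of 128x128 slices (assumed sizes)
n_mod = 4                                    # T1, T1ce, T2, Flair

I_s = torch.rand(B, 1, H, W)                 # source images, one channel per slice
s = F.one_hot(torch.tensor([0, 1, 2, 3]), n_mod).float()   # source modality vectors
t = F.one_hot(torch.tensor([1, 2, 3, 0]), n_mod).float()   # target modality vectors

# S(I_s, s) would return three binary masks (WT, TC, ET); a random stub is used here
masks = (torch.rand(B, 3, H, W) > 0.5).float()

# T_s = I_s * S(I_s, s): broadcasting keeps only the three tumor regions
T_s = I_s * masks                            # shape (B, 3, H, W)

# training-time generator call (signature only): I_t, T_t = G(I_s, T_s, s, t)
# inference-time call:                          I_t       = G(I_s, s, t)
print(I_s.shape, T_s.shape, s.shape, t.shape)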
3.2 TRAINING OBJECTIVE FUNCTIONS
Adversarial loss. Adversarial loss can make the images generated by the generator more realistic to confuse the discriminator. The traditional adversarial losses for the global branch and the local branch are defined as follows:
L_{adv_g} = E_{I_s}[\log D^{src}_g(I_s)] + E_{I_t}[\log(1 - D^{src}_g(I_t))],   (1)
L_{adv_l} = E_{T_s}[\log D^{src}_l(T_s)] + E_{T_t}[\log(1 - D^{src}_l(T_t))].   (2)
Taking the global branch as an example, D^{src}_g(I_s) represents the probability that the discriminator considers the image I_s real, and D^{src}_g(I_t) represents the probability that the discriminator considers the generated image real. In order to correctly distinguish real from generated images, the discriminator aims to minimize D^{src}_g(I_t) and D^{src}_l(T_t), while the generator, on the other hand, aims to maximize these terms to confuse the discriminator.
Since the traditional adversarial loss may lead to unstable adversarial learning, WGAN(Arjovsky et al., 2017) proposes a new adversarial loss, which solves the problem of instability of the training process in GAN, reduces the problem of mode collapse to a large extent, and ensures the diversity of generated samples. WGAN-GP(Gulrajani et al., 2017) proposes to use a gradient penalty strategy instead of the weight clipping strategy in WGAN, which makes the training process in GAN more stable and improves the quality of generated images. The final adversarial losses are shown as follows:
L_{adv_{D_g}} = \lambda_{gp} E_{\alpha, s, I_s}\big[(\|\nabla D^{src}_g(\alpha I_s + (1-\alpha) I_t)\|_2 - 1)^2\big] - E_{I_s}[D^{src}_g(I_s)],   (3)
L_{adv_{D_l}} = \lambda_{gp} E_{\alpha, s, T_s}\big[(\|\nabla D^{src}_l(\alpha T_s + (1-\alpha) T_t)\|_2 - 1)^2\big] - E_{T_s}[D^{src}_l(T_s)],   (4)
L_{adv_{G_g}} = E_{I_s, s}[D^{src}_g(I_t)],   (5)
L_{adv_{G_l}} = E_{T_s, s}[D^{src}_l(T_t)],   (6)
where \lambda_{gp} is set to 1 and \alpha is a random number sampled uniformly from [0, 1] in this paper.
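A minimal PyTorch sketch of the gradient-penalty term in Eqs. 3-4 is given below (our illustration; the stand-in discriminator and tensor shapes are placeholders, while \lambda_{gp} = 1 and \alpha \sim U[0, 1] follow the values stated above):

import torch

def gradient_penalty(D, real, fake, lambda_gp=1.0):
    """WGAN-GP penalty: (||grad_x D(x_hat)||_2 - 1)^2 on interpolated samples x_hat."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)      # per-sample alpha ~ U[0, 1]
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_out = D(x_hat)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=x_hat, create_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# toy stand-in discriminator and image batches (shapes are illustrative only)
D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(1 * 32 * 32, 1))
real, fake = torch.rand(8, 1, 32, 32), torch.rand(8, 1, 32, 32)
gp = gradient_penalty(D, real, fake)
d_loss = gp - D(real).mean()          # follows the form of Eq. 3 (penalty plus critic term on real images)
print(float(gp), float(d_loss))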
Modality classification loss. Given an image and its target modality vector, we hope the generator can generate images that are as close to the target modality as possible. Similar to StarGAN, the discriminator aims to judge the modality to which the images belong; the difference is that we add an extra discriminator for the local branch. For real images, we define the modality classification loss as follows:
L^{r\_cls}_{D_g} = E_{I_s}[-\log D^{cls}_g(s \mid I_s)],   (7)
L^{r\_cls}_{D_l} = E_{T_s}[-\log D^{cls}_l(s \mid T_s)],   (8)
where D^{cls}_g(s \mid I_s) represents a probability distribution over the modality vector for the whole images and D^{cls}_l(s \mid T_s) represents the corresponding one for the tumor images. Similarly, we define the modality classification loss for fake images as follows:
L^{f\_cls}_{D_g} = E_{I_s, s}[-\log D^{cls}_g(t \mid I_t)],   (9)
L^{f\_cls}_{D_l} = E_{T_s, s}[-\log D^{cls}_l(t \mid T_t)].   (10)
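The modality classification terms are standard cross-entropy losses on the discriminator's modality head; a small sketch (our illustration, with a placeholder classification head and shapes) is:

import torch
import torch.nn.functional as F

n_mod = 4
# D_cls: stand-in modality-classification head producing logits over the 4 modalities
D_cls = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(1 * 32 * 32, n_mod))

real_imgs = torch.rand(8, 1, 32, 32)
fake_imgs = torch.rand(8, 1, 32, 32)
src_labels = torch.randint(0, n_mod, (8,))      # source modality s of the real images
tgt_labels = torch.randint(0, n_mod, (8,))      # target modality t of the generated images

# Eq. 7: -log D_cls(s | I_s), computed on real images (discriminator update)
L_rcls = F.cross_entropy(D_cls(real_imgs), src_labels)
# Eq. 9: -log D_cls(t | I_t), computed on generated images (generator update)
L_fcls = F.cross_entropy(D_cls(fake_imgs), tgt_labels)
print(float(L_rcls), float(L_fcls))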
Local consistency loss. T_t represents the generated tumor images and T'_t represents the tumor areas of the generated whole image I_t. Since we hope the segmentation network can better guide the translation of the global branch, we constrain the similarity of these two to alleviate the problem of distortion in brain tumor image translation. Similarly, the reconstructed tumor images T'_s and the source tumor images T_s are supposed to be similar. We propose a local consistency loss as an extra constraint to improve the translation effect, defined as follows:
L_{local} = E[\|T_t - T'_t\|_1] + E[\|T_s - T'_s\|_1].   (11)

Reconstruction loss. The model can translate the source image I_s into an image I_t of any modality. However, this alone does not guarantee that the generated image I_t changes only the style information while still containing all the content information of the source image I_s. To solve this problem, we feed I_t into the translation network for a cycle translation to obtain the reconstructed image I'_s. If I'_s is consistent with I_s, the content information is not lost during translation. The reconstruction loss is defined as follows:
L_{rec} = E[\|I_s - I'_s\|_1].   (12)
Identity mapping loss. Given an image of arbitrary modality, if the target modality happens to be its source modality, we denote the mapping as (I_{idt}, T_{idt}) = G(I_s, T_s, s, s). We expect the generated images to be as consistent as possible with the source images. We use an identity mapping loss to enforce that the generated images do not lose the original information, defined as follows:
L_{idt} = E[\|I_s - I_{idt}\|_1] + E[\|T_s - T_{idt}\|_1].   (13)
Semi-supervised loss. For images that are paired and labeled, we further use the ground truth to constrain the generated tumor images to alleviate the problem of local image deformation. We define the semi-supervised loss as follows:
L_{ss} = E[\|T_t - GT\|_1],   (14)

where GT represents the ground truth of the target tumor images.
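The image-level terms in Eqs. 11-14 are plain L1 distances; a compact sketch with placeholder tensors (the shapes and variable names are ours, not the authors'):

import torch
import torch.nn.functional as F

B, H, W = 4, 128, 128
I_s, I_s_rec, I_idt = torch.rand(3, B, 1, H, W)           # source, reconstructed, and identity whole images
T_s, T_s_rec = torch.rand(2, B, 3, H, W)                   # source tumor images and their reconstruction
T_t, T_t_from_seg, T_idt, GT = torch.rand(4, B, 3, H, W)   # generated tumors, seg. of I_t, identity tumors, ground truth

L_local = F.l1_loss(T_t, T_t_from_seg) + F.l1_loss(T_s, T_s_rec)   # Eq. 11
L_rec   = F.l1_loss(I_s, I_s_rec)                                   # Eq. 12
L_idt   = F.l1_loss(I_s, I_idt) + F.l1_loss(T_s, T_idt)             # Eq. 13
L_ss    = F.l1_loss(T_t, GT)                                         # Eq. 14, only on paired/labeled samples
print(float(L_local), float(L_rec), float(L_idt), float(L_ss))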
Total loss. Combining all the losses mentioned above, we finally define the objective functions as follows:

L_D = \lambda^{adv}_{D_g} L_{adv_{D_g}} + \lambda^{adv}_{D_l} L_{adv_{D_l}} + \lambda^{cls}_{D_g} L^{r\_cls}_{D_g} + \lambda^{cls}_{D_l} L^{r\_cls}_{D_l},   (15)

L_G = \lambda^{adv}_{G_g} L_{adv_{G_g}} + \lambda^{adv}_{G_l} L_{adv_{G_l}} + \lambda^{cls}_{D_g} L^{f\_cls}_{D_g} + \lambda^{cls}_{D_l} L^{f\_cls}_{D_l} + \lambda_{rec} L_{rec} + \lambda_{local} L_{local} + \lambda_{idt} L_{idt} + \lambda_{ss} L_{ss},   (16)

where \lambda^{adv}_{D_g}, \lambda^{adv}_{D_l}, \lambda^{cls}_{D_g}, \lambda^{cls}_{D_l}, \lambda_{rec}, \lambda_{local}, \lambda_{idt}, and \lambda_{ss} are hyper-parameters that balance the losses. We set \lambda^{adv}_{D_g}, \lambda^{adv}_{D_l}, \lambda^{cls}_{D_g}, \lambda^{cls}_{D_l} to 1.0 and \lambda_{rec}, \lambda_{local}, \lambda_{idt}, \lambda_{ss} to 10.0 in this paper.
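Assembled together, the objectives in Eqs. 15-16 are weighted sums; a minimal sketch with placeholder scalar losses and the weights stated above:

# weights as stated above: adversarial/classification terms at 1.0, image-level terms at 10.0
w = dict(adv=1.0, cls=1.0, rec=10.0, local=10.0, idt=10.0, ss=10.0)

# placeholder scalar losses; in training these come from the terms defined in Eqs. 1-14
L_adv_Dg = L_adv_Dl = L_rcls_Dg = L_rcls_Dl = 0.0
L_adv_Gg = L_adv_Gl = L_fcls_Dg = L_fcls_Dl = 0.0
L_rec = L_local = L_idt = L_ss = 0.0

L_D = w["adv"] * (L_adv_Dg + L_adv_Dl) + w["cls"] * (L_rcls_Dg + L_rcls_Dl)          # Eq. 15
L_G = (w["adv"] * (L_adv_Gg + L_adv_Gl) + w["cls"] * (L_fcls_Dg + L_fcls_Dl)
       + w["rec"] * L_rec + w["local"] * L_local + w["idt"] * L_idt + w["ss"] * L_ss)  # Eq. 16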
4 EXPERIMENTS
4.1 SETTINGS
Datasets. We conduct all our experiments on the BRATS2020 (Menze et al., 2014; Bakas et al., 2017; 2018) dataset. BRATS2020 provides brain tumor images of four modalities: T1-weighted (T1), T1-with-contrast-enhanced (T1ce), T2-weighted (T2), and T2-fluid-attenuated inversion recovery (Flair). Three kinds of tumors, named whole tumor (WT), tumor core (TC), and enhancing tumor (ET), are labeled for segmentation. We use the images of 150 patients as training samples, and 20 percent of them are treated as labeled and paired. We resize the images to 128×128. Details are given in the supplementary material.
Evaluation metrics. For the translation task, we use the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and learned perceptual image patch similarity (LPIPS) (Zhang et al., 2018a) to measure the similarity between the generated images and the ground truth. For the downstream segmentation task, we use DICE to measure the quality of the predicted labels generated by nnU-Net (Isensee et al., 2021), since nnU-Net is an acknowledged method that achieves state-of-the-art performance for medical image segmentation.
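For reference, two of these metrics can be computed directly from their definitions; the sketch below (our own, using NumPy, assuming binary masks for DICE and images scaled to [0, 1] for PSNR) is illustrative only:

import numpy as np

def dice(pred, gt, eps=1e-8):
    """DICE = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio for images in [0, max_val]."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

pred = np.random.rand(128, 128) > 0.5
gt = np.random.rand(128, 128) > 0.5
print("DICE:", dice(pred, gt))
print("PSNR:", psnr(np.random.rand(128, 128), np.random.rand(128, 128)))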
Baselines. We compare our translation results with StarGAN (Choi et al., 2018), DRIT++ (Lee et al., 2018), TarGAN (Chen et al., 2021a), and ReMIC (Shen et al., 2020). StarGAN proposes to use a unified model to translate images to arbitrary modalities. DRIT++ disentangles an image into a content code and an attribute code during training and generates images by using the content code extracted from the input image and an attribute code sampled from the standard normal distribution. TarGAN alleviates the problem of image deformation in the target area by utilizing an extra shape controller. ReMIC generates images by using multi-modality paired images. Note that we implement a semi-supervised version of ReMIC for comparison.
Implementation details. We implement PatchGAN (Isola et al., 2017) as the backbone for both the global and local discriminators, and we use U-Net as the backbone for the generator and the segmentation network. We train our model for 100 epochs, using a learning rate of 10^{-4} for the generator and both discriminators during the first 50 epochs and then linearly decaying the learning rate to 10^{-6} by the final epoch. The Adam (Kingma & Ba, 2014) optimizer is used with momentum parameters β1 = 0.9 and β2 = 0.999. We also adopt data augmentation and normalization for the training samples. Details are given in the supplementary material. All experiments are conducted with PyTorch on an NVIDIA RTX 3090.
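One way to realize the stated optimization schedule (our reading of the description, not the authors' code) is an Adam optimizer with a LambdaLR schedule that keeps the learning rate at 10^-4 for the first 50 epochs and then decays it linearly towards 10^-6:

import torch

model = torch.nn.Linear(8, 8)                      # stand-in for the generator or a discriminator
opt = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

def lr_lambda(epoch, total=100, warm=50, lr0=1e-4, lr1=1e-6):
    if epoch < warm:
        return 1.0                                  # constant 1e-4 for the first 50 epochs
    frac = (epoch - warm) / (total - warm)          # then linear decay towards 1e-6
    return (lr0 + frac * (lr1 - lr0)) / lr0

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
for epoch in range(100):
    opt.step()                                      # training steps of one epoch would go here
    sched.step()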
4.2 RESULTS
In this section, we demonstrate our translation results compared with other baseline methods. Then we verify the effectiveness of the generated images via the downstream segmentation tasks.
4.2.1 TRANSLATION RESULTS
Qualitative evaluation. Figure 2 shows the qualitative results of our model and the other baselines. StarGAN and DRIT++ generate images with checkerboard artifacts in some cases, while ReMIC and TarGAN may lead to blur or deformation in the tumor areas. Our method generates images with clearer textures and more structural information.
Quantitative evaluation. We use SSIM, PSNR and LPIPS to measure the similarity between the generated images and ground truth.
As shown in Table 1, our method obtains higher SSIM and PSNR than the other baselines. Each value is the average over all cases of mapping any source modality to an arbitrary target modality. We also use the smallest rectangle to frame the tumor areas in every generated image and calculate the SSIM and PSNR of the framed regions against their corresponding ground truth, denoted as local SSIM and local PSNR. These two metrics represent the translation effectiveness in the tumor areas. S3TAGAN obtains higher scores on both, which means that our method preserves more information in the tumor areas. LPIPS is also a metric of perceptual similarity. The
lower value of this metric indicates higher similarity, which shows that our model achieves better translation effectiveness.
4.2.2 DOWNSTREAM SEGMENTATION RESULTS
Given an image from an arbitrary modality, we translate it to the other three modalities by our model and all the baselines respectively. Then we put the fully multi-modality images generated by the above methods into nnU-Net to compare their segmentation effectiveness. We use DICE of whole tumor(WT), tumor core(TC) and enhancing tumor(ET) to measure the results. As shown in Table 2, our method achieves better performance than all the baselines, which also represents that we generate more accurate information on the tumor areas.
4.3 EFFECTIVENESS OF LOCAL BRANCH
In this section, we conduct an ablation study to validate the effectiveness of the local branch, which is guided by the segmentation network in our proposed S3TAGAN. We replace the predicted pseudo labels with the following three alternatives: (a) ground-truth labels, (b) all-zero maps, and (c) random maps in which each value is either zero or one. As shown in Table 3, the performance of S3TAGAN with ground-truth labels is the theoretical upper bound, representing the best possible guidance for local tumor translation. Note that the performance with segmentation guidance is close to this situation, which demonstrates the effectiveness of our model. S3TAGAN with all-zero maps is the theoretical lower bound, representing no segmentation-guided learning. S3TAGAN with random maps represents learning with guidance for random areas rather than the tumor areas; translation effectiveness is improved only slightly in this situation. The results show the robustness of our model.
4.4 RATIO OF PAIRED AND LABELED DATA
In order to test the effect of the ratio of paired and labeled data for semi-supervised learning on our model, we adjust the ratio to 10 percent and 100 percent. As shown in Table 4, more paired and labeled data for supervision can improve translation effectiveness. Note that the effectiveness of 20 percent supervision which is our default setting is close to 100 percent supervision.
5 CONCLUSION
In this paper, we propose a semi-supervised segmentation-guided method called S3TAGAN to translate brain tumor images, which learns a mapping between any two modalities. We use unpaired images for training with only a few paired and labeled ones, which is in agreement with the practical situation. With the guidance of the segmentation network, the local branch focuses on the brain tumor areas and alleviates the problem of deformation in the tumor areas, which benefits the quality of both the generated whole images and the generated tumor images. Experiments demonstrate that our model achieves better translation effectiveness with strong robustness. The results of the downstream segmentation task also verify the effectiveness of our model.

1. What is the focus of the paper, and what are the authors' main contributions?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its accuracy and potential usefulness to the medical community?
3. How does the reviewer assess the evaluation setup and results presented in the paper? Are there any important details missing or issues with the figures?
4. What additional quantitative and qualitative results should be added to improve the paper's clarity and reproducibility?
5. How does the reviewer evaluate the paper's novelty, and how does it compare to other works in the field?

Summary Of The Paper
The authors present S^3TAGAN, a method for image translation of brain tumor images from one modality of MR images to another. The authors evaluate their method on the BRATS2020 dataset, consisting of 150 images of four MRI modalities.
Strengths And Weaknesses
Strengths:
The authors have indeed identified an interesting problem, and designing an algorithm that advances can achieve this tasks would be of great interest, if the authors can accomplish this.
It is good that the authors consider not just the task of reconstruction but also that of segmentation. This gives further support for the promise of their work.
The ablation study on the effect of the number of paired labels is good, though it is strange that there is not much difference between 10% and 100%.
Weaknesses:
Fundamentally, I am doubtful of the utility of such a method. If the method were highly accurate, then yes, I expect, it would be of great interest to the medical community. But how accurate must this be? The authors show the relative improvement of their method over other baselines, but they do not provide a sense of what accuracy needs to be attained to be useful to the medical community. They do not give a sense of the potential promise of their method of obtaining that goal. (The LPIPS is a good surrogate measure for this, but still, the authors do not provide a sense of what range of values must be obtained to be of any interest to the medical community.)
The evaluation currently has some weaknesses. In particular, (1) it does not provide some important details about the set up and (2) some further quantitative and qualitative results should be added.
As for (1), the authors do not state how many patients (or samples) there are in total in the dataset, how many are used for testing, and also how many of the training samples are used for cross-validation and parameter tuning. In looking at the BRATS2020 website, it was hard to ascertain the total dataset size.
Also, in Fig. 2, the authors provide an "input" image and then the output of the algorithm, but should there not be an "output" image to compare against? Or have the authors mislabeled this row of the figure? There should be two types of images given for the translation problem, but only one is shown. Also, a difference image would help here, to highlight the error between the reconstruction and ground truth. As it is, it is hard to compare the images of different algorithms visually.
As to (2), the authors should consider adding other segmentation metrics. First of all, for segmentation, there is first the issue of identification of objects to be segmented (measured in the BRATS2020 competition by sensitivity and specificity). Why do the authors not provide these measures? For segmentation accuracy itself, there are other metrics beyond DICE, such as the Hausdorff distance, which is also used in the BRATS2020 competition. DICE measures overlap of the two segmentation masks, whereas Hausdorff measures the similarity of the contours of the segmentation boundaries. If the authors believe that the DICE metric alone is relevant for this task, they should argue this point, since in general, segmentation tasks consider multiple metrics.
Clarity, Quality, Novelty And Reproducibility
Overall, the paper is reasonably easy to follow. The grammar could be improved in some places, but it is not prohibitive from understanding the paper.
There are some questions about the evaluation (see above), which are not clearly explained and therefore would hinder reproducibility.
As an aside, the red boxes are helpful, but still unclear in what specifically they are trying to highlight about the image. Precise manual annotations, even if only for a few images, would be helpful to give the reader a sense of what in the tumor area is most important to accurately reconstruct.
The novelty is moderate. |
ICLR | Title
Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network for Multi-Modality Brain Tumor Translation
Abstract
Multi-modality brain tumor images are widely used for clinical diagnosis since they can provide complementary information. Yet, due to considerations such as time, cost, and artifacts, it is difficult to get fully paired multi-modality images. Therefore, most of the brain tumor images are modality-missing in practice and only a few are labeled, due to a large amount of expert knowledge required. To tackle this problem, multi-modality brain tumor image translation has been extensively studied. However, existing works often lead to tumor deformation or distortion because they only focus on the whole image. In this paper, we propose a semi-supervised segmentation-guided tumor-aware generative adversarial network called STAGAN , which utilizes unpaired brain tumor images with few paired and labeled ones to learn an end-to-end mapping from source modality to target modality. Specifically, we train a semi-supervised segmentation network to get pseudo labels, which aims to help the model focus on the local brain tumor areas. The model can synthesize more realistic images using pseudo tumor labels as additional information to help the global translation. Experiments show that our model achieves competitive results on both quantitative and qualitative evaluations. We also verify the effectiveness of the generated images via the downstream segmentation tasks.
N/A
Multi-modality brain tumor images are widely used for clinical diagnosis since they can provide complementary information. Yet, due to considerations such as time, cost, and artifacts, it is difficult to get fully paired multi-modality images. Therefore, most of the brain tumor images are modality-missing in practice and only a few are labeled, due to a large amount of expert knowledge required. To tackle this problem, multi-modality brain tumor image translation has been extensively studied. However, existing works often lead to tumor deformation or distortion because they only focus on the whole image. In this paper, we propose a semi-supervised segmentation-guided tumor-aware generative adversarial network called S3TAGAN , which utilizes unpaired brain tumor images with few paired and labeled ones to learn an end-to-end mapping from source modality to target modality. Specifically, we train a semi-supervised segmentation network to get pseudo labels, which aims to help the model focus on the local brain tumor areas. The model can synthesize more realistic images using pseudo tumor labels as additional information to help the global translation. Experiments show that our model achieves competitive results on both quantitative and qualitative evaluations. We also verify the effectiveness of the generated images via the downstream segmentation tasks.
1 INTRODUCTION
Multi-modality medical images are widely used in various tasks such as clinical detection. There are different kinds of imaging technologies in practice. For example, magnetic resonance imaging (MRI) is a common and noninvasive imaging technique. With the help of an additional magnetic field, MRI can determine the nucleus types of a certain part of the human body and then generate structural images with high resolution.
MRI is further divided into several modalities, such as T1-weighted (T1), T1-with-contrastenhanced (T1ce), T2-weighted (T2), and T2-fluid-attenuated inversion recovery (Flair). Each modality of imaging can show complement lesion information from different angles. In Flair images, the cerebrospinal fluid shows hypointense signals while the lesions containing water appear as hyperintense signals. In T1 images, the cerebrospinal fluid is hypointense which tends to be black, while the gray matter is gray and the white matter is bright. Therefore, T1 images can present the anatomical structure, which is convenient for diagnoses. T1ce can show the structures and the edges of the tumors, which is convenient to observe the morphology of different types of tumors. T2 can better display the lesions because the brightness in the edema site is higher. Obviously, fully paired multi-modality images help doctors to make diagnoses more accurately.
The benefits of using multi-modality images to assist medical analysis have been widely recognized. However, due to the consideration of time, cost, artifacts, and other practical factors, physicians often get some of the modalities for examination in practice. In other words, most of the images are modality-missing, which has an adverse impact on the accuracy of physicians’ diagnoses. If we can generate the corresponding missing modalities of given images by image translation, physicians can get more comprehensive information for diagnoses.
Many existing methods for multi-modality image translation based on deep learning have achieved good results in natural images. However, when applied to medical images, especially brain tumor images, the results are often unsatisfactory. Compared with two-dimensional natural images, threedimensional medical images have more structural information. Moreover, due to the privacy of patients, a large number of medical images collected by different institutions are private, which increases the difficulty of model training. In addition, the hierarchical structures of brain tumors are complex and irregular, which leads to blur or deformation in image translation. Therefore, the translation of brain tumor images has always been a challenge in the field of medical image translation.
To solve the problem of local distortion or blur in brain tumor image translation, we propose to use pseudo-labels generated by a segmentation network to guide the translation. The model contains a global branch and a local branch. For a given source image of arbitrary modality, we first put it into the segmentation network to get pseudo labels of three kinds of tumors, whole tumor, tumor core and enhancing tumor. Then the source image is inputted into the global branch and the dot product of it and the three pseudo labels are inputted into the local branch. In this way, the translation network can focus on the different parts of tumors. Since the training data are mostly unpaired and only few of them are labeled and paired, we train the segmentation network by the semi-supervised method proposed in CPS(Chen et al., 2021b). For paired images with Ground Truth, we use L1 loss for further constraint. The segmentation network and the translation network are trained at the same time to promote each other. Furthermore, in order to make our model applicable to images of arbitrary modality, similar to StarGAN(Choi et al., 2018), the discriminator tries to not only distinguish whether the images are real or fake, but also judge the modality which they belong to. In this way, we do not need to train a segmentation network for each modality, but only need to train a unified model to solve all cases.
In this way, we achieve an end-to-end translation, which means that given brain tumor images of arbitrary modality with both the source and target modality vectors, the model can directly output the final target images without any other manual intervention. We name our model as Semi-Supervised Segmentation-guided Tumor-Aware Generative Adversarial Network(S3TAGAN ).
In summary, the main contributions of this paper are as follows:
• We propose a Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network, named S3TAGAN , which is guided by different parts of tumors and improves the translation effectiveness using unpaired brain tumor images with few paired and labeled ones. We also propose a local consistency loss to preserve the anatomical structure of the tumors.
• We show qualitative and quantitative results in the multi-modality translation task on the BRATS 2020 dataset. Our model achieves better results compared with the state-of-the-art methods. We also verify the quality of the generated images through downstream segmentation tasks.
2 RELATED WORKS
Cross-modality image translation has been intensively studied in recent years. For instance, Pix2pix(Isola et al., 2017) provides a solution to generate images from the given source modality to the given target modality based on cGAN(Mirza & Osindero, 2014). However, it requires paired data for training, which is hard to realize. Therefore, how to achieve unsupervised image translation by utilizing unpaired data has attracted the interest of many researchers. CycleGAN(Zhu et al., 2017) and DiscoGAN(Kim et al., 2017) propose a cycle consistency loss, which attempts to preserve the crucial information of the images. By constraining the reconstructed image and the source image, the model is available to translate images between the two given modalities with unpaired data. UNIT(Liu et al., 2017) believes that the essence of image translation tasks lies in calculating the joint distribution by utilizing the edge distribution of images in two known domains. Since there may be infinite joint distributions corresponding to two marginal distributions, some additional assumptions must be added. UNIT assumes that the two modalities share the same latent space, and proposes to combine VAE and GAN to form a more robust generative model. The encoder maps the images of different domains to the same distribution to obtain the latent code, and then the decoder maps the
latent code back to the image domain. However, the images generated by the above models do not have style diversity. For a given image, the generated target modal image is unique. In order to solve this problem, MUNIT(Huang et al., 2018) and DRIT(Lee et al., 2018) disentangle the latent code into the content code which is shared by different modalities and the style code which is unique for different modalities and restricted to normal distribution. In this way, the style code can be obtained by style encoder or sampling, so that the image of the target modalities can be various.
However, the above models can only translate images between two modalities. If we want to translate images between n modalities, we need to train the model for n(n − 1)/2 times. In order to perform in a unified model to translate multi-modality images, StarGAN(Choi et al., 2018) proposed a single generator to learn the mapping between any two given modalities. The source images and mask vectors are inputted to the generator which then outputs the generated target images. The discriminator needs to not only distinguish whether the images are real or false, but also classified the domain they belong to. Since every mask vector is corresponding to a given condition, the generated images are simplex without style diversity. StarGAN v2(Choi et al., 2020) uses a variable style code to replace the mask vectors on this basis, and the generated target images of each modality have different styles. DRTI++(Lee et al., 2020) also adds domain code for translation so that any target modality images can be generated by a unified generator.
Although the above model can achieve multi-modality translation, it can not focus on local targets but only on the whole image. (Zhang et al., 2018b) propose that for unsupervised learning, cycle consistency loss will easily lead to local deformation of the image if there are no other constraints. InstaGAN(Mo et al., 2018) proposed to add segmentation labels of local instances as additional input information so that the network will pay more attention to the shape of the local instances in the training process and reduces the deformation. DUNIT(Bhattacharjee et al., 2020) and INIT(Shen et al., 2019) respectively propose to use object detection and segmentation to assist translation. EaGANs(Yu et al., 2019) proposes to integrate edge maps that contain critical textural information to boost synthesis quality. TC-MGAN(Xin et al., 2020) introduces a multi-modality tumor consistency
loss to preserve the critical tumor information in the target-generated images but it can only translate the images from the T2 modality to other MR modalities. TarGAN(Chen et al., 2021a) can focus on the target area by using a segmentation network but it gets dissatisfactory results on brain tumor datasets. While these models can translate images more effectively, they also require more supervised information.
Some of the above methods can only translate images between two given modalities, and some require paired and labeled data for training, which is not completely consistent with the practical application scenarios that most data are unpaired. We propose S3TAGAN to learn an end-to-end mapping from an arbitrary source modality to the given target modality, which can focus on the local tumor areas and translate better by using unpaired images with few paired and labeled ones.
3 METHOD
In this section, we first describe our framework and the pipeline of our approach, then we define the training objective functions.
3.1 FRAMEWORK AND PIPELINE
Given an image Is from the source modality, we first put it into the segmentation network to get three pseudo labels, whole tumor, tumor core and enhancing tumor. Then we multiply the source image with three pseudo labels respectively to get the source tumor images Ts which only contains different tumor areas. Given the source modality vector s and an arbitrary target modality vector t, we aim to train a generator that can translate the source whole image Is and the source tumor images Ts to the target whole image It and the target tumor images Tt. The mapping is denoted as: (It, Tt) = G(Is, Ts, s, t). Note that the segmentation network is only required during the training process, only Is,s and t are used during the inference process, which is denoted as: It = G(Is, s, t). The framework of the model is shown in Figure 1.
Generator. The generator is comprised of two encoder-decoder pairs, one for the global branch and the other for the local branch. The global decoder receives the feature encoded by the global encoder and generates the target whole image It while the local decoder receives the features from both the global encoder and the local encoder to generate the target tumor images Tt. The generator translates the target whole image It and its corresponding tumor images T ′t to the reconstructed whole image I ′s and tumor images T ′ s. In this way, a cycle training process is accomplished.
Discriminator. We use two discriminators to distinguish the reality of images and the modality they belong to. The discriminator Dg is responsible for the whole images in the global branch and the discriminator Dl is responsible for the tumor images in the local branch respectively. Segmentation network. Given the image and its corresponding modality vector, the segmentation network generates three pseudo labels of the three kinds of tumors, which are binary masks that represent the foreground and background of the tumors. Then we calculate the tumor images by the dot product of the whole image and three pseudo labels. Taking source image Is and its modality vector s for example, the mapping is denoted as: Ts = Is ∗ S(Is, s). On account of the poor proportion of labeled data, we train the segmentation network by a semi-supervised method proposed in CPS(Chen et al., 2021b). The generated target image It is also inputted into the segmentation network similar to Is, which is denoted as T ′t = It ∗ S(It, t).
3.2 TRAINING OBJECTIVE FUNCTIONS
Adversarial loss. The adversarial loss pushes the generator to produce more realistic images that confuse the discriminator. The traditional adversarial losses for the global branch and the local branch are defined as follows:
$\mathcal{L}^{adv}_{g} = \mathbb{E}_{I_s}[\log D^{src}_g(I_s)] + \mathbb{E}_{I_t}[\log(1 - D^{src}_g(I_t))]$,   (1)
$\mathcal{L}^{adv}_{l} = \mathbb{E}_{T_s}[\log D^{src}_l(T_s)] + \mathbb{E}_{T_t}[\log(1 - D^{src}_l(T_t))]$.   (2)
Taking the global branch as an example, D^{src}_g(I_s) is the probability that the discriminator considers the real image I_s to be real, and D^{src}_g(I_t) is the probability that it considers the generated image to be real. To correctly distinguish real from generated images, the discriminator aims to minimize D^{src}_g(I_t) and D^{src}_l(T_t), while the generator aims to maximize these terms to confuse the discriminator.
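In code, Eqs. (1)-(2) amount to binary cross-entropy terms on the discriminator's real/fake output. The sketch below uses the non-saturating form that is common in practice; it assumes d_real and d_fake are raw logits from D^{src}, which is our choice rather than something stated in the paper.

```python
import torch
import torch.nn.functional as F

def traditional_adv_losses(d_real, d_fake):
    """BCE realization of Eqs. (1)-(2) for one branch (global or local)."""
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    # Non-saturating generator objective: push D to call the fakes real.
    g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return d_loss, g_loss
```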
Since the traditional adversarial loss may lead to unstable adversarial learning, WGAN (Arjovsky et al., 2017) proposes an alternative adversarial loss that stabilizes GAN training, largely mitigates mode collapse, and preserves the diversity of generated samples. WGAN-GP (Gulrajani et al., 2017) replaces WGAN's weight clipping with a gradient penalty, which further stabilizes training and improves the quality of generated images. The final adversarial losses are as follows:
$\mathcal{L}^{adv}_{D_g} = \lambda_{gp}\,\mathbb{E}_{\alpha,s,I_s}\big[(\|\nabla D^{src}_g(\alpha I_s + (1-\alpha) I_t)\|_2 - 1)^2\big] - \mathbb{E}_{I_s}[D^{src}_g(I_s)]$,   (3)
$\mathcal{L}^{adv}_{D_l} = \lambda_{gp}\,\mathbb{E}_{\alpha,s,T_s}\big[(\|\nabla D^{src}_l(\alpha T_s + (1-\alpha) T_t)\|_2 - 1)^2\big] - \mathbb{E}_{T_s}[D^{src}_l(T_s)]$,   (4)
$\mathcal{L}^{adv}_{G_g} = \mathbb{E}_{I_s,s}[D^{src}_g(I_t)]$,   (5)
$\mathcal{L}^{adv}_{G_l} = \mathbb{E}_{T_s,s}[D^{src}_l(T_t)]$,   (6)
where λ_gp is set to 1 and α is sampled uniformly from [0, 1] in this paper.
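The gradient-penalty term in Eqs. (3)-(4) can be computed as below. This is a generic WGAN-GP sketch, not the authors' code; d_src stands for the real/fake head of either discriminator.

```python
import torch

def gradient_penalty(d_src, real, fake, lambda_gp=1.0):
    """WGAN-GP penalty on interpolates between real and generated images."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (alpha * real + (1.0 - alpha) * fake.detach()).requires_grad_(True)
    out = d_src(mixed)
    grads = torch.autograd.grad(outputs=out.sum(), inputs=mixed,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```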
Modality classification loss. Given an image and its target modality vector, we want the generator to produce images that are as close to the target modality as possible. As in StarGAN, the discriminator also judges which modality an image belongs to; the difference is that we add an extra discriminator for the local branch. For real images, we define the modality classification loss as follows:
$\mathcal{L}^{r\,cls}_{D_g} = \mathbb{E}_{I_s}[-\log D^{cls}_g(s \mid I_s)]$,   (7)
$\mathcal{L}^{r\,cls}_{D_l} = \mathbb{E}_{T_s}[-\log D^{cls}_l(s \mid T_s)]$,   (8)
where D^{cls}_g(s|I_s) denotes a probability distribution over the modality labels for the whole images and D^{cls}_l(s|T_s) denotes the corresponding distribution for the tumor images. Similarly, we define the modality classification loss for fake images as follows:
$\mathcal{L}^{f\,cls}_{D_g} = \mathbb{E}_{I_s,s}[-\log D^{cls}_g(t \mid I_t)]$,   (9)
$\mathcal{L}^{f\,cls}_{D_l} = \mathbb{E}_{T_s,s}[-\log D^{cls}_l(t \mid T_t)]$.   (10)
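Since the modality vectors are one-hot, Eqs. (7)-(10) reduce to a cross-entropy on the classifier head. A minimal sketch follows; the integer-label encoding is our assumption.

```python
import torch.nn.functional as F

def modality_cls_loss(cls_logits, modality_idx):
    """Eqs. (7)-(10): cls_logits has shape (B, 4); modality_idx holds the index
    of the source modality (real images, D update) or of the target modality
    (generated images, G update)."""
    return F.cross_entropy(cls_logits, modality_idx)
```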
Local consistency loss. T_t denotes the generated tumor images and T'_t denotes the tumor areas of the generated whole image I_t. Since we want the segmentation network to better guide the translation of the global branch, we constrain these two to be similar, which alleviates distortion in brain tumor image translation. Similarly, the reconstructed tumor images T'_s should match the source tumor images T_s. We therefore propose a local consistency loss as an extra constraint:
$\mathcal{L}_{local} = \mathbb{E}[\|T_t - T'_t\|_1] + \mathbb{E}[\|T_s - T'_s\|_1]$.   (11)
Reconstruction loss. The model can translate the source image I_s into an image I_t of any modality. However, this does not guarantee that I_t only changes the style information while retaining all the content information of I_s. To address this, we feed I_t back into the translation network for cycle translation and obtain the reconstructed image I'_s. If I'_s is consistent with I_s, no content information is lost during translation. The reconstruction loss is defined as follows:
$\mathcal{L}_{rec} = \mathbb{E}[\|I_s - I'_s\|_1]$.   (12)
Identity mapping loss. Given an image of arbitrary modality, if the target modality happens to be its source modality, we denote the mapping as (I_idt, T_idt) = G(I_s, T_s, s, s). We want the generated images to be as consistent as possible with the source images. We use an identity mapping loss to keep the generated images from losing the original information, defined as follows:
$\mathcal{L}_{idt} = \mathbb{E}[\|I_s - I_{idt}\|_1] + \mathbb{E}[\|T_s - T_{idt}\|_1]$.   (13)
Semi-supervised loss. For images that are paired and labeled, we further use the ground truth to constrain the generated tumor images, which alleviates local image deformation. We define the semi-supervised loss as follows:
$\mathcal{L}_{ss} = \mathbb{E}[\|T_t - GT\|_1]$,   (14)
where GT represents the ground truth of the target tumor images.
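The four pixel-wise terms in Eqs. (11)-(14) are plain L1 distances. A compact sketch, with variable names matching the notation above (gt is present only for the paired, labeled subset):

```python
import torch.nn.functional as F

def l1_terms(T_t, T_t_prime, T_s, T_s_rec, I_s, I_s_rec, I_idt, T_idt, gt=None):
    L_local = F.l1_loss(T_t, T_t_prime) + F.l1_loss(T_s, T_s_rec)   # Eq. (11)
    L_rec   = F.l1_loss(I_s, I_s_rec)                               # Eq. (12)
    L_idt   = F.l1_loss(I_s, I_idt) + F.l1_loss(T_s, T_idt)         # Eq. (13)
    L_ss    = F.l1_loss(T_t, gt) if gt is not None else 0.0         # Eq. (14)
    return L_local, L_rec, L_idt, L_ss
```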
Total loss. Combining all the losses above, we define the final objective functions as follows:
$\mathcal{L}_D = \lambda^{adv}_{D_g}\mathcal{L}^{adv}_{D_g} + \lambda^{adv}_{D_l}\mathcal{L}^{adv}_{D_l} + \lambda^{cls}_{D_g}\mathcal{L}^{r\,cls}_{D_g} + \lambda^{cls}_{D_l}\mathcal{L}^{r\,cls}_{D_l}$,   (15)
$\mathcal{L}_G = \lambda^{adv}_{G_g}\mathcal{L}^{adv}_{G_g} + \lambda^{adv}_{G_l}\mathcal{L}^{adv}_{G_l} + \lambda^{cls}_{D_g}\mathcal{L}^{f\,cls}_{D_g} + \lambda^{cls}_{D_l}\mathcal{L}^{f\,cls}_{D_l} + \lambda_{rec}\mathcal{L}_{rec} + \lambda_{local}\mathcal{L}_{local} + \lambda_{idt}\mathcal{L}_{idt} + \lambda_{ss}\mathcal{L}_{ss}$,   (16)
where $\lambda^{adv}_{D_g}$, $\lambda^{adv}_{D_l}$, $\lambda^{cls}_{D_g}$, $\lambda^{cls}_{D_l}$, $\lambda_{rec}$, $\lambda_{local}$, $\lambda_{idt}$ and $\lambda_{ss}$ are hyper-parameters that balance the losses. We set $\lambda^{adv}_{D_g}$, $\lambda^{adv}_{D_l}$, $\lambda^{cls}_{D_g}$ and $\lambda^{cls}_{D_l}$ to 1.0 and $\lambda_{rec}$, $\lambda_{local}$, $\lambda_{idt}$ and $\lambda_{ss}$ to 10.0 in this paper.
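Assembling Eqs. (15)-(16) with the stated weights could look like the following; the dictionary keys are illustrative names for the individual terms defined above, not identifiers from the authors' code.

```python
def total_losses(terms, w_adv=1.0, w_cls=1.0, w_rec=10.0,
                 w_local=10.0, w_idt=10.0, w_ss=10.0):
    """Weighted sums realizing Eqs. (15)-(16)."""
    L_D = (w_adv * (terms["adv_Dg"] + terms["adv_Dl"])
           + w_cls * (terms["rcls_Dg"] + terms["rcls_Dl"]))
    L_G = (w_adv * (terms["adv_Gg"] + terms["adv_Gl"])
           + w_cls * (terms["fcls_Dg"] + terms["fcls_Dl"])
           + w_rec * terms["rec"] + w_local * terms["local"]
           + w_idt * terms["idt"] + w_ss * terms["ss"])
    return L_D, L_G
```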
4 EXPERIMENTS
4.1 SETTINGS
Datasets. We conduct all our experiments on the BRATS2020 dataset (Menze et al., 2014; Bakas et al., 2017; 2018). BRATS2020 provides brain tumor images of four modalities: T1-weighted (T1), T1-with-contrast-enhanced (T1ce), T2-weighted (T2) and T2-fluid-attenuated inversion recovery (Flair). Three kinds of tumor regions, namely Whole Tumor (WT), Tumor Core (TC) and Enhancing Tumor (ET), are labeled for segmentation. We use 150 patients' images as training samples, of which 20 percent are treated as labeled and paired. We resize the images to 128×128. Details are given in the supplementary material.
Evaluation metrics. For the translation task, we use the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR) and learned perceptual image patch similarity (LPIPS) (Zhang et al., 2018a) to measure the similarity between the generated images and the ground truth. For the downstream segmentation task, we use the DICE score of the labels predicted by nnU-Net (Isensee et al., 2021), since nnU-Net is a widely acknowledged method that achieves state-of-the-art performance for medical image segmentation.
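These metrics can be computed per generated slice roughly as follows, assuming images scaled to [0, 1]; the lpips package (Zhang et al., 2018a) expects 3-channel inputs in [-1, 1], so the single MR channel is repeated. This is our sketch of such an evaluation, not the authors' script.

```python
import torch
import lpips                                                  # pip install lpips
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

lpips_fn = lpips.LPIPS(net="alex")

def translation_metrics(fake, real):
    """fake/real: 2-D float numpy arrays in [0, 1] (one MR slice each)."""
    ssim = structural_similarity(real, fake, data_range=1.0)
    psnr = peak_signal_noise_ratio(real, fake, data_range=1.0)
    to_t = lambda x: (torch.from_numpy(x).float()[None, None]
                      .repeat(1, 3, 1, 1) * 2.0 - 1.0)
    lp = lpips_fn(to_t(fake), to_t(real)).item()
    return ssim, psnr, lp
```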
Baselines. We compare our translation results with StarGAN (Choi et al., 2018), DRIT++ (Lee et al., 2018), TarGAN (Chen et al., 2021a) and ReMIC (Shen et al., 2020). StarGAN uses a unified model to translate images to arbitrary modalities. DRIT++ disentangles an image into a content code and an attribute code during training and generates images using the content code extracted from the input image and an attribute code sampled from the standard normal distribution. TarGAN alleviates image deformation in the target area by utilizing an extra shape controller. ReMIC generates images by using multi-modality paired images. Note that we implement a semi-supervised ReMIC for comparison.
Implementation details. We use PatchGAN (Isola et al., 2017) as the backbone for both the global and local discriminators, and U-Net as the backbone for the generator and the segmentation network. We train our model for 100 epochs, keeping the learning rate at 10^{-4} for the generator and both discriminators for the first 50 epochs and then linearly decaying it to 10^{-6} by the final epoch. The Adam (Kingma & Ba, 2014) optimizer is used with momentum parameters β1 = 0.9 and β2 = 0.999. We also apply data augmentation and normalization to the training samples. Details are given in the supplementary material. All experiments are conducted in PyTorch on an NVIDIA RTX 3090.
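One way to realize the stated learning-rate schedule (constant 1e-4 for 50 epochs, then a linear decay to 1e-6 at epoch 100) with Adam is sketched below; the exact decay formula is our assumption.

```python
import torch

def make_optimizer(params, total_epochs=100, decay_start=50,
                   lr=1e-4, final_lr=1e-6):
    opt = torch.optim.Adam(params, lr=lr, betas=(0.9, 0.999))

    def lr_lambda(epoch):
        if epoch < decay_start:
            return 1.0
        frac = (epoch - decay_start) / (total_epochs - decay_start)
        return 1.0 - frac * (1.0 - final_lr / lr)     # multiplier goes 1.0 -> 0.01

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched
```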
4.2 RESULTS
In this section, we demonstrate our translation results compared with other baseline methods. Then we verify the effectiveness of the generated images via the downstream segmentation tasks.
4.2.1 TRANSLATION RESULTS
Qualitative evaluation. Figure 2 shows the qualitative results of our model and the other baselines. StarGAN and DRIT++ generate images with checkerboard artifacts in some cases, while ReMIC and TarGAN may lead to blur or deformation in the tumor areas. Our method generates images with clearer textures and more structural information.
Quantitative evaluation. We use SSIM, PSNR and LPIPS to measure the similarity between the generated images and ground truth.
As shown in Table 1, our method achieves higher SSIM and PSNR than the other baselines. Each value is averaged over all mappings from any source modality to an arbitrary target modality. We also crop every generated image to the smallest rectangle framing the tumor areas and compute the SSIM and PSNR of the cropped images against their corresponding ground truth, denoted as local SSIM and local PSNR. These two metrics reflect the translation quality in the tumor areas. S3TAGAN achieves higher scores on both, which means that our method preserves more information in the tumor areas. LPIPS is a perceptual similarity metric; a lower value indicates higher similarity, showing that our model achieves better translation quality.
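The local variants of SSIM and PSNR described above can be obtained by cropping both images to the tumor bounding box before computing the metrics, for example as below. This is our reading of the procedure; it assumes a non-empty mask whose bounding box is at least as large as skimage's default 7x7 SSIM window.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def local_metrics(fake, real, tumor_mask):
    """Crop to the smallest rectangle framing the tumor mask, then score."""
    ys, xs = np.nonzero(tumor_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    f, r = fake[y0:y1, x0:x1], real[y0:y1, x0:x1]
    return (structural_similarity(r, f, data_range=1.0),
            peak_signal_noise_ratio(r, f, data_range=1.0))
```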
4.2.2 DOWNSTREAM SEGMENTATION RESULTS
Given an image from an arbitrary modality, we translate it into the other three modalities with our model and with each baseline. We then feed the resulting full multi-modality image sets into nnU-Net to compare their segmentation performance, measured by the DICE scores of the whole tumor (WT), tumor core (TC) and enhancing tumor (ET). As shown in Table 2, our method outperforms all the baselines, which again indicates that we generate more accurate information in the tumor areas.
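For reference, the DICE score used here is the standard overlap measure between a predicted and a ground-truth binary mask, computed per tumor class:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """DICE coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```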
4.3 EFFECTIVENESS OF LOCAL BRANCH
In this section, we conduct an ablation study to validate the effectiveness of the local branch, which is guided by the segmentation network in our proposed S3TAGAN. We replace the predicted pseudo labels with three alternatives: (a) ground truth labels, (b) all-zero maps, and (c) random maps in which each value is either zero or one. As shown in Table 3, S3TAGAN with ground truth labels is the theoretical upper bound, representing the best possible guidance for local tumor translation; the performance with segmentation guidance is close to this bound, which demonstrates the effectiveness of our model. S3TAGAN with all-zero maps is the theoretical lower bound, corresponding to no segmentation-guided learning, while S3TAGAN with random maps corresponds to guidance on random areas rather than the tumor areas and improves translation only slightly. These results show the robustness of our model.
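The three ablation settings amount to swapping the pseudo-label tensor fed to the local branch, for example (variable and mode names are illustrative):

```python
import torch

def ablation_labels(pseudo_labels, mode):
    """Replace predicted pseudo labels for the ablation in Table 3."""
    if mode == "zeros":        # lower bound: no segmentation guidance
        return torch.zeros_like(pseudo_labels)
    if mode == "random":       # guidance on random areas instead of tumors
        return torch.randint(0, 2, pseudo_labels.shape,
                             device=pseudo_labels.device).float()
    return pseudo_labels       # default: predictions from the segmentation network
```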
4.4 RATIO OF PAIRED AND LABELED DATA
To test how the ratio of paired and labeled data used for semi-supervised learning affects our model, we adjust the ratio to 10 percent and 100 percent. As shown in Table 4, more paired and labeled data for supervision improves translation quality. Note that the performance with 20 percent supervision, our default setting, is close to that with 100 percent supervision.
5 CONCLUSION
In this paper, we propose a semi-supervised segmentation-guided method called S3TAGAN to translate brain tumor images, which learns a mapping between any two modalities. We use unpaired images for training together with only a few paired and labeled ones, which matches the practical situation. With the guidance of the segmentation network, the local branch focuses on the brain tumor areas and alleviates tumor deformation, benefiting the quality of both the generated whole images and the tumor images. Experiments demonstrate that our model achieves better translation quality with strong robustness, and the results of the downstream segmentation task further verify its effectiveness. | 1. What is the focus and contribution of the paper on MRI image translation?
2. What are the strengths and weaknesses of the proposed S3TAGAN model, particularly regarding its architecture and performance metrics?
3. Do you have any concerns or suggestions regarding the model's utilization of tumor segmentation information?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific areas where the paper could benefit from additional explanation or simplification? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes S3TAGAN, which translates MRI images with tumors to another modality. It utilizes tumor segmentation for better results. Other ideas of the model follow StarGAN, which adds classification tasks to the discriminator for specification to a target modality.
Strengths And Weaknesses
The output comparisons suggest that it is better than the rest of the models compared. Evaluation metrics are also very promising.
The Dice score, however, does not seem to be a good metric. Though nnU-Net does give better results, this cannot serve as evidence of the model's performance. Especially since segmentation information is actively used in this model, the resulting metric can be positively biased towards the proposed model. It would make more sense to show images of the tumors segmented within the model.
The model architecture figure is hard to understand without reading the paper. The figure could make the generator part more abstract and add legends to explain some of the blocks. That way there would be enough space to show where the semi-supervised part is, along with detailed information about the discriminator's job.
While it could be assumed, they should still indicate where the test set is from.
It would be beneficial to see how each model performs on different datasets. The code for the model is not available, which makes it hard to reproduce since not all parameters of the model are explained. Minor changes could be made so the writing flows better.
Some terms are overcomplicated. For example, the model uses segmentation results for better translation of the tumor region. They repeatedly refer to it as taking the dot product, while it can be just mentioned once that the image is masked with the segmentation output. The paper even further writes a convolution equation, which is an unnecessary expansion of math terms.
The compared models are explained again in the baseline section. The information should be in related works.
Clarity, Quality, Novelty And Reproducibility
The paper’s method is not clear at first due to the flow of the writing and unnecessary equations. Nevertheless, it is understandable in the end.
Could use some minor fixes to upgrade the quality.
The paper has adequate novelty.
It is not missing any crucial information, but deep learning models are not easy to reproduce without the code. Without code it is not only hard to replicate the results; it also decreases the reliability of the proposed model.
Code is not provided. Not trivial to reproduce. |
ICLR | Title
Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network for Multi-Modality Brain Tumor Translation
Abstract
Multi-modality brain tumor images are widely used for clinical diagnosis since they can provide complementary information. Yet, due to considerations such as time, cost, and artifacts, it is difficult to get fully paired multi-modality images. Therefore, most brain tumor images are modality-missing in practice and only a few are labeled, since labeling requires a large amount of expert knowledge. To tackle this problem, multi-modality brain tumor image translation has been extensively studied. However, existing works often lead to tumor deformation or distortion because they only focus on the whole image. In this paper, we propose a semi-supervised segmentation-guided tumor-aware generative adversarial network called S3TAGAN, which utilizes unpaired brain tumor images with a few paired and labeled ones to learn an end-to-end mapping from a source modality to a target modality. Specifically, we train a semi-supervised segmentation network to obtain pseudo labels, which help the model focus on the local brain tumor areas. The model can synthesize more realistic images by using the pseudo tumor labels as additional information to guide the global translation. Experiments show that our model achieves competitive results in both quantitative and qualitative evaluations. We also verify the effectiveness of the generated images via downstream segmentation tasks.
1 INTRODUCTION
Multi-modality medical images are widely used in various tasks such as clinical detection. There are different kinds of imaging technologies in practice. For example, magnetic resonance imaging (MRI) is a common and noninvasive imaging technique. With the help of an additional magnetic field, MRI can determine the nucleus types of a certain part of the human body and then generate structural images with high resolution.
MRI is further divided into several modalities, such as T1-weighted (T1), T1-with-contrast-enhanced (T1ce), T2-weighted (T2), and T2-fluid-attenuated inversion recovery (Flair). Each imaging modality shows complementary lesion information from a different angle. In Flair images, the cerebrospinal fluid gives hypointense signals while lesions containing water appear as hyperintense signals. In T1 images, the cerebrospinal fluid is hypointense and tends to be black, the gray matter is gray and the white matter is bright; T1 images therefore present the anatomical structure, which is convenient for diagnosis. T1ce shows the structures and edges of tumors, which is convenient for observing the morphology of different tumor types. T2 displays lesions better because the edema sites appear brighter. Obviously, fully paired multi-modality images help doctors make diagnoses more accurately.
The benefits of using multi-modality images to assist medical analysis have been widely recognized. However, due to the consideration of time, cost, artifacts, and other practical factors, physicians often get some of the modalities for examination in practice. In other words, most of the images are modality-missing, which has an adverse impact on the accuracy of physicians’ diagnoses. If we can generate the corresponding missing modalities of given images by image translation, physicians can get more comprehensive information for diagnoses.
Many existing methods for multi-modality image translation based on deep learning have achieved good results on natural images. However, when applied to medical images, especially brain tumor images, the results are often unsatisfactory. Compared with two-dimensional natural images, three-dimensional medical images carry more structural information. Moreover, due to patient privacy, a large number of medical images collected by different institutions are kept private, which increases the difficulty of model training. In addition, the hierarchical structures of brain tumors are complex and irregular, which leads to blur or deformation in image translation. Therefore, the translation of brain tumor images has always been a challenge in the field of medical image translation.
To solve the problem of local distortion or blur in brain tumor image translation, we propose to use pseudo labels generated by a segmentation network to guide the translation. The model contains a global branch and a local branch. For a given source image of arbitrary modality, we first feed it into the segmentation network to get pseudo labels for three kinds of tumor regions: whole tumor, tumor core and enhancing tumor. The source image is then fed into the global branch, and its element-wise product with the three pseudo labels is fed into the local branch. In this way, the translation network can focus on the different parts of the tumors. Since the training data are mostly unpaired and only a few of them are labeled and paired, we train the segmentation network with the semi-supervised method proposed in CPS (Chen et al., 2021b). For paired images with ground truth, we use an L1 loss as a further constraint. The segmentation network and the translation network are trained at the same time and promote each other. Furthermore, to make our model applicable to images of arbitrary modality, similar to StarGAN (Choi et al., 2018), the discriminator not only distinguishes whether images are real or fake but also judges the modality to which they belong. In this way, we do not need to train a segmentation network for each modality, but only a unified model that handles all cases.
In this way, we achieve end-to-end translation: given brain tumor images of arbitrary modality together with the source and target modality vectors, the model directly outputs the final target images without any manual intervention. We name our model the Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network (S3TAGAN).
In summary, the main contributions of this paper are as follows:
• We propose a Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network, named S3TAGAN , which is guided by different parts of tumors and improves the translation effectiveness using unpaired brain tumor images with few paired and labeled ones. We also propose a local consistency loss to preserve the anatomical structure of the tumors.
• We show qualitative and quantitative results in the multi-modality translation task on the BRATS 2020 dataset. Our model achieves better results compared with the state-of-the-art methods. We also verify the quality of the generated images through downstream segmentation tasks.
2 RELATED WORKS
Cross-modality image translation has been intensively studied in recent years. For instance, Pix2pix (Isola et al., 2017) generates images from a given source modality to a given target modality based on cGAN (Mirza & Osindero, 2014). However, it requires paired data for training, which is hard to obtain. Therefore, achieving unsupervised image translation from unpaired data has attracted the interest of many researchers. CycleGAN (Zhu et al., 2017) and DiscoGAN (Kim et al., 2017) propose a cycle consistency loss that attempts to preserve the crucial information of the images; by constraining the reconstructed image to match the source image, the model can translate images between two given modalities with unpaired data. UNIT (Liu et al., 2017) argues that the essence of image translation is estimating the joint distribution from the marginal distributions of images in the two known domains. Since infinitely many joint distributions may correspond to two marginal distributions, additional assumptions must be added. UNIT assumes that the two modalities share the same latent space and combines a VAE and a GAN to form a more robust generative model. The encoder maps images of different domains to the same distribution to obtain the latent code, and then the decoder maps the
latent code back to the image domain. However, the images generated by the above models lack style diversity: for a given input, the generated target-modality image is unique. To address this, MUNIT (Huang et al., 2018) and DRIT (Lee et al., 2018) disentangle the latent code into a content code, which is shared across modalities, and a style code, which is unique to each modality and constrained to a normal distribution. In this way, the style code can be obtained either from a style encoder or by sampling, so that diverse images of the target modality can be generated.
However, the above models can only translate images between two modalities; to translate between n modalities, the model has to be trained n(n − 1)/2 times. To translate multi-modality images with a unified model, StarGAN (Choi et al., 2018) proposes a single generator that learns the mapping between any two given modalities. The source images and mask vectors are fed into the generator, which outputs the generated target images. The discriminator must not only distinguish whether the images are real or fake, but also classify the domain they belong to. Since every mask vector corresponds to a fixed condition, the generated images are deterministic and lack style diversity. StarGAN v2 (Choi et al., 2020) replaces the mask vectors with a variable style code, so the generated target images of each modality can have different styles. DRIT++ (Lee et al., 2020) also adds a domain code for translation so that images of any target modality can be generated by a unified generator.
Although the above models can achieve multi-modality translation, they cannot focus on local targets but only on the whole image. Zhang et al. (2018b) observe that, in unsupervised learning, the cycle consistency loss easily leads to local deformation of the image if no other constraints are imposed. InstaGAN (Mo et al., 2018) adds segmentation labels of local instances as additional input so that the network pays more attention to the shapes of the local instances during training, which reduces deformation. DUNIT (Bhattacharjee et al., 2020) and INIT (Shen et al., 2019) use object detection and segmentation, respectively, to assist translation. EaGANs (Yu et al., 2019) integrates edge maps that contain critical textural information to boost synthesis quality. TC-MGAN (Xin et al., 2020) introduces a multi-modality tumor consistency loss to preserve the critical tumor information in the generated target images, but it can only translate images from the T2 modality to the other MR modalities. TarGAN (Chen et al., 2021a) focuses on the target area by using a segmentation network but yields unsatisfactory results on brain tumor datasets. While these models translate images more effectively, they also require more supervised information.
Some of the above methods can only translate images between two given modalities, and some require paired and labeled data for training, which does not match the practical scenario in which most data are unpaired. We propose S3TAGAN to learn an end-to-end mapping from an arbitrary source modality to a given target modality; it focuses on the local tumor areas and translates better by using unpaired images together with only a few paired and labeled ones.
3 METHOD
In this section, we first describe our framework and the pipeline of our approach, then we define the training objective functions.
3.1 FRAMEWORK AND PIPELINE
Given an image I_s from the source modality, we first feed it into the segmentation network to obtain three pseudo labels: whole tumor, tumor core and enhancing tumor. We then multiply the source image with each of the three pseudo labels to get the source tumor images T_s, which contain only the respective tumor areas. Given the source modality vector s and an arbitrary target modality vector t, we aim to train a generator that translates the source whole image I_s and the source tumor images T_s into the target whole image I_t and the target tumor images T_t. The mapping is denoted as (I_t, T_t) = G(I_s, T_s, s, t). Note that the segmentation network is required only during training; at inference only I_s, s and t are used, which is denoted as I_t = G(I_s, s, t). The framework of the model is shown in Figure 1.
Generator. The generator consists of two encoder-decoder pairs, one for the global branch and the other for the local branch. The global decoder receives the features encoded by the global encoder and generates the target whole image I_t, while the local decoder receives features from both the global encoder and the local encoder to generate the target tumor images T_t. The generator then translates the target whole image I_t and its corresponding tumor images T'_t back to the reconstructed whole image I'_s and tumor images T'_s, completing a cycle training process.
Discriminator. We use two discriminators to judge both whether images are real and which modality they belong to. The discriminator D_g handles the whole images in the global branch, and the discriminator D_l handles the tumor images in the local branch.
Segmentation network. Given an image and its corresponding modality vector, the segmentation network generates pseudo labels for the three kinds of tumors, i.e., binary masks separating tumor foreground from background. We then obtain the tumor images as the element-wise product of the whole image and the three pseudo labels. Taking the source image I_s and its modality vector s as an example, the mapping is denoted as T_s = I_s * S(I_s, s). Because only a small proportion of the data is labeled, we train the segmentation network with the semi-supervised method proposed in CPS (Chen et al., 2021b). The generated target image I_t is also fed into the segmentation network in the same way as I_s, which is denoted as T'_t = I_t * S(I_t, t).
3.2 TRAINING OBJECTIVE FUNCTIONS
Adversarial loss. The adversarial loss pushes the generator to produce more realistic images that confuse the discriminator. The traditional adversarial losses for the global branch and the local branch are defined as follows:
$\mathcal{L}^{adv}_{g} = \mathbb{E}_{I_s}[\log D^{src}_g(I_s)] + \mathbb{E}_{I_t}[\log(1 - D^{src}_g(I_t))]$,   (1)
$\mathcal{L}^{adv}_{l} = \mathbb{E}_{T_s}[\log D^{src}_l(T_s)] + \mathbb{E}_{T_t}[\log(1 - D^{src}_l(T_t))]$.   (2)
Taking the global branch as an example, D^{src}_g(I_s) is the probability that the discriminator considers the real image I_s to be real, and D^{src}_g(I_t) is the probability that it considers the generated image to be real. To correctly distinguish real from generated images, the discriminator aims to minimize D^{src}_g(I_t) and D^{src}_l(T_t), while the generator aims to maximize these terms to confuse the discriminator.
Since the traditional adversarial loss may lead to unstable adversarial learning, WGAN (Arjovsky et al., 2017) proposes an alternative adversarial loss that stabilizes GAN training, largely mitigates mode collapse, and preserves the diversity of generated samples. WGAN-GP (Gulrajani et al., 2017) replaces WGAN's weight clipping with a gradient penalty, which further stabilizes training and improves the quality of generated images. The final adversarial losses are as follows:
$\mathcal{L}^{adv}_{D_g} = \lambda_{gp}\,\mathbb{E}_{\alpha,s,I_s}\big[(\|\nabla D^{src}_g(\alpha I_s + (1-\alpha) I_t)\|_2 - 1)^2\big] - \mathbb{E}_{I_s}[D^{src}_g(I_s)]$,   (3)
$\mathcal{L}^{adv}_{D_l} = \lambda_{gp}\,\mathbb{E}_{\alpha,s,T_s}\big[(\|\nabla D^{src}_l(\alpha T_s + (1-\alpha) T_t)\|_2 - 1)^2\big] - \mathbb{E}_{T_s}[D^{src}_l(T_s)]$,   (4)
$\mathcal{L}^{adv}_{G_g} = \mathbb{E}_{I_s,s}[D^{src}_g(I_t)]$,   (5)
$\mathcal{L}^{adv}_{G_l} = \mathbb{E}_{T_s,s}[D^{src}_l(T_t)]$,   (6)
where λ_gp is set to 1 and α is sampled uniformly from [0, 1] in this paper.
Modality classification loss. Given an image and its target modality vector, we want the generator to produce images that are as close to the target modality as possible. As in StarGAN, the discriminator also judges which modality an image belongs to; the difference is that we add an extra discriminator for the local branch. For real images, we define the modality classification loss as follows:
$\mathcal{L}^{r\,cls}_{D_g} = \mathbb{E}_{I_s}[-\log D^{cls}_g(s \mid I_s)]$,   (7)
$\mathcal{L}^{r\,cls}_{D_l} = \mathbb{E}_{T_s}[-\log D^{cls}_l(s \mid T_s)]$,   (8)
where D^{cls}_g(s|I_s) denotes a probability distribution over the modality labels for the whole images and D^{cls}_l(s|T_s) denotes the corresponding distribution for the tumor images. Similarly, we define the modality classification loss for fake images as follows:
$\mathcal{L}^{f\,cls}_{D_g} = \mathbb{E}_{I_s,s}[-\log D^{cls}_g(t \mid I_t)]$,   (9)
$\mathcal{L}^{f\,cls}_{D_l} = \mathbb{E}_{T_s,s}[-\log D^{cls}_l(t \mid T_t)]$.   (10)
Local consistency loss. T_t denotes the generated tumor images and T'_t denotes the tumor areas of the generated whole image I_t. Since we want the segmentation network to better guide the translation of the global branch, we constrain these two to be similar, which alleviates distortion in brain tumor image translation. Similarly, the reconstructed tumor images T'_s should match the source tumor images T_s. We therefore propose a local consistency loss as an extra constraint:
$\mathcal{L}_{local} = \mathbb{E}[\|T_t - T'_t\|_1] + \mathbb{E}[\|T_s - T'_s\|_1]$.   (11)
Reconstruction loss. The model can translate the source image I_s into an image I_t of any modality. However, this does not guarantee that I_t only changes the style information while retaining all the content information of I_s. To address this, we feed I_t back into the translation network for cycle translation and obtain the reconstructed image I'_s. If I'_s is consistent with I_s, no content information is lost during translation. The reconstruction loss is defined as follows:
$\mathcal{L}_{rec} = \mathbb{E}[\|I_s - I'_s\|_1]$.   (12)
Identity mapping loss. Given an image of arbitrary modality, if the target modality happens to be its source modality, we denote the mapping as (I_idt, T_idt) = G(I_s, T_s, s, s). We want the generated images to be as consistent as possible with the source images. We use an identity mapping loss to keep the generated images from losing the original information, defined as follows:
$\mathcal{L}_{idt} = \mathbb{E}[\|I_s - I_{idt}\|_1] + \mathbb{E}[\|T_s - T_{idt}\|_1]$.   (13)
Semi-supervised loss. For images that are paired and labeled, we further use the ground truth to constrain the generated tumor images, which alleviates local image deformation. We define the semi-supervised loss as follows:
$\mathcal{L}_{ss} = \mathbb{E}[\|T_t - GT\|_1]$,   (14)
where GT represents the ground truth of the target tumor images.
Total loss. Combining all the losses above, we define the final objective functions as follows:
$\mathcal{L}_D = \lambda^{adv}_{D_g}\mathcal{L}^{adv}_{D_g} + \lambda^{adv}_{D_l}\mathcal{L}^{adv}_{D_l} + \lambda^{cls}_{D_g}\mathcal{L}^{r\,cls}_{D_g} + \lambda^{cls}_{D_l}\mathcal{L}^{r\,cls}_{D_l}$,   (15)
$\mathcal{L}_G = \lambda^{adv}_{G_g}\mathcal{L}^{adv}_{G_g} + \lambda^{adv}_{G_l}\mathcal{L}^{adv}_{G_l} + \lambda^{cls}_{D_g}\mathcal{L}^{f\,cls}_{D_g} + \lambda^{cls}_{D_l}\mathcal{L}^{f\,cls}_{D_l} + \lambda_{rec}\mathcal{L}_{rec} + \lambda_{local}\mathcal{L}_{local} + \lambda_{idt}\mathcal{L}_{idt} + \lambda_{ss}\mathcal{L}_{ss}$,   (16)
where $\lambda^{adv}_{D_g}$, $\lambda^{adv}_{D_l}$, $\lambda^{cls}_{D_g}$, $\lambda^{cls}_{D_l}$, $\lambda_{rec}$, $\lambda_{local}$, $\lambda_{idt}$ and $\lambda_{ss}$ are hyper-parameters that balance the losses. We set $\lambda^{adv}_{D_g}$, $\lambda^{adv}_{D_l}$, $\lambda^{cls}_{D_g}$ and $\lambda^{cls}_{D_l}$ to 1.0 and $\lambda_{rec}$, $\lambda_{local}$, $\lambda_{idt}$ and $\lambda_{ss}$ to 10.0 in this paper.
4 EXPERIMENTS
4.1 SETTINGS
Datasets. We conduct all our experiments on BRATS2020(Menze et al., 2014)(Bakas et al., 2017)(Bakas et al., 2018) dataset. BRATS2020 provides brain tumor images of four modalities: T1-weighted (T1), T1-with-contrast-enhanced (T1ce), T2-weighted (T2) and T2-fluid-attenuated inversion recovery (Flair). Three kinds of tumors which are named Whole Tumor(WT), Tumor Core(TC) and Enhancing Tumor(ET), are labeled for segmentation. We use 150 patients’ images as the training samples and 20 percent of them are treated as labeled and paired ones. We resize the images to 128*128. Details are shown in the supplementary material.
Evaluation metrics. For the translation task, we use structural similarity index measure(SSIM), peak-signal-noise ratio(PSNR) and learned perceptual image patch similarity(LPIPS)(Zhang et al., 2018a) to measure the similarity between the generated images and ground truth. For the downstream segmentation task, we use DICE to measure the integrity of the predicted pseudo labels which are generated by nnU-Net(Isensee et al., 2021). That’s because nnU-Net is an acknowledged method that achieves state-of-the-art performances for medical image segmentation.
Baselines. We compare our translation results with StarGAN(Choi et al., 2018), DRIT++(Lee et al., 2018), Targan(Chen et al., 2021a) and ReMIC(Shen et al., 2020). StarGAN proposes to use a unified model to translate images to arbitrary modalities. DRIT++ disentangles an image to the content code and the attribute code during the training process and generates images by using the content code extracted from the input images and the attribute code sampled from the standard normal distribution. TarGAN alleviates the problem of image deformation on the target area by utilizing an extra shape controller. ReMIC generates images by using multi-modality paired images. Note that we implement a semi-supervised ReMIC for comparison.
Implementation details. We implement PatchGAN(Isola et al., 2017) as the backbone for both the global discriminator and local discriminator. And we use U-net as the backbone for the generator and the segmentation network. We train our model for 100 epochs with a learning rate of 10−4 for the generator and both the discriminators for the first 50 epochs and linearly decay the learning rate to 10−6 at the final epoch. Adam(Kingma & Ba, 2014) optimizer is used with momentum parameters β1 = 0.9 and β2 = 0.999. We also adopt data augmentation and normalization for the training samples. Details are shown in the supplementary material. All the experiments are conducted on PyTorch with NVIDIA RTX 3090.
4.2 RESULTS
In this section, we demonstrate our translation results compared with other baseline methods. Then we verify the effectiveness of the generated images via the downstream segmentation tasks.
4.2.1 TRANSLATION RESULTS
Qualitative evaluation. Figure 2 shows the qualitative results of our model and the other baselines. StarGAN and DRIT++ generate images with checkerboard artifacts in some cases, while ReMIC and TarGAN may lead to blur or deformation in the tumor areas. Our method generates images with clearer textures and more structural information.
Quantitative evaluation. We use SSIM, PSNR and LPIPS to measure the similarity between the generated images and ground truth.
As shown in Table 1, our method achieves higher SSIM and PSNR than the other baselines. Each value is averaged over all mappings from any source modality to an arbitrary target modality. We also crop every generated image to the smallest rectangle framing the tumor areas and compute the SSIM and PSNR of the cropped images against their corresponding ground truth, denoted as local SSIM and local PSNR. These two metrics reflect the translation quality in the tumor areas. S3TAGAN achieves higher scores on both, which means that our method preserves more information in the tumor areas. LPIPS is a perceptual similarity metric; a lower value indicates higher similarity, showing that our model achieves better translation quality.
4.2.2 DOWNSTREAM SEGMENTATION RESULTS
Given an image from an arbitrary modality, we translate it to the other three modalities by our model and all the baselines respectively. Then we put the fully multi-modality images generated by the above methods into nnU-Net to compare their segmentation effectiveness. We use DICE of whole tumor(WT), tumor core(TC) and enhancing tumor(ET) to measure the results. As shown in Table 2, our method achieves better performance than all the baselines, which also represents that we generate more accurate information on the tumor areas.
4.3 EFFECTIVENESS OF LOCAL BRANCH
In this section, we conduct an ablation study to validate the effectiveness of the local branch which is guided by the segmentation network in our proposed S3TAGAN . We replace the predicted pseudo labels with the following three situations: (a)ground truth labels. (b) full-zeros maps. (c) random maps that each value is either zero or one. As shown in Table 3, the performance of S3TAGAN with labels is the upper bound in theory, which represents the best guidance for local tumor translation. Note that the performance of segmentation guidance is close to this situation, which demonstrates the effectiveness of our model. While S3TAGAN with zeros maps is the lower bound in theory, which represents no segmentation-guided learning. S3TAGAN with random maps represents learning with guidance for random areas but not the tumor areas. Translation effectiveness is improved slightly in this situation. The results show the robustness of our model.
4.4 RATIO OF PAIRED AND LABELED DATA
In order to test the effect of the ratio of paired and labeled data for semi-supervised learning on our model, we adjust the ratio to 10 percent and 100 percent. As shown in Table 4, more paired and labeled data for supervision can improve translation effectiveness. Note that the effectiveness of 20 percent supervision which is our default setting is close to 100 percent supervision.
5 CONCLUSION
In this paper, we propose a semi-supervised segmentation-guided method called S3TAGAN to translate brain tumor images, which learns a mapping between any two modalities. We use unpaired images for training together with only a few paired and labeled ones, which matches the practical situation. With the guidance of the segmentation network, the local branch focuses on the brain tumor areas and alleviates tumor deformation, benefiting the quality of both the generated whole images and the tumor images. Experiments demonstrate that our model achieves better translation quality with strong robustness, and the results of the downstream segmentation task further verify its effectiveness. | 1. What is the focus of the paper, and what contribution does it make in the field of medical image-to-image translation?
2. What are the strengths of the proposed approach, particularly in utilizing semi-supervision and tumor segmentation?
3. What are the weaknesses of the paper, especially regarding its experiments and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's methodology, results, or conclusions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors present S3TAGAN, a method for unpaired medical image-to-image translation. The method uses semi-supervision and tumor segmentation as part of a multi-term loss function that includes, among others, an adversarial cycle consistency loss. The authors conduct experiments on the BRATS2020 dataset. They evaluated their method using image quality metrics such as SSIM, PSNR, and LPIPS, and also present some images for qualitative evaluation. They show that S3TAGAN achieves better image quality metrics than several baseline methods, as well as better DICE scores on brain tumor segmentation. Two ablation studies were performed, including one on the ratio of paired to unpaired data.
Strengths And Weaknesses
Strengths: The paper does address a potentially important task of generating MR sequences that are not present on in all brain MRI protocols across medical centers. At our medical centers, we do see a diversity of MR sequences for routine brain MRIs from outside hospitals.
Weakness:
The paper is poorly written, with many grammatical and syntactic errors (page 2: 'Ground Truth' should never be capitalized). The authors are encouraged to have a dedicated editor review all papers prior to submission. Errors of this kind distract readers from the content of the paper.
The authors are a bit confused about the dataset. All of the images are from a single modality, namely magnetic resonance imaging. The mode of image acquisition is the same for all the images in BRATS. The authors are trying to translate between MRI sequences, which are different ways to acquire MR images. Multi-modal translation would be translating MRI images to CT images, or vice versa. Any reference to 'multi-modal' image translation needs to be removed from the paper. This is confusing to readers who work in clinical medicine.
The qualitative results from Figure 2 show significant semantic errors. T1 > T1CE: no contrast enhancement is actually present in any of the predicted images, including S3TAGAN's. T1CE > FLAIR: the area of predicted FLAIR signal is incorrect for most of the models, including S3TAGAN; S3TAGAN simply predicted that the contrast-enhancing regions are also FLAIR hyperintense, which does not happen in brain tumors. FLAIR > T2: CSF signal is not hyperintense on the predicted T2 images, especially along the cerebral convexities.
While I understand that the method can work for unpaired data, the authors must compare the predictions to paired images to tell how well the model is performing on the most relevant evaluation task. BRATS contains paired MR sequences that can be used for qualitative comparison.
While quantitative results such as PSNR or SSIM are relevant, the paper is about image translation, not image restoration or denoising. The most relevant quantitative test is whether the translated images are indistinguishable from their ground-truth counterparts. This most important analysis has not been performed in the presented results.
Segmentation is also not the most relevant task.
Clarity, Quality, Novelty And Reproducibility
The quality, clarity, and originality of the work are at or below average. |
ICLR | Title
Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network for Multi-Modality Brain Tumor Translation
Abstract
Multi-modality brain tumor images are widely used for clinical diagnosis since they can provide complementary information. Yet, due to considerations such as time, cost, and artifacts, it is difficult to get fully paired multi-modality images. Therefore, most brain tumor images are modality-missing in practice and only a few are labeled, since labeling requires a large amount of expert knowledge. To tackle this problem, multi-modality brain tumor image translation has been extensively studied. However, existing works often lead to tumor deformation or distortion because they only focus on the whole image. In this paper, we propose a semi-supervised segmentation-guided tumor-aware generative adversarial network called S3TAGAN, which utilizes unpaired brain tumor images with a few paired and labeled ones to learn an end-to-end mapping from a source modality to a target modality. Specifically, we train a semi-supervised segmentation network to obtain pseudo labels, which help the model focus on the local brain tumor areas. The model can synthesize more realistic images by using the pseudo tumor labels as additional information to guide the global translation. Experiments show that our model achieves competitive results in both quantitative and qualitative evaluations. We also verify the effectiveness of the generated images via downstream segmentation tasks.
1 INTRODUCTION
Multi-modality medical images are widely used in various tasks such as clinical detection. There are different kinds of imaging technologies in practice. For example, magnetic resonance imaging (MRI) is a common and noninvasive imaging technique. With the help of an additional magnetic field, MRI can determine the nucleus types of a certain part of the human body and then generate structural images with high resolution.
MRI is further divided into several modalities, such as T1-weighted (T1), T1-with-contrastenhanced (T1ce), T2-weighted (T2), and T2-fluid-attenuated inversion recovery (Flair). Each modality of imaging can show complement lesion information from different angles. In Flair images, the cerebrospinal fluid shows hypointense signals while the lesions containing water appear as hyperintense signals. In T1 images, the cerebrospinal fluid is hypointense which tends to be black, while the gray matter is gray and the white matter is bright. Therefore, T1 images can present the anatomical structure, which is convenient for diagnoses. T1ce can show the structures and the edges of the tumors, which is convenient to observe the morphology of different types of tumors. T2 can better display the lesions because the brightness in the edema site is higher. Obviously, fully paired multi-modality images help doctors to make diagnoses more accurately.
The benefits of using multi-modality images to assist medical analysis have been widely recognized. However, due to the consideration of time, cost, artifacts, and other practical factors, physicians often get some of the modalities for examination in practice. In other words, most of the images are modality-missing, which has an adverse impact on the accuracy of physicians’ diagnoses. If we can generate the corresponding missing modalities of given images by image translation, physicians can get more comprehensive information for diagnoses.
Many existing methods for multi-modality image translation based on deep learning have achieved good results on natural images. However, when applied to medical images, especially brain tumor images, the results are often unsatisfactory. Compared with two-dimensional natural images, three-dimensional medical images carry more structural information. Moreover, due to patient privacy, a large number of medical images collected by different institutions are kept private, which increases the difficulty of model training. In addition, the hierarchical structures of brain tumors are complex and irregular, which leads to blur or deformation in image translation. Therefore, the translation of brain tumor images has always been a challenge in the field of medical image translation.
To solve the problem of local distortion or blur in brain tumor image translation, we propose to use pseudo labels generated by a segmentation network to guide the translation. The model contains a global branch and a local branch. For a given source image of arbitrary modality, we first feed it into the segmentation network to get pseudo labels for three kinds of tumor regions: whole tumor, tumor core and enhancing tumor. The source image is then fed into the global branch, and its element-wise product with the three pseudo labels is fed into the local branch. In this way, the translation network can focus on the different parts of the tumors. Since the training data are mostly unpaired and only a few of them are labeled and paired, we train the segmentation network with the semi-supervised method proposed in CPS (Chen et al., 2021b). For paired images with ground truth, we use an L1 loss as a further constraint. The segmentation network and the translation network are trained at the same time and promote each other. Furthermore, to make our model applicable to images of arbitrary modality, similar to StarGAN (Choi et al., 2018), the discriminator not only distinguishes whether images are real or fake but also judges the modality to which they belong. In this way, we do not need to train a segmentation network for each modality, but only a unified model that handles all cases.
In this way, we achieve an end-to-end translation, which means that given brain tumor images of arbitrary modality with both the source and target modality vectors, the model can directly output the final target images without any other manual intervention. We name our model as Semi-Supervised Segmentation-guided Tumor-Aware Generative Adversarial Network(S3TAGAN ).
In summary, the main contributions of this paper are as follows:
• We propose a Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network, named S3TAGAN , which is guided by different parts of tumors and improves the translation effectiveness using unpaired brain tumor images with few paired and labeled ones. We also propose a local consistency loss to preserve the anatomical structure of the tumors.
• We show qualitative and quantitative results in the multi-modality translation task on the BRATS 2020 dataset. Our model achieves better results compared with the state-of-the-art methods. We also verify the quality of the generated images through downstream segmentation tasks.
2 RELATED WORKS
Cross-modality image translation has been intensively studied in recent years. For instance, Pix2pix (Isola et al., 2017) generates images from a given source modality to a given target modality based on cGAN (Mirza & Osindero, 2014). However, it requires paired data for training, which is hard to obtain. Therefore, achieving unsupervised image translation from unpaired data has attracted the interest of many researchers. CycleGAN (Zhu et al., 2017) and DiscoGAN (Kim et al., 2017) propose a cycle consistency loss that attempts to preserve the crucial information of the images; by constraining the reconstructed image to match the source image, the model can translate images between two given modalities with unpaired data. UNIT (Liu et al., 2017) argues that the essence of image translation is estimating the joint distribution from the marginal distributions of images in the two known domains. Since infinitely many joint distributions may correspond to two marginal distributions, additional assumptions must be added. UNIT assumes that the two modalities share the same latent space and combines a VAE and a GAN to form a more robust generative model. The encoder maps images of different domains to the same distribution to obtain the latent code, and then the decoder maps the
latent code back to the image domain. However, the images generated by the above models do not have style diversity. For a given image, the generated target modal image is unique. In order to solve this problem, MUNIT(Huang et al., 2018) and DRIT(Lee et al., 2018) disentangle the latent code into the content code which is shared by different modalities and the style code which is unique for different modalities and restricted to normal distribution. In this way, the style code can be obtained by style encoder or sampling, so that the image of the target modalities can be various.
However, the above models can only translate images between two modalities; to translate between n modalities, the model has to be trained n(n − 1)/2 times. To translate multi-modality images with a unified model, StarGAN (Choi et al., 2018) proposes a single generator that learns the mapping between any two given modalities. The source images and mask vectors are fed into the generator, which outputs the generated target images. The discriminator must not only distinguish whether the images are real or fake, but also classify the domain they belong to. Since every mask vector corresponds to a fixed condition, the generated images are deterministic and lack style diversity. StarGAN v2 (Choi et al., 2020) replaces the mask vectors with a variable style code, so the generated target images of each modality can have different styles. DRIT++ (Lee et al., 2020) also adds a domain code for translation so that images of any target modality can be generated by a unified generator.
Although the above models can achieve multi-modality translation, they attend only to the whole image and cannot focus on local targets. Zhang et al. (2018b) observe that, without additional constraints, the cycle consistency loss in unsupervised learning easily leads to local deformation of the image. InstaGAN (Mo et al., 2018) adds segmentation labels of local instances as additional input so that the network pays more attention to the shape of local instances during training, which reduces deformation. DUNIT (Bhattacharjee et al., 2020) and INIT (Shen et al., 2019) propose to use object detection and segmentation, respectively, to assist translation. EaGANs (Yu et al., 2019) integrates edge maps that contain critical textural information to boost synthesis quality. TC-MGAN (Xin et al., 2020) introduces a multi-modality tumor consistency loss to preserve critical tumor information in the generated target images, but it can only translate images from the T2 modality to the other MR modalities. TarGAN (Chen et al., 2021a) can focus on the target area by using a segmentation network but obtains unsatisfactory results on brain tumor datasets. While these models translate images more effectively, they also require more supervision.
Some of the above methods can only translate images between two given modalities, and some require paired and labeled data for training, which does not match practical scenarios in which most data are unpaired. We propose S3TAGAN to learn an end-to-end mapping from an arbitrary source modality to a given target modality; it focuses on the local tumor areas and translates better by using unpaired images with only a few paired and labeled ones.
3 METHOD
In this section, we first describe our framework and the pipeline of our approach, then we define the training objective functions.
3.1 FRAMEWORK AND PIPELINE
Given an image Is from the source modality, we first feed it into the segmentation network to obtain three pseudo labels: whole tumor, tumor core and enhancing tumor. We then multiply the source image with the three pseudo labels to obtain the source tumor images Ts, which contain only the corresponding tumor areas. Given the source modality vector s and an arbitrary target modality vector t, we aim to train a generator that translates the source whole image Is and the source tumor images Ts into the target whole image It and the target tumor images Tt. The mapping is denoted as (It, Tt) = G(Is, Ts, s, t). Note that the segmentation network is only required during training; only Is, s and t are used during inference, which is denoted as It = G(Is, s, t). The framework of the model is shown in Figure 1.
Generator. The generator comprises two encoder-decoder pairs, one for the global branch and the other for the local branch. The global decoder receives the feature encoded by the global encoder and generates the target whole image It, while the local decoder receives features from both the global encoder and the local encoder to generate the target tumor images Tt. The generator then translates the target whole image It and its corresponding tumor images T′t into the reconstructed whole image I′s and tumor images T′s, completing a cycle of training.
Discriminator. We use two discriminators to distinguish the reality of images and the modality they belong to. The discriminator Dg handles the whole images in the global branch, and the discriminator Dl handles the tumor images in the local branch.
Segmentation network. Given an image and its corresponding modality vector, the segmentation network produces three pseudo labels for the three kinds of tumors, which are binary masks indicating the foreground and background of each tumor. We then compute the tumor images as the element-wise product of the whole image and the three pseudo labels. Taking the source image Is and its modality vector s as an example, the mapping is denoted as Ts = Is ∗ S(Is, s). Since only a small proportion of the data is labeled, we train the segmentation network with the semi-supervised method proposed in CPS (Chen et al., 2021b). The generated target image It is fed into the segmentation network in the same way as Is, which is denoted as T′t = It ∗ S(It, t).
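For illustration, a minimal PyTorch-style sketch of this masking step is given below; the `seg_net` interface, the tensor shapes, and the 0.5 binarization threshold are assumptions for illustration rather than the exact implementation.

```python
import torch

def extract_tumor_images(image, seg_net, modality_vec, threshold=0.5):
    """Sketch of T_s = I_s * S(I_s, s): the segmentation network predicts one
    pseudo label per tumor region (WT, TC, ET), and each binary mask is
    multiplied element-wise with the whole image."""
    with torch.no_grad():
        # seg_net is assumed to output per-region probabilities of shape (B, 3, H, W)
        probs = seg_net(image, modality_vec)
    pseudo_labels = (probs > threshold).float()                       # binary masks for WT, TC, ET
    # broadcast each mask over the image channels; image is (B, C, H, W)
    tumor_images = image.unsqueeze(1) * pseudo_labels.unsqueeze(2)    # (B, 3, C, H, W)
    return pseudo_labels, tumor_images
```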
3.2 TRAINING OBJECTIVE FUNCTIONS
Adversarial loss. The adversarial loss encourages the generator to produce images realistic enough to confuse the discriminator. The traditional adversarial losses for the global branch and the local branch are defined as follows:
$\mathcal{L}_{adv_g} = \mathbb{E}_{I_s}[\log D^{src}_g(I_s)] + \mathbb{E}_{I_t}[\log(1 - D^{src}_g(I_t))]$, (1)
$\mathcal{L}_{adv_l} = \mathbb{E}_{T_s}[\log D^{src}_l(T_s)] + \mathbb{E}_{T_t}[\log(1 - D^{src}_l(T_t))]$. (2)
Taking the global branch as an example, $D^{src}_g(I_s)$ is the probability that the discriminator considers the real image Is to be real, and $D^{src}_g(I_t)$ is the probability that it considers the generated image to be real. To correctly distinguish real from generated images, the discriminator aims to minimize $D^{src}_g(I_t)$ and $D^{src}_l(T_t)$, while the generator aims to maximize these terms to confuse the discriminator.
Since the traditional adversarial loss may lead to unstable adversarial learning, WGAN (Arjovsky et al., 2017) proposes a new adversarial loss that mitigates the instability of GAN training, largely reduces mode collapse, and preserves the diversity of generated samples. WGAN-GP (Gulrajani et al., 2017) replaces the weight-clipping strategy of WGAN with a gradient penalty, which makes GAN training more stable and improves the quality of generated images. The final adversarial losses are as follows:
$\mathcal{L}^{adv}_{D_g} = \lambda_{gp}\,\mathbb{E}_{\alpha,s,I_s}\big[(\|\nabla D^{src}_g(\alpha I_s + (1-\alpha)I_t)\|_2 - 1)^2\big] - \mathbb{E}_{I_s}[D^{src}_g(I_s)]$, (3)
$\mathcal{L}^{adv}_{D_l} = \lambda_{gp}\,\mathbb{E}_{\alpha,s,T_s}\big[(\|\nabla D^{src}_l(\alpha T_s + (1-\alpha)T_t)\|_2 - 1)^2\big] - \mathbb{E}_{T_s}[D^{src}_l(T_s)]$, (4)
$\mathcal{L}^{adv}_{G_g} = \mathbb{E}_{I_s,s}[D^{src}_g(I_t)]$, (5)
$\mathcal{L}^{adv}_{G_l} = \mathbb{E}_{T_s,s}[D^{src}_l(T_t)]$, (6)
where $\lambda_{gp}$ is set to 1 and $\alpha$ is a random number sampled from the uniform distribution on [0, 1].
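A common way to implement the gradient-penalty term in Eqs. (3)-(4) is sketched below; the function and variable names are illustrative assumptions rather than the authors' code.

```python
import torch

def gradient_penalty(discriminator, real, fake, lambda_gp=1.0):
    """Sketch of the WGAN-GP term: penalize the gradient norm of D at random
    interpolations between real and generated images."""
    b = real.size(0)
    alpha = torch.rand(b, 1, 1, 1, device=real.device)          # alpha ~ U(0, 1)
    interp = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
    d_interp = discriminator(interp)
    grads = torch.autograd.grad(outputs=d_interp.sum(), inputs=interp,
                                create_graph=True)[0]
    grad_norm = grads.view(b, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```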
Modality classification loss. Given an image and its target modality vector, we want the generator to produce images as close to the target modality as possible. Similar to StarGAN, the discriminator judges which modality an image belongs to; the difference is that we add an extra discriminator for the local branch. For real images, we define the modality classification loss as follows:
$\mathcal{L}^{r\_cls}_{D_g} = \mathbb{E}_{I_s}[-\log D^{cls}_g(s \mid I_s)]$, (7)
$\mathcal{L}^{r\_cls}_{D_l} = \mathbb{E}_{T_s}[-\log D^{cls}_l(s \mid T_s)]$, (8)
where $D^{cls}_g(s \mid I_s)$ denotes a probability distribution over the modality vector for the whole images and $D^{cls}_l(s \mid T_s)$ denotes the corresponding distribution for the tumor images. Similarly, we define the modality classification loss for fake images as follows:
$\mathcal{L}^{f\_cls}_{D_g} = \mathbb{E}_{I_s,s}[-\log D^{cls}_g(t \mid I_t)]$, (9)
$\mathcal{L}^{f\_cls}_{D_l} = \mathbb{E}_{T_s,s}[-\log D^{cls}_l(t \mid T_t)]$. (10)
Local consistency loss. Tt denotes the generated tumor images and T′t denotes the tumor areas of the generated whole image It. Since we want the segmentation network to better guide the translation of the global branch, we constrain these two to be similar, which alleviates distortion in brain tumor image translation. Similarly, the reconstructed tumor images T′s should be close to the source tumor images Ts. We therefore propose a local consistency loss as an extra constraint to improve translation quality, defined as follows:
$\mathcal{L}_{local} = \mathbb{E}[\|T_t - T'_t\|_1] + \mathbb{E}[\|T_s - T'_s\|_1]$. (11)
Reconstruction loss. The model can translate the source image Is into an image It of any modality. However, this does not guarantee that the generated image It only changes the style information while retaining all the content information of the source image Is. To address this, we feed It back into the translation network for cycle translation to obtain the reconstructed image I′s. If I′s is consistent with Is, the content information is not lost during translation. The reconstruction loss is defined as follows:
$\mathcal{L}_{rec} = \mathbb{E}[\|I_s - I'_s\|_1]$. (12)
Identity mapping loss. Given an image of an arbitrary modality, if the target modality happens to be its source modality, we denote the mapping as (Iidt, Tidt) = G(Is, Ts, s, s). We expect the generated images to be as consistent as possible with the source images and use an identity mapping loss to keep the generated images from losing the original information, defined as follows:
$\mathcal{L}_{idt} = \mathbb{E}[\|I_s - I_{idt}\|_1] + \mathbb{E}[\|T_s - T_{idt}\|_1]$. (13)
Semi-supervised loss. For images that are paired and labeled, we further use the ground truth to constrain the generated tumor images and alleviate local image deformation. We define the semi-supervised loss as follows:
$\mathcal{L}_{ss} = \mathbb{E}[\|T_t - GT\|_1]$, (14)
where GT denotes the ground truth of the target tumor images.
Total loss. Combining all the losses above, we define the final objective functions as follows:
$\mathcal{L}_D = \lambda^{adv}_{D_g}\mathcal{L}^{adv}_{D_g} + \lambda^{adv}_{D_l}\mathcal{L}^{adv}_{D_l} + \lambda^{cls}_{D_g}\mathcal{L}^{r\_cls}_{D_g} + \lambda^{cls}_{D_l}\mathcal{L}^{r\_cls}_{D_l}$, (15)
$\mathcal{L}_G = \lambda^{adv}_{G_g}\mathcal{L}^{adv}_{G_g} + \lambda^{adv}_{G_l}\mathcal{L}^{adv}_{G_l} + \lambda^{cls}_{D_g}\mathcal{L}^{f\_cls}_{D_g} + \lambda^{cls}_{D_l}\mathcal{L}^{f\_cls}_{D_l} + \lambda_{rec}\mathcal{L}_{rec} + \lambda_{local}\mathcal{L}_{local} + \lambda_{idt}\mathcal{L}_{idt} + \lambda_{ss}\mathcal{L}_{ss}$, (16)
where $\lambda^{adv}_{D_g}, \lambda^{adv}_{D_l}, \lambda^{cls}_{D_g}, \lambda^{cls}_{D_l}, \lambda_{rec}, \lambda_{local}, \lambda_{idt}, \lambda_{ss}$ are hyper-parameters that balance the losses. We set $\lambda^{adv}_{D_g}, \lambda^{adv}_{D_l}, \lambda^{cls}_{D_g}, \lambda^{cls}_{D_l}$ to 1.0 and $\lambda_{rec}, \lambda_{local}, \lambda_{idt}, \lambda_{ss}$ to 10.0 in this paper.
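As a rough illustration of how Eq. (16) could be assembled, the following sketch sums the generator-side losses with the stated weights; the dictionary keys and the function name are hypothetical, and the individual loss tensors are assumed to have been computed earlier in the training step.

```python
def generator_objective(losses, lambda_adv=1.0, lambda_cls=1.0,
                        lambda_rec=10.0, lambda_local=10.0,
                        lambda_idt=10.0, lambda_ss=10.0):
    """Sketch of Eq. (16): weighted sum of the generator-side losses.
    `losses` is an assumed dict of scalar tensors."""
    return (lambda_adv * (losses["adv_Gg"] + losses["adv_Gl"])
            + lambda_cls * (losses["fcls_Dg"] + losses["fcls_Dl"])
            + lambda_rec * losses["rec"]
            + lambda_local * losses["local"]
            + lambda_idt * losses["idt"]
            + lambda_ss * losses["ss"])
```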
4 EXPERIMENTS
4.1 SETTINGS
Datasets. We conduct all our experiments on the BRATS2020 dataset (Menze et al., 2014; Bakas et al., 2017; 2018). BRATS2020 provides brain tumor images of four modalities: T1-weighted (T1), T1-with-contrast-enhanced (T1ce), T2-weighted (T2) and T2-fluid-attenuated inversion recovery (Flair). Three kinds of tumors, named Whole Tumor (WT), Tumor Core (TC) and Enhancing Tumor (ET), are labeled for segmentation. We use 150 patients’ images as training samples, and 20 percent of them are treated as labeled and paired. We resize the images to 128×128. Details are given in the supplementary material.
Evaluation metrics. For the translation task, we use the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR) and learned perceptual image patch similarity (LPIPS) (Zhang et al., 2018a) to measure the similarity between the generated images and the ground truth. For the downstream segmentation task, we use the Dice score to measure the quality of the pseudo labels predicted by nnU-Net (Isensee et al., 2021), a widely acknowledged method that achieves state-of-the-art performance for medical image segmentation.
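One possible way to compute these similarity metrics, assuming the scikit-image and lpips packages and 2D images scaled to [0, 1], is sketched below; the exact evaluation pipeline used in the paper may differ.

```python
import numpy as np
import torch
import lpips                                    # pip package "lpips"
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

lpips_fn = lpips.LPIPS(net="alex")              # perceptual metric of Zhang et al. (2018a)

def evaluate_pair(generated, target):
    """Sketch: SSIM, PSNR, and LPIPS between a generated slice and its ground truth."""
    ssim = structural_similarity(target, generated, data_range=1.0)
    psnr = peak_signal_noise_ratio(target, generated, data_range=1.0)
    # LPIPS expects (N, 3, H, W) tensors in [-1, 1]; grayscale is repeated across channels
    to_t = lambda x: torch.from_numpy(np.stack([x] * 3)).float().unsqueeze(0) * 2 - 1
    lp = lpips_fn(to_t(generated), to_t(target)).item()
    return ssim, psnr, lp
```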
Baselines. We compare our translation results with StarGAN (Choi et al., 2018), DRIT++ (Lee et al., 2018), TarGAN (Chen et al., 2021a) and ReMIC (Shen et al., 2020). StarGAN uses a unified model to translate images to arbitrary modalities. DRIT++ disentangles an image into a content code and an attribute code during training and generates images using the content code extracted from the input image and an attribute code sampled from the standard normal distribution. TarGAN alleviates deformation of the target area by utilizing an extra shape controller. ReMIC generates images from multi-modality paired images; note that we implement a semi-supervised ReMIC for comparison.
Implementation details. We use PatchGAN (Isola et al., 2017) as the backbone for both the global and local discriminators, and U-Net as the backbone for the generator and the segmentation network. We train our model for 100 epochs: the generator and both discriminators use a learning rate of 10−4 for the first 50 epochs, which is then decayed linearly to 10−6 at the final epoch. The Adam optimizer (Kingma & Ba, 2014) is used with momentum parameters β1 = 0.9 and β2 = 0.999. We also apply data augmentation and normalization to the training samples. Details are given in the supplementary material. All experiments are conducted in PyTorch on an NVIDIA RTX 3090.
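A hedged sketch of this optimizer and learning-rate schedule (constant for the first 50 epochs, then linear decay to 1e-6 by the final epoch) is shown below; the function name and the per-epoch stepping granularity are assumptions.

```python
import torch

def make_optimizer(params, total_epochs=100, decay_start=50,
                   base_lr=1e-4, final_lr=1e-6):
    """Sketch of the Adam optimizer and linear-decay schedule described above."""
    opt = torch.optim.Adam(params, lr=base_lr, betas=(0.9, 0.999))

    def lr_lambda(epoch):
        if epoch < decay_start:
            return 1.0
        frac = (epoch - decay_start) / (total_epochs - decay_start)
        return 1.0 - frac * (1.0 - final_lr / base_lr)   # reaches final_lr/base_lr at the end

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched   # call sched.step() once per epoch
```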
4.2 RESULTS
In this section, we demonstrate our translation results compared with other baseline methods. Then we verify the effectiveness of the generated images via the downstream segmentation tasks.
4.2.1 TRANSLATION RESULTS
Qualitative evaluation. Figure 2 shows the qualitative results of our model and the other baselines. StarGAN and DRIT++ generate images with checkerboard artifacts in some cases, while ReMIC and TarGAN may lead to blur or deformation in the tumor areas. Our method generates images with clearer textures and more structural information.
Quantitative evaluation. We use SSIM, PSNR and LPIPS to measure the similarity between the generated images and ground truth.
As shown in Table 1, our method achieves higher SSIM and PSNR than the other baselines. Each value is averaged over all mappings from any source modality to any target modality. We also take, for every generated image, the smallest rectangle framing the tumor areas and compute the SSIM and PSNR of the cropped regions against their corresponding ground truth, denoted local SSIM and local PSNR; these two metrics reflect translation quality in the tumor areas. S3TAGAN achieves higher scores on both, which means that our method preserves more information in the tumor areas. LPIPS is a perceptual similarity metric: a lower value means higher similarity, indicating that our model achieves better translation quality.
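The local metrics could be computed along the following lines, assuming 2D slices in [0, 1] and a binary tumor mask; this is a sketch of the cropping step, not the authors' exact procedure.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def local_metrics(generated, target, tumor_mask):
    """Crop the smallest rectangle framing the tumor mask and compare the crops."""
    ys, xs = np.nonzero(tumor_mask)
    if len(ys) == 0:                               # no tumor in this slice
        return None
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    g, t = generated[y0:y1, x0:x1], target[y0:y1, x0:x1]
    # very small crops may require a smaller odd win_size for SSIM
    return (structural_similarity(t, g, data_range=1.0),
            peak_signal_noise_ratio(t, g, data_range=1.0))
```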
4.2.2 DOWNSTREAM SEGMENTATION RESULTS
Given an image from an arbitrary modality, we translate it to the other three modalities with our model and with each baseline. We then feed the resulting full multi-modality image sets into nnU-Net to compare their segmentation quality, measured by the Dice scores of the whole tumor (WT), tumor core (TC) and enhancing tumor (ET). As shown in Table 2, our method outperforms all the baselines, which also indicates that it generates more accurate information in the tumor areas.
4.3 EFFECTIVENESS OF LOCAL BRANCH
In this section, we conduct an ablation study to validate the effectiveness of the local branch, which is guided by the segmentation network in our proposed S3TAGAN. We replace the predicted pseudo labels with three alternatives: (a) ground-truth labels, (b) all-zero maps, and (c) random maps in which each value is either zero or one. As shown in Table 3, S3TAGAN with ground-truth labels is the theoretical upper bound, representing the best possible guidance for local tumor translation; the performance with segmentation guidance is close to this bound, which demonstrates the effectiveness of our model. S3TAGAN with all-zero maps is the theoretical lower bound, representing no segmentation-guided learning. S3TAGAN with random maps represents guidance on random areas rather than the tumor areas; translation quality improves only slightly in this case. These results show the robustness of our model.
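A minimal sketch of how the three mask variants might be produced is given below; the function signature and the (B, 3, H, W) tensor layout are assumptions, and the ground-truth variant simply substitutes the annotated masks for the predicted ones.

```python
import torch

def ablation_masks(pseudo_labels, mode):
    """Sketch of the mask variants used in the ablation on the local branch."""
    if mode == "zeros":                          # no guidance (theoretical lower bound)
        return torch.zeros_like(pseudo_labels)
    if mode == "random":                         # guidance on random areas
        return torch.randint(0, 2, pseudo_labels.shape,
                             device=pseudo_labels.device).float()
    return pseudo_labels                         # default: predicted pseudo labels
```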
4.4 RATIO OF PAIRED AND LABELED DATA
To test how the ratio of paired and labeled data used for semi-supervised learning affects our model, we vary the ratio to 10 percent and 100 percent. As shown in Table 4, more paired and labeled supervision improves translation quality. Note that the performance with 20 percent supervision, our default setting, is close to that with 100 percent supervision.
5 CONCLUSION
In this paper, we propose a semi-supervised segmentation-guided method called S3TAGAN to translate brain tumor images, which learns a mapping between any two modalities. We use unpaired images for training with only a few paired and labeled ones, which matches the practical situation. With the guidance of the segmentation network, the local branch focuses on the brain tumor areas and alleviates deformation in those areas, which benefits the quality of both the generated whole images and the tumor images. Experiments demonstrate that our model achieves better translation quality with strong robustness, and the results of the downstream segmentation task further verify its effectiveness.
1. What is the focus and contribution of the paper on tumor-aware generative adversarial networks?
2. What are the strengths of the proposed approach, particularly in addressing the issue of missing modalities and reducing local deformation?
3. What are the weaknesses of the paper regarding its clarity, experimentation, and reconstruction results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
In this paper, the authors proposed a Semi-Supervised Segmentation-Guided Tumor-Aware Generative Adversarial Network (S3TAGAN) to learn an end-to-end mapping from source modality to target modality using unpaired brain tumor images with few paired and labeled ones. In addition, downstream segmentation tasks further verify the effectiveness of the generated target modality. In general, it is well written and organized.
Strengths And Weaknesses
Strength:
To alleviate the absence of some modalities in multi-modality medical data, the authors generated pseudo-images for the missing modalities to provide more comprehensive information for diagnoses, which benefit the downstream segmentation task.
The global branch and local branch are designed to alleviate the local deformation caused by the cycle consistency loss. Reconstruction of image and source tumor images reduces content information loss during translation from source to target modality.
Qualitative and quantitative results in the multi-modality translation task verify the effectiveness of the proposed S3TAGAN.
Weaknesses:
Although the method is described in detail, the presentation still lacks fluency. When introducing the network framework, it is recommended to introduce the various parts in forward (data-flow) order; for example, it would be more appropriate to place the "Segmentation network" part before the "Generator", and the same applies to the introduction of the loss functions. Meanwhile, the overview of the framework (Fig. 1) is a little confusing: the lines of some modules overlap, so it is recommended to re-layout the figure or at least change the color of the lines.
The authors argued that the reconstruction of the image and the source tumor images can reduce the loss of content information during the translation process, so what is the quality of the reconstructed images? Is it possible to restore the content information of the original image? It is recommended to show some reconstruction results to increase persuasiveness.
In Section 3.2, the authors detailed the loss functions for various parts of the network and set multiple hyper-parameters to balance the loss. How are the values of these hyper-parameters set? Please elaborate it.
Clarity, Quality, Novelty And Reproducibility
This paper is logically clear, detailed in its description, and relatively complete in its experiments; it is a work of moderate quality.
ICLR | Title
Robust Policy Optimization in Deep Reinforcement Learning
Abstract
Entropy can play an essential role in policy optimization by selecting the stochastic policy, which eventually helps better explore the environment in reinforcement learning (RL). A proper balance between exploration and exploitation is challenging and might depend on the particular RL task. However, stochasticity often reduces as training progresses; thus, the policy becomes less exploratory. Therefore, in many cases, the policy can converge to a sub-optimal solution due to a lack of representative data during training. Moreover, this issue can be even more severe in high-dimensional environments. This paper investigates whether keeping a certain entropy threshold throughout training can help better policy learning. In particular, we propose an algorithm, Robust Policy Optimization (RPO), which leverages a perturbed Gaussian distribution to encourage high-entropy actions. We evaluated our method on various continuous control tasks from DeepMind Control, OpenAI Gym, Pybullet, and IsaacGym. We observed that in many settings, RPO increases the policy entropy early in training and then maintains a certain level of entropy throughout the training period. Eventually, our agent RPO shows consistently improved performance compared to PPO and other techniques such as data augmentation and entropy regularization. Furthermore, in several settings, our method stays robust in performance, while other baseline mechanisms fail to improve and even worsen the performance.
1 INTRODUCTION
Exploration in a high-dimensional environment is challenging due to the online nature of the task. In a reinforcement learning (RL) setup, the agent is responsible for collecting high-quality data: it has to decide on actions that maximize future return. In deep reinforcement learning, the policy and value functions are often represented as neural networks due to their flexibility in representing complex functions with continuous action spaces. If the environment is explored well, the learned policy will more likely lead to better data collection and, thus, a better policy. However, in high-dimensional observation spaces, the set of possible trajectories is larger; thus, having representative data is challenging. Moreover, it has been observed that deep RL exhibits the primacy bias, where the agent tends to rely heavily on earlier interactions and might ignore helpful interactions in the later part of training (Nikishin et al., 2022).
Maintaining stochasticity in the policy is considered beneficial, as it can encourage exploration (Mnih et al., 2016; Ahmed et al., 2019). Entropy measures the randomness of the actions; it is expected to go down as training progresses, and thus the policy becomes less stochastic. However, a lack of stochasticity might hamper exploration, especially in large-dimensional environments (high state and action spaces), as the policy can prematurely converge to a suboptimal policy. This scenario might result in low-quality data for agent training. In this paper, we are interested in observing the effect of maintaining a certain level of entropy throughout training and thus encouraging exploration.
We focus on a policy gradient-based approach with continuous action spaces. A common practice (Schulman et al., 2017; 2015) is to represent a continuous action as a Gaussian distribution and learn the parameters (µ and σ) conditioned on the state. The policy can be represented as a neural network that takes the state as input and outputs one set of Gaussian parameters per action dimension. The final action is then chosen as a sample from this distribution. This process inherently introduces randomness into the action: every time it samples an action for the same state, the action value might differ. Although, in expectation, the action value equals the mean of the distribution, this process introduces some randomness into the learning process. However, as time progresses, the randomness might reduce, and the policy becomes less stochastic.
We first notice that when this method is used to train a PPO-based agent, the entropy of the policy starts to decline as training progresses, as in Figure 1, even when the return performance is not good. We then pose the question: what if the agent keeps the policy stochastic throughout training? The goal is to enable the agent to keep exploring even when it achieves a certain level of learning. This might help especially in high-dimensional environments, where the state and action spaces often remain under-explored. We developed an algorithm called Robust Policy Optimization (RPO), which maintains stochasticity throughout training. We observe a consistent improvement in the performance of our method in many continuous control environments compared to standard PPO.
Viewing data augmentation through the lens of entropy, we observe empirically that it can help the policy achieve higher entropy than training without augmentation. However, this process often requires prior knowledge about the environment and a preprocessing step on the agent's experience. Moreover, such methods might result in an uncontrolled increase in action entropy, eventually hampering the return performance (Raileanu et al., 2020; Rahman & Xue, 2022). Another way to control entropy is to use an entropy regularizer (Mnih et al., 2016; Ahmed et al., 2019), which often shows beneficial effects. However, it has been observed that increasing entropy in this way has little effect in certain environments (Andrychowicz et al., 2020). These results show the difficulty of setting proper entropy throughout the agent’s training.
To this end, in this paper, we propose a mechanism for maintaining entropy throughout training. We propose to use a new distribution to represent continuous actions instead of the standard Gaussian. The policy network output is still the Gaussian distribution’s mean and standard deviation in our setup; however, we add a random perturbation to the mean before drawing an action sample. In particular, we add a random value z ∼ U(−α, α) to the mean µ to get a perturbed mean µ′ = µ + z. Finally, the action is taken from the perturbed Gaussian distribution a ∼ N(µ′, σ). The resulting distribution is shown in Figure 1. We see that the resulting distribution is flatter than the standard Gaussian; thus the samples spread more widely around the mean than those of a standard Gaussian, which are more concentrated near the mean. The uniform random number does not depend on states or policy parameters and thus can help the agent maintain a certain level of stochasticity throughout the training process. We name our approach Robust Policy Optimization (RPO) and compare it with the standard PPO algorithm and other entropy-controlled methods such as data augmentation and entropy regularization.
We evaluated our method in several environments from DeepMind Control (Tunyasuvunakool et al., 2020), OpenAI Gym (Brockman et al., 2016), PyBullet (Coumans & Bai, 2016–2021), and Nvidia IsaacGym (Makoviychuk et al., 2021). We observed that RPO performs consistently better than PPO, and the performance improvement is larger in high-dimensional environments. Moreover, RPO outperforms two data augmentation-based methods: RAD (Laskin et al., 2020) and DRAC (Raileanu et al., 2020). In addition, our method is simple and free from data augmentation assumptions and data preprocessing; still, it achieves better empirical performance than the data augmentation-based methods in many environments.
Further, we tested our method against entropy regularization, where the policy entropy is controlled by a coefficient that weights the entropy term. Empirically, we observe that RPO performs better in most environments, and some choices of the coefficient eventually lead to worse performance. Moreover, we observe that even when the agent is trained on a large amount of simulation experience, the performance might not stay consistent. In particular, in IsaacGym environments, the agent has access to a large number of samples due to high-performance GPU simulation; this abundance of data does not necessarily improve or maintain performance. In the classic CartPole environment, we observe that PPO agents quickly achieve a certain performance; however, they then fail to maintain it, and the performance drops quickly with further training. Similar results are found in the OpenAI BipedalWalker environment. In contrast, our method RPO is robust to more data and keeps improving or maintaining similar performance as we continue training agents.
In summary, we make the following contributions:
• We investigate whether keeping the policy stochastic throughout the training is beneficial for the RL training.
• We propose an algorithm, Robust Policy Optimization (RPO), that uses a perturbed Gaussian distribution to represent actions and is empirically shown to maintain a certain level of policy entropy throughout the training.
• We evaluate our method on 18 tasks from four RL benchmarks. Evaluation results show that our method RPO consistently performs better than standard PPO, two data augmentation-based methods RAD and DRAC, and entropy regularization.
2 PRELIMINARIES AND PROBLEM SETTINGS
Markov Decision Process (MDP). An MDP is defined as a tuple M = (S, A, P, R). An agent interacts with the environment and takes an action at ∈ A at a discrete timestep t when in state st ∈ S. The environment then moves to the next state st+1 ∈ S according to the transition probabilities P(st+1|st, at), and the agent receives a reward rt according to the reward function R.
Reinforcement Learning. In a reinforcement learning framework, the agent interacts with the MDP, and its goal is to learn a policy π ∈ Π that maximizes cumulative reward, where Π is the set of all possible policies. Given a state, the agent prescribes an action according to the policy π, and an optimal policy π∗ ∈ Π achieves the highest cumulative reward.
Policy Gradient. The policy gradient method is a class of reinforcement learning algorithms in which the objective is formulated to optimize the cumulative future return directly. In a deep reinforcement learning setup, the policy can be represented as a neural network that takes the state as input and outputs actions; the objective of this policy network is to optimize cumulative reward. In this paper, we focus on a particular policy gradient method, Proximal Policy Optimization (PPO) (Schulman et al., 2017), which is similar to the Natural Policy Gradient (Kakade, 2001) and Trust Region Policy Optimization (TRPO) (Schulman et al., 2015). The PPO objective is:
$L^{\pi} = -\mathbb{E}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}\, A(s_t, a_t)\right]$, (1)
where $\pi_\theta(a_t \mid s_t)$ is the probability of choosing action at given state st at timestep t under the current policy parameterized by θ, and $\pi_{\theta_{old}}(a_t \mid s_t)$ is the probability under the old policy parameterized by the previous parameters θold. The advantage A(st, at) is an estimate of how much better taking action at in state st is compared to the average return from that state.
In PPO, a surrogate objective is optimized by clipping the ratio between the current and old policies in equation 1. In the discrete-action case, the output is often treated as a categorical distribution whose dimension equals the number of actions. In the continuous-action case, the output can correspond to the parameters of a Gaussian distribution, where each action dimension is represented by a mean and variance; the action is then taken as a sample from this Gaussian distribution. In this paper, we mainly focus on the continuous-action case and assume a Gaussian distribution over actions.
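For concreteness, a standard clipped-surrogate implementation consistent with this description is sketched below; the clip coefficient of 0.2 is an assumed, commonly used value rather than one stated here.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Sketch of the clipped surrogate objective built on Eq. (1)."""
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```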
Entropy Measure. Entropy is a measure of uncertainty in a random variable. In the context of our discussion, we measure the entropy of a policy from the action distribution it produces given a state. The more entropic the action distribution, the more stochastic the decision, since the final action is a sample from this distribution. This measure can be seen as the policy’s stochasticity, and a higher-entropy distribution allows the agent to explore the state space more. Thus, entropy can play a role in controlling the amount of exploration the agent performs during learning. In this paper, we focus on this entropy measure and use it to refer to the stochasticity of a policy.
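For a diagonal Gaussian policy, this entropy has a closed form; the sketch below computes it per state, under the assumption that the policy outputs one standard deviation per action dimension (equivalently, torch.distributions.Normal(mu, sigma).entropy().sum(-1)).

```python
import math
import torch

def gaussian_policy_entropy(sigma):
    """Entropy of a diagonal Gaussian policy:
    H = sum_i [0.5 + 0.5*log(2*pi) + log sigma_i]."""
    return (0.5 + 0.5 * math.log(2 * math.pi) + torch.log(sigma)).sum(-1)
```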
3 ROBUST POLICY OPTIMIZATION (RPO)
Exploration at the start of training can help an agent collect diverse and representative data. However, as time progresses, the agent might reduce exploration and try to be more certain about each state. In standard PPO, we observe that the agent shows more randomness in its actions at the beginning (see Figure 1), and this randomness decreases as training progresses. We measure this randomness as entropy, which indicates the randomness of the actions taken. In the continuous case, a practical approach is to represent the action as a Gaussian distribution; the entropy of this distribution represents the amount of randomness in the action, which also indicates how much the policy can explore new states.
However, in a Gaussian distribution, the samples are concentrated toward the mean, which can limit exploration in some RL environments. This paper presents an alternative to this Gaussian distribution and proposes a new distribution with higher entropy than the standard Gaussian. The goal is to keep the entropy higher, or at a certain level, throughout agent training. In particular, we propose to combine a Gaussian distribution with a Uniform distribution. At each timestep of training, we perturb the mean µ of the Gaussian N(µ, σ) by adding a random number drawn from the Uniform U(−α, α) to obtain the perturbed mean µ′. The action sample is then taken from the new distribution N(µ′, σ). In this setup, the Gaussian parameters still depend on the state, as in the standard setup; however, the Uniform distribution does not depend on the state. Thus, as training progresses, the standard Gaussian distribution can become less entropic as the policy grows more confident in particular states, while the state-independent perturbation from the Uniform distribution still enables the policy to maintain stochasticity.
Figure 1 shows the resulting Gaussian distributions. This diagram is generated by first sampling data from a Gaussian distribution with mean µ = 0.0 and standard deviation σ = 1.0, and then from a Gaussian with the corresponding perturbed mean µ′ = µ + z and standard deviation σ = 1.0, where z ∼ U(−3.0, 3.0). The values are less centered around the mean for our perturbed Gaussian than for the standard one; therefore, samples from our proposed distribution have more entropy than those from the standard Gaussian.
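This comparison can be reproduced in a few lines of NumPy; the sample size and seed below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 100_000, 3.0

standard = rng.normal(loc=0.0, scale=1.0, size=n)    # N(0, 1)
z = rng.uniform(-alpha, alpha, size=n)               # z ~ U(-3, 3)
perturbed = rng.normal(loc=z, scale=1.0)             # N(mu + z, 1) with mu = 0

# The perturbed samples spread more widely, matching the flatter density in Figure 1.
print(standard.std(), perturbed.std())   # roughly 1.0 vs. sqrt(1 + alpha**2 / 3) ≈ 2.0
```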
Algorithm 1 shows the detailed procedure of our RPO method. The agent first collects experience trajectories using the current policy and stores them in a buffer D (lines 2 to 10); the agent uses the standard Gaussian to sample actions during these interaction steps (line 6). These experience data are then used to update the policy, optimizing the policy parameters θ. In this case, given a state st, the policy network outputs µ and σ for each action. The µ is then perturbed by adding a value z sampled from a Uniform distribution U(−α, α) (lines 13 and 14 in blue). After that, the probability is computed from the new distribution N(µ′, σ), which is eventually used to calculate the action log probabilities used to compute the loss Lπ. For the α hyperparameter, we report results using α = 0.5 unless otherwise specified.
Algorithm 1 Robust Policy Optimization (RPO)
1: Initialize parameter vectors θ for policy network.
2: for each iteration do
3:   D ← {}
4:   for each environment step do
5:     µ, σ ← πθ(.|st)
6:     at ∼ N(µ, σ)
7:     st+1 ∼ P(st+1|st, at)
8:     rt ∼ R(st, at)
9:     D ← D ∪ {(st, at, rt, st+1)}
10:  end for
11:  for each observation st in D do
12:    µ, σ ← πθ(.|st)
13:    z ∼ U(−α, α)
14:    µ′ ← µ + z
15:    prob ← N(µ′, σ)
16:    logp ← prob(at)
17:    Compute RL loss Lπ using logp, at, and value function.
18:  end for
19: end for
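A hedged PyTorch sketch of the update-time perturbation (lines 12-16) is given below; the assumption that the policy network returns a (mean, std) pair per action dimension is ours, not a detail stated in the paper.

```python
import torch
from torch.distributions import Normal

def rpo_log_prob(policy_net, states, actions, alpha=0.5):
    """Sketch: perturb the predicted mean with uniform noise before evaluating
    the log-probability (and entropy) of the stored actions."""
    mu, sigma = policy_net(states)
    z = torch.empty_like(mu).uniform_(-alpha, alpha)   # z ~ U(-alpha, alpha)
    dist = Normal(mu + z, sigma)                       # perturbed Gaussian
    return dist.log_prob(actions).sum(-1), dist.entropy().sum(-1)
```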
4 EXPERIMENTS
4.1 SETUP
Environments. We conducted experiments on continuous control tasks from four reinforcement learning benchmarks: DeepMind Control (Tunyasuvunakool et al., 2020), OpenAI Gym (Brockman et al., 2016), PyBullet (Coumans & Bai, 2016–2021), and Nvidia IsaacGym (Makoviychuk et al., 2021). These benchmarks contain diverse environments with many different tasks, from low- to high-dimensional observation and action spaces, so our evaluation covers a diverse set of tasks with varying difficulty. The IsaacGym environments run on GPU, which enables fast simulation; this makes it possible to collect a large amount of simulation experience quickly and speeds up RL training with deep models.
Baselines. We compare our method RPO with the PPO (Schulman et al., 2017) algorithm. RPO uses the perturbed Gaussian distribution to represent the action output of the policy network, as described in Section 3, whereas PPO uses the standard Gaussian distribution. Further, we observe that data augmentation can help increase the policy’s entropy, often by randomly perturbing observations, which might improve performance where higher entropy is preferred. Thus, we also compare our method with two data augmentation-based methods: RAD (Laskin et al., 2020) and DRAC (Raileanu et al., 2020). The pure data augmentation baseline RAD processes the data before passing it to the agent, and DRAC uses data augmentation to regularize the value and policy networks; both use PPO as their base RL algorithm. Another common approach to increase entropy is entropy regularization in the RL objective, where a coefficient determines how much weight the policy gives to the entropy term, and different weightings can result in different levels of entropy. We use entropy coefficients of 0.0, 0.01, 0.05, 0.5, 1.0, and 10.0 and compare their entropy and return performance with our algorithm RPO. Note that our method does not introduce any additional hyper-parameters and does not use the entropy coefficient. Implementation details of our algorithm and the baselines are in the Appendix. To account for stochasticity in the environment and policy, we run each experiment several times and report the mean and standard deviation; unless otherwise specified, we run each experiment with 10 random seeds.
4.2 RESULTS
We compare the return performance of our method with the baselines and also show the policy entropy to contrast the return results with the stochasticity of each policy.
Comparison with PPO. This evaluation is the most direct comparison, where no method uses any aid (such as data augmentation or entropy regularization) to control entropy. Figure 2 shows the comparison on DeepMind Control environments; results on more environments are in Appendix Figure 11, and the entropy comparison is given in Appendix Figure 12. In most scenarios, our method RPO shows a consistent performance improvement over PPO in these environments. In some environments, such as humanoid stand, humanoid run, and hopper hop, the PPO agent fails to learn any useful behavior and thus obtains a low episodic return, whereas RPO performs better and achieves a much higher episodic return. RPO also shows a better mean return than PPO in the other environments. In many settings, such as quadruped (walk, run, and escape), walker (stand, walk, run), fish swim, and acrobot swingup, the PPO agent stops improving after around 2M timesteps, while our RPO agent keeps improving throughout training. This gain might be due to proper management of policy entropy: in our setup the agent is encouraged to keep exploring as training progresses, whereas the PPO agent might settle at sub-optimal performance as its policy entropy decreases with more training. These results show the effectiveness of our method on diverse control tasks of varying complexity.
Results on the OpenAI Gym environments Pendulum and BipedalWalker are in Figure 3. Overall, our method RPO performs better than PPO. In the Pendulum environment, the PPO agent fails to learn any useful behavior in this setup, whereas RPO learns consistently as the timesteps increase and eventually solves the task. The policy entropy of RPO increases initially and then remains at a certain threshold, which might help the policy stay exploratory and collect more data. In contrast, the policy entropy of PPO decreases over time; eventually its performance plateaus and the policy stops learning, which might contribute to its poor performance.
In the BipedalWalker environment, we see that both PPO and RPO quickly learn up to a certain reward. However, with continued training, the performance of PPO drops after a certain period and even starts to worsen. In contrast, RPO stays robust as we train longer and eventually keeps improving. These results show the robustness of our method when ample training time is available. The entropy plot shows a similar pattern: PPO decreases entropy over time, while RPO keeps the entropy at a certain threshold.
Figure 4 shows a results comparison on the PyBullet environments Ant and Minitaur. We observe that our method RPO performs better than PPO in the Ant environment. In the Minitaur environment, PPO quickly (at around 2M timesteps) learns up to a certain reward and remains at the same performance as time progresses. In contrast, RPO starts from a lower performance but eventually surpasses PPO as time progresses. These results show that RPO consistently keeps improving its policy. The entropy pattern remains the same in both cases: PPO reduces entropy, while RPO keeps the entropy at a certain threshold that it learns automatically in each environment.
Figure 5 shows a results comparison on the IsaacGym environments Cartpole and BallBalance. In this setup, we run the simulation for up to 100M timesteps, which takes around 30 minutes per run per environment on a Quadro RTX 4000 GPU. For Cartpole, both PPO and RPO quickly reach a return of around 450. However, as we keep training, the performance of PPO starts to degrade over time, and its policy entropy keeps decreasing. On the other hand, our RPO agent keeps improving; notably, its performance never degrades over time, and its policy entropy remains at a threshold. This exploratory nature of RPO’s policy might help it keep learning and obtain better rewards. These results show the robustness of RPO over PPO even when abundant simulation is available; interestingly, more simulation data is not always beneficial for RL agents, and in our setup PPO even suffers from further training in the Cartpole environment. In the BallBalance environment (results averaged over 3 random-seed runs), our method RPO achieves a slight performance improvement over PPO. Overall, RPO performs better than PPO in the two IsaacGym environments.
Comparison with Data Augmentation. Results on all DeepMind Control environments are shown in Table 1. Our method achieves a better mean episodic return than PPO and the data augmentation baselines RAD and DRAC in most environments.
The return and policy entropy curves are shown in Appendix Figure 10. We observe that data augmentation slightly improves the base PPO algorithm, and the policy entropy is higher than that of the base PPO. However, our method RPO still maintains a better mean return than all the baselines. The entropy of our method increases at the initial timesteps of training but eventually stabilizes at a particular value. The data augmentation methods, especially RAD, show an increase in entropy throughout training; however, this increase does not translate into return performance. Moreover, data augmentation assumes some prior knowledge about the environment observations, and selecting a suitable augmentation requires domain knowledge; improper handling of the augmentation may result in worse performance.
In addition, the augmentation adds processing time, which can lead to longer training. In contrast, our method RPO does not require such domain knowledge and is thus readily applicable to any RL environment. Moreover, in return performance, our method outperforms the base PPO and these data augmentation methods.
Comparison with Entropy Regularization
Due to the variety of environments, the entropy requirement might differ, and an improper setting of the entropy coefficient can result in poor training performance. In the tested environments, we observe that a properly tuned coefficient sometimes improves performance, while other settings often worsen it (Figure 6). However, our method RPO consistently performs better than PPO and the different entropy coefficient baselines. Importantly, our method does not use these coefficient hyper-parameters and still consistently performs better across various environments, which shows its robustness to environment variability.
We observe that in some environments the coefficient 0.01 improves the performance of standard PPO (coefficient 0.0) while increasing the entropy. However, increasing the coefficient to 0.05 and above results in an unbounded entropy increase; thus, the performance worsens in most scenarios, and the agent fails to achieve a reasonable return. Results with all the coefficient variants are in Appendix Figure 9. Overall, our method RPO achieves better performance in the evaluated environments in our setup. Moreover, our method does not use the entropy coefficient hyperparameter and controls the entropy level automatically in each environment.
Ablation Study. We conducted experiments on the range of the α value in the Uniform distribution. Figure 7 shows the return and policy entropy comparison. We observe that the value of α affects the policy entropy and, thus, the return performance. A smaller value of α (e.g., 0.001) behaves similarly to PPO, where the policy entropy decreases over time, hampering performance. Much larger α values, such as 1000.0, make the policy nearly random, as the uniform perturbation dominates the Gaussian distribution (as in Algorithm 1); this keeps the entropy roughly constant but also hampers performance. Overall, a value between 0.1 and 3 often results in better performance. Due to its overall performance advantage, in this paper we report results with α = 0.5 for all 18 environments. Learning curves for more environments are in Appendix Figure 13.
5 RELATED WORK
Since the inception of practical policy optimization methods such as PPO (and, before it, TRPO and the Natural Policy Gradient), several studies have investigated their different algorithmic components (Engstrom et al., 2019; Andrychowicz et al., 2020; Ahmed et al., 2019; Ilyas et al., 2019). Entropy regularization enjoys faster convergence and improves policy optimization (Williams & Peng, 1991; Mei et al., 2020; Mnih et al., 2016; Ahmed et al., 2019). However, some empirical findings note the difficulty of setting a proper entropy coefficient and observe no performance gain in many environments (Andrychowicz et al., 2020). The Gaussian distribution is commonly used to represent continuous actions (Schulman et al., 2015; 2017), and many follow-up implementations also use it as a default setup (Huang et al., 2022; 2021). In contrast, in this paper we propose a perturbed Gaussian distribution to represent continuous actions and maintain policy entropy, and we show the empirical advantage of our method in several benchmark environments compared to the standard Gaussian-based method. Due to space limitations, we move some related work discussion to the Appendix.
6 CONCLUSION
This paper investigates the effect of entropy in policy optimization-based reinforcement learning. We design a method that, when applied to a policy network, keeps the entropy of the policy at a certain threshold throughout training. Our approach represents the policy with a Gaussian distribution whose mean is perturbed by a random number sampled from a uniform distribution. We observe that keeping a certain entropy threshold throughout training with our method can help better policy learning. Our proposed algorithm, Robust Policy Optimization (RPO), performs better than standard PPO on 18 continuous control tasks from four RL benchmarks: DeepMind Control, OpenAI Gym, Pybullet, and IsaacGym. Further, in many settings, our method performs better than other approaches that encourage higher entropy, such as data augmentation and entropy regularization. Overall, we show our method’s effectiveness and robustness in settings where standard PPO or other baselines fail to maintain, or even worsen, performance over time.
A APPENDIX
A.1 ADDITIONAL RELATED WORK
Another approach that adds entropy to the policy is data augmentation. We observe that the policy entropy often increases when the data are augmented, typically through some form of random process, and this increase can lead to better exploration in some environments. However, the type of data augmentation may be environment specific and thus requires domain knowledge about what kind of augmentation can be applied. In contrast, our method maintains an entropy threshold during the entire course of policy learning. We evaluate our algorithm against two data augmentation-based methods, RAD (Laskin et al., 2020) and DRAC (Raileanu et al., 2020), using an effective augmentation, random amplitude scaling, which was found to work well in the vector-based state setup.
Many other improvements to policy optimization have been investigated and result in better empirical success, such as Generalized Advantage Estimation (Schulman et al., 2016), normalization of advantages (Andrychowicz et al., 2020), and clipped policy and value objectives (Schulman et al., 2017; Engstrom et al., 2019; Andrychowicz et al., 2020). These improvements are now used in many standard implementations such as (Huang et al., 2022; 2021). In our implementation, we leverage these tricks when possible, so our method works on top of these essential improvements and is complementary to them; it can be used alongside these approaches. For a fair comparison, our baseline implementations also use these implementation tricks.
A.2 EXPERIMENTS
Implementation Details: Our algorithm and the baselines are based on the PPO (Schulman et al., 2017) implementation available in (Huang et al., 2022; 2021). This implementation incorporates many important advancements from the recent policy gradient literature (e.g., orthogonal initialization, GAE, entropy regularization); we refer the reader to (Huang et al., 2022) for further references. The pure data augmentation baseline RAD (Laskin et al., 2020) processes the data before passing it to the agent, and DRAC (Raileanu et al., 2020) uses data augmentation to regularize the loss of the value and policy networks. We experimented with vector-based states and used random amplitude scaling, proposed in RAD (Laskin et al., 2020), as the data augmentation method for RAD and DRAC. In random amplitude scaling, the state values are multiplied by random values generated uniformly within a range α to β; we used the suggested, better-performing range α = 0.6 to β = 1.2 (Laskin et al., 2020) for all experiments. Both RAD and DRAC use PPO as their base algorithm, while our RPO method does not use any form of data augmentation.
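A sketch of random amplitude scaling as used for the vector-state baselines is given below; applying the scaling element-wise across the state vector is an assumption of this illustration.

```python
import torch

def random_amplitude_scaling(states, low=0.6, high=1.2):
    """Sketch of the RAD-style augmentation: multiply each state entry by a
    value drawn uniformly from [low, high]."""
    scale = torch.empty_like(states).uniform_(low, high)
    return states * scale
```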
We use the hyperparameters reported in the PPO implementation for continuous action spaces (Huang et al., 2022; 2021), which incorporates best practices for continuous control tasks. To mitigate the effect of hyperparameter choice, we keep the hyperparameters the same for all environments, and we keep the same hyperparameters for all agents for a fair comparison. The common hyperparameters can be found in Table 2.
Entropy Regularization. The comparison of RPO with the entropy coefficient baselines is in Figure 8.
Data augmentation return curves and entropy: Figure 10 shows the return curves and entropy of the data augmentation baselines compared with RPO.
Results on DeepMind Control are in Figure 11.
Entropy plots for DeepMind Control are in Figure 12.
Ablation Study. Ablations on the α values of RPO are shown in Figure 13.
1. What is the focus and contribution of the paper on robotics?
2. What are the strengths of the proposed algorithm RPO, particularly in terms of its simplicity and effectiveness?
3. What are the weaknesses of the paper regarding the usage of the term "robust" and the lack of explanatory power in the experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper introduces a new algorithm RPO which builds on top of PPO. The algorithm takes the mean output of a Gaussian distribution, adds a uniform perturbation to the mean, then samples actions from the resulting Gaussian distribution with the perturbed mean. The proposed method is shown to be effective on a variety of high dimensional continuous control tasks.
Strengths And Weaknesses
Strength:
Proposed method is extremely straightforward, easy to implement, and appears to show good results.
Paper was easy to read and clearly written.
Weakness:
My initial problem with the paper is the word “robust”, since the term is quite overloaded and can have different meanings in different fields (e.g., robust control has a very specific meaning). The authors’ use of robustness throughout the paper seems quite vague and not well defined.
While the experiments are quite detailed and do show improvements over the benchmark on a set of challenging tasks, I do not believe they provide a better understanding of why the proposed approach works well, especially since the idea that it is beneficial to maintain a certain degree of high entropy throughout training seems somewhat counter-intuitive in many scenarios. I believe some toy examples would be helpful in showing why and under what conditions the proposed method works well; I think this would greatly enhance the quality of the paper.
Clarity, Quality, Novelty And Reproducibility
Clear to read, I do not see any major issues with reproducibility. The proposed method appears to be novel. |
ICLR | Title
Robust Policy Optimization in Deep Reinforcement Learning
Abstract
Entropy can play an essential role in policy optimization by selecting the stochastic policy, which eventually helps better explore the environment in reinforcement learning (RL). A proper balance between exploration and exploitation is challenging and might depend on the particular RL task. However, the stochasticity often reduces as the training progresses; thus, the policy becomes less exploratory. Therefore, in many cases, the policy can converge to sub-optimal due to a lack of representative data during training. Moreover, this issue can even be severe in high-dimensional environments. This paper investigates whether keeping a certain entropy threshold throughout training can help better policy learning. In particular, we propose an algorithm Robust Policy Optimization (RPO), which leverages a perturbed Gaussian distribution to encourage high-entropy actions. We evaluated our methods on various continuous control tasks from DeepMind Control, OpenAI Gym, Pybullet, and IsaacGym. We observed that in many settings, RPO increases the policy entropy early in training and then maintains a certain level of entropy throughout the training period. Eventually, our agent RPO shows consistently improved performance compared to PPO and other techniques such as data augmentation and entropy regularization. Furthermore, in several settings, our method stays robust in performance, while other baseline mechanisms fail to improve and even worsen the performance.
1 INTRODUCTION
Exploration in a high-dimensional environment is challenging due to the online nature of the task. In a reinforcement learning (RL) setup, the agent is responsible for collecting high-quality data. The agent has to decide on taking action which maximizes future return. In deep reinforcement learning, the policy and value functions are often represented as neural networks due to their flexibility in representing complex functions with continuous action space. If explored well, the learned policy will more likely lead to better data collection and, thus, better policy. However, in high-dimensional observation space, the possible trajectories are larger; thus, having representative data is challenging. Moreover, it has been observed that deep RL exhibit the primacy bias, where the agent has the tendency to rely heavily on the earlier interaction and might ignore helpful interaction at the later part of the training (Nikishin et al., 2022).
Maintaining stochasticity in the policy is considered beneficial, as it can encourage exploration (Mnih et al., 2016; Ahmed et al., 2019). Entropy measures the randomness of the actions; it is expected to go down as training progresses, and thus the policy becomes less stochastic. However, a lack of stochasticity might hamper exploration, especially in high-dimensional environments (large state and action spaces), as the policy can prematurely converge to a suboptimal policy. This scenario might result in low-quality data for agent training. In this paper, we are interested in observing the effect of maintaining a certain level of entropy throughout the training and thus encouraging exploration.
We focus on a policy gradient-based approach with continuous action spaces. A common practice (Schulman et al., 2017; 2015) is to represent the continuous action as a Gaussian distribution and learn its parameters (µ and σ) conditioned on the state. The policy can be represented as a neural network that takes the state as input and outputs one set of Gaussian parameters per action dimension. The final action is then chosen as a sample from this distribution. This process inherently introduces randomness in action, as the action value might differ every time an action is sampled for the same state. Though the expected action value equals the mean of the distribution, this process introduces some randomness into the learning process. However, as time progresses, this randomness might reduce, and the policy becomes less stochastic.
We first notice that when this method is used to train a PPO-based agent, the entropy of the policy starts to decline as the training progresses, as in Figure 1, even when the return performance is not good. Then we pose the question: what if the agent keeps the policy stochastic, i.e., maintains entropy, throughout the training? The goal is to enable the agent to keep exploring even when it achieves a certain level of learning. This process might help, especially in high-dimensional environments where the state and action spaces often remain unexplored. We developed an algorithm called Robust Policy Optimization (RPO), which maintains stochasticity throughout the training. We notice a consistent improvement in the performance of our method in many continuous control environments compared to standard PPO.
Viewing data augmentation through the lens of entropy, we observe empirically that it can help the policy achieve a higher entropy than without data augmentation. However, this process often requires prior knowledge about the environments and a preprocessing step on the agent’s experience. Moreover, such methods might result in an uncontrolled increase in action entropy, eventually hampering the return performance (Raileanu et al., 2020; Rahman & Xue, 2022). Another way to control entropy is to use an entropy regularizer (Mnih et al., 2016; Ahmed et al., 2019), which often shows beneficial effects. However, it has been observed that increasing entropy in such a way has little effect in specific environments (Andrychowicz et al., 2020). These results show the difficulty of setting a proper entropy level throughout the agent’s training.
To this end, in this paper, we propose a mechanism for maintaining entropy throughout the training. We propose to use a new distribution to represent continuous actions instead of the standard Gaussian. The policy network output is still the Gaussian distribution’s mean and standard deviation in our setup. However, we add a random perturbation to the mean before taking an action sample. In particular, we add a random value z ∼ U(−α, α) to the mean µ to get a perturbed mean µ′ = µ + z. Finally, the action is taken from the perturbed Gaussian distribution a ∼ N(µ′, σ). The resulting distribution is shown in Figure 1. We see that the resulting distribution becomes flatter than the standard Gaussian; thus the samples spread more widely around the mean than in the standard Gaussian, whose samples are more concentrated toward the mean. The uniform random number does not depend on states or policy parameters and thus can help the agent maintain a certain level of stochasticity throughout the training process. We name our approach Robust Policy Optimization (RPO) and compare it with the standard PPO algorithm and other entropy-controlled methods such as data augmentation and entropy regularization.
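One way to make the entropy claim precise (our own sketch; the paper argues it empirically via Figure 1): marginalizing over the state-independent perturbation z ∼ U(−α, α), the effective action density is p(a|s) = (1/2α) ∫ from −α to α of N(a; µ + z, σ²) dz, i.e., the Gaussian density convolved with a uniform density. Since adding an independent random variable can only increase differential entropy, H(p) ≥ H(N(µ, σ²)), and the gap grows as α becomes large relative to σ.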
We evaluated our method in several environments from DeepMind Control (Tunyasuvunakool et al., 2020), OpenAI Gym (Brockman et al., 2016), PyBullet (Coumans & Bai, 2016–2021), and Nvidia IsaacGym (Makoviychuk et al., 2021). We observed that our method RPO performs consistently better than PPO, and the performance improvement is larger in high-dimensional environments. Moreover, RPO outperforms two data augmentation-based methods: RAD (Laskin et al., 2020) and DRAC (Raileanu et al., 2020). In addition, our method is simple and free from data augmentation assumptions and data preprocessing; still, it achieves better empirical performance than the data augmentation-based methods in many environments.
Further, we tested our method against the entropy regularization method, where the entropy of a policy is controlled by a coefficient weight on the policy entropy. Empirically, we observe that our method RPO performs better in most environments, and some choices of coefficient eventually lead to worse performance. Moreover, we observe that even when the agent is trained on a large amount of simulation experience, the performance might not stay consistent. In particular, in IsaacGym environments, the agent has access to many samples due to the high-performing simulation on GPU. This abundance of data may not necessarily improve or maintain the performance. In the classic CartPole environment, we observe that PPO agents quickly achieve a certain performance; however, they then fail to maintain it, and the performance quickly drops as we train for longer. Similar results have been found in the OpenAI BipedalWalker environment. On the other hand, our method RPO shows robustness to more data and keeps improving or maintaining a similar performance as we keep training agents on more data.
In summary, we make the following contributions:
• We investigate whether keeping the policy stochastic throughout the training is beneficial for the RL training.
• We propose an algorithm, Robust Policy Optimization (RPO), that uses a perturbed Gaussian distribution to represent actions and is empirically shown to maintain a certain level of policy entropy throughout the training.
• We evaluate our method on 18 tasks from four RL benchmarks. Evaluation results show that our method RPO consistently performs better than standard PPO, two data augmentation-based methods RAD and DRAC, and entropy regularization.
2 PRELIMINARIES AND PROBLEM SETTINGS
Markov Decision Process (MDP). An MDP is defined as a tuple M = (S, A, P, R). Here an agent interacts with the environment and takes an action at ∈ A at a discrete timestep t when at state st ∈ S. After that, the environment moves to the next state st+1 ∈ S based on the dynamic transition probabilities P(st+1|st, at), and the agent receives a reward rt according to the reward function R.
Reinforcement Learning. In a reinforcement learning framework, the agent interacts with the MDP, and the agent’s goal is to learn a policy π ∈ Π that maximizes cumulative reward, where Π is the set of all possible policies. Given a state, the agent prescribes an action according to the policy π, and an optimal policy π* ∈ Π has the highest cumulative reward.
Policy Gradient. The policy gradient method is a class of reinforcement learning algorithms where the objective is formulated to optimize the cumulative future return directly. In a deep reinforcement learning setup, the policy can be represented as a neural network that takes the state as input and outputs actions. The objective of this policy network is to optimize for cumulative rewards. In this paper, we focus on a particular policy gradient method, Proximal Policy Optimization (PPO) (Schulman et al., 2017), which is similar to the Natural Policy Gradient (Kakade, 2001) and Trust Region Policy Optimization (TRPO) (Schulman et al., 2015). The following is the objective of the PPO:
Lπ = −Et[ (πθ(at|st) / πθold(at|st)) A(st, at) ]    (1)
where πθ(at|st) is the probability of choosing action at given state st at timestep t using the current policy parameterized by θ, and πθold(at|st) refers to the probability under an old policy parameterized by the previous parameter values θold. The advantage A(st, at) is an estimate of how much better taking action at in state st is compared to the average return from that state.
In PPO, a surrogate objective is optimized by applying clipping to the ratio of the current and old policies in equation 1. In the discrete-action case, the output is often treated as a categorical distribution whose dimension is the number of actions. In the continuous-action case, the output can correspond to the parameters of a Gaussian distribution, where each action dimension is represented by a mean and variance. Finally, the action is taken as a sample from this Gaussian distribution. In this paper, we mainly focus on the continuous-action case and assume a Gaussian distribution for the action.
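For completeness, the clipped surrogate takes the standard form from Schulman et al. (2017), which the paper does not restate: LCLIP = −Et[ min( rt(θ) A(st, at), clip(rt(θ), 1 − ε, 1 + ε) A(st, at) ) ], where rt(θ) = πθ(at|st) / πθold(at|st) is the probability ratio from equation 1 and ε is the clipping parameter.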
Entropy Measure. Entropy is a measure of uncertainty in a random variable. In the context of our discussion, we measure the entropy of a policy from the action distribution it produces given a state. The more entropic an action distribution is, the more stochastic the decision will be, as the final action is eventually a sample from the output action distribution. This measure can be seen as the policy’s stochasticity, and a higher-entropy distribution allows the agent to explore the state space more. Thus, entropy can play a role in controlling the amount of exploration the agent performs during learning. In this paper, we focus on the entropy measure and refer to the stochasticity of the policy by this measure.
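Concretely, for the diagonal Gaussian policies considered here, the entropy has the closed form H(N(µ, σ)) = Σi ½ log(2πe σi²), which depends only on the standard deviations. Whether the reported entropy curves use this closed form or a sample-based estimate is not stated in the paper, but either way the entropy falls as the learned σi shrink during training.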
3 ROBUST POLICY OPTIMIZATION (RPO)
Exploration at the start of training can help an agent to collect diverse and representative data. However, as time progresses, the agent might start to reduce exploration and try to be more certain about a state. In standard PPO, we observe that the agent shows more randomness in action at the beginning (see Figure 1), and this randomness becomes smaller as the training progresses. We measure this randomness in the form of entropy, which indicates the randomness of the actions taken. In the continuous case, a practical approach is to represent the action as the parameters of a Gaussian distribution. The entropy of this distribution represents the amount of randomness in action, which also indicates how much the policy can explore new states.
However, in a Gaussian distribution, the samples are concentrated toward the mean; thus, it can limit exploration in some RL environments. This paper presents an alternative to this Gaussian distribution and proposes a new distribution that accounts for higher entropy than the standard Gaussian. The goal is to keep the entropy higher, or at a certain level, throughout the agent training. In particular, we propose to combine a Gaussian distribution and a Uniform distribution. At each timestep of the algorithm training, we perturb the mean µ of the Gaussian N(µ, σ) by adding a random number drawn from the Uniform U(−α, α) to get the perturbed mean µ′. Then the sample for the action is taken from the new distribution N(µ′, σ). In this setup, the Gaussian parameters still depend on the state, similar to the standard setup. However, the Uniform distribution does not depend on the states. Thus, as training progresses, the standard Gaussian distribution can become less entropic as the policy optimizes to be more confident in particular states. However, the state-independent perturbation using the Uniform distribution still enables the policy to maintain stochasticity.
Figure 1 shows the diagram of the resulting Gaussian distributions. This diagram is generated by first sampling data from a Gaussian distribution with mean µ = 0.0 and standard deviation σ = 1.0, and then from the corresponding perturbed Gaussian with mean µ′ = µ + z and standard deviation σ = 1.0, where z ∼ U(−3.0, 3.0). We see that the values are less centered around the mean for our perturbed Gaussian than for the standard one. Therefore, samples from our proposed distribution should have more entropy than those from the standard Gaussian.
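A minimal NumPy sketch of this comparison (illustrative only; the exact script behind Figure 1 is not provided, and variable names are ours):

import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma, alpha = 100_000, 0.0, 1.0, 3.0

standard = rng.normal(mu, sigma, size=n)          # samples from N(mu, sigma)
z = rng.uniform(-alpha, alpha, size=n)            # one perturbation z ~ U(-alpha, alpha) per sample
perturbed = rng.normal(mu + z, sigma)             # samples from the perturbed Gaussian N(mu + z, sigma)

# The perturbed samples are less concentrated around the mean:
print("std of standard :", standard.std())        # ~ 1.0
print("std of perturbed:", perturbed.std())       # ~ sqrt(sigma**2 + alpha**2 / 3), i.e. about 2.0

A histogram of the two arrays reproduces the flatter shape shown in Figure 1.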
Algorithm 1 shows the detailed procedure of our RPO method. The agent first collects experience trajectory data using the current policy and stores it in a buffer D (lines 2 to 10). Next, the agent uses the standard Gaussian to sample the action in these interaction steps (line 6). Finally, these experience data are used to update the policy, optimizing for the policy parameter θ. In this case, given a state st, the policy network outputs µ and σ for each action. Then µ is perturbed by adding a value z, which is sampled from a Uniform distribution U(−α, α) (lines 13 and 14 in blue). After that, the probability is computed from the new distribution N(µ′, σ), which is eventually used to calculate the action log probabilities used for computing the loss Lπ. For the α hyperparameter, we report results using α = 0.5 unless otherwise specified.
Algorithm 1 Robust Policy Optimization (RPO)
1: Initialize parameter vectors θ for the policy network.
2: for each iteration do
3:     D ← {}
4:     for each environment step do
5:         µ, σ ← πθ(·|st)
6:         at ∼ N(µ, σ)
7:         st+1 ∼ P(st+1|st, at)
8:         rt ∼ R(st, at)
9:         D ← D ∪ {(st, at, rt, st+1)}
10:    end for
11:    for each observation st in D do
12:        µ, σ ← πθ(·|st)
13:        z ∼ U(−α, α)
14:        µ′ ← µ + z
15:        prob ← N(µ′, σ)
16:        logp ← prob(at)
17:        Compute RL loss Lπ using logp, at, and the value function.
18:    end for
19: end for
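The perturbation in lines 12-16 might look as follows in a PyTorch-style PPO update; this is our illustrative sketch, not the authors' code, and the policy-network interface is an assumption:

import torch
from torch.distributions import Normal

def rpo_log_prob(policy_net, states, actions, alpha=0.5):
    # policy_net is assumed to map states to per-dimension Gaussian parameters (mu, sigma)
    mu, sigma = policy_net(states)
    z = torch.empty_like(mu).uniform_(-alpha, alpha)   # state-independent perturbation, lines 13-14
    dist = Normal(mu + z, sigma)                       # perturbed Gaussian N(mu', sigma), line 15
    logp = dist.log_prob(actions).sum(-1)              # line 16; used in the PPO loss L_pi
    entropy = dist.entropy().sum(-1)
    return logp, entropy

Note that rollouts (lines 5-6) still sample actions from the unperturbed N(µ, σ); the perturbation only enters when recomputing log-probabilities for the policy update.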
4 EXPERIMENTS
4.1 SETUP
Environments. We conducted experiments on continuous control tasks from four reinforcement learning benchmarks: DeepMind Control (Tunyasuvunakool et al., 2020), OpenAI Gym (Brockman et al., 2016), PyBullet (Coumans & Bai, 2016–2021), and Nvidia IsaacGym (Makoviychuk et al., 2021). These benchmarks contain diverse environments with many different tasks, from low- to high-dimensional observation and action spaces. Thus our evaluation contains a diverse set of tasks with various difficulties. IsaacGym contains environments that run on the GPU, enabling fast simulation; this helps collect a large amount of simulation experience quickly and speeds up RL training of deep reinforcement learning models.
Baselines. We compare our method RPO with the PPO (Schulman et al., 2017) algorithm. Here our method RPO uses the perturbed Gaussian distribution to represent the action output from the policy network, as described in Section 3. In contrast, PPO uses a standard Gaussian distribution to represent its action output. Further, we observe that data augmentation can help increase the policy’s entropy by randomly perturbing observations. This process might improve performance where higher entropy is preferred. Thus, we compare our method with two data augmentation-based methods: RAD (Laskin et al., 2020) and DRAC (Raileanu et al., 2020). Here, the pure data augmentation baseline RAD processes the data before passing it to the agent, and DRAC uses data augmentation to regularize the value and policy networks. Both of these data augmentation methods use PPO as their base RL policy. Another common approach to increasing entropy is to use entropy regularization in the RL objective. A coefficient determines how much weight the policy gives to the entropy, and we observe that various weightings might result in different levels of entropy increase. We use entropy coefficients of 0.0, 0.01, 0.05, 0.5, 1.0, and 10.0 and compare their entropy and return performance with our algorithm RPO. Note that our method does not introduce any additional hyper-parameters and does not use the entropy coefficient hyper-parameters. The implementation details of our algorithm and baselines are in the Appendix. To account for the stochasticity in the environment and policy, we run each experiment several times and report the mean and standard deviation. Unless otherwise specified, we run each experiment with 10 random seeds.
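For reference, the entropy-regularization baseline augments the PPO loss in the standard way, Ltotal = Lπ − c · Et[ H(πθ(·|st)) ], where c is the entropy coefficient swept over the values listed above; the paper does not restate this formula. RPO has no such coefficient, and the residual entropy level is governed only by the perturbation range α.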
4.2 RESULTS
We compare the return performance of our method with the baselines and report the corresponding policy entropy to relate the return results to the entropy of each policy.
Comparison with PPO. This evaluation is a direct comparison where no method uses any aid (such as data augmentation or entropy regularization) to the entropy value. Figure 2 shows a results comparison on DeepMind Control environments. Results on more environments are in Appendix Figure 11, and the entropy comparison is given in Appendix Figure 12. In most scenarios, our method RPO shows consistent performance improvement in these environments compared to the PPO. In some environments, such as humanoid stand, humanoid run, and hopper hop, the PPO agent fails to learn any useful behavior and thus results in a low episodic return. In contrast, our method RPO shows better performance and achieves a much higher episodic return. The RPO also shows a better mean return than the PPO in the other environments. In many settings, such as quadruped (walk, run, and escape), walker (stand, walk, run), fish swim, and acrobot swingup, the PPO agents stop improving their performance after around 2M timesteps. In contrast, our agent RPO shows consistent improvement throughout training. This performance gain might be due to the proper management of policy entropy, as in our setup the agent is encouraged to keep exploring as training progresses. On the other hand, the PPO agent might settle at a sub-optimal performance as the policy entropy, in this case, decreases as the agent trains for more timesteps. These results show the effectiveness of our method in diverse control tasks with varying complexity.
Results for the OpenAI Gym environments Pendulum and BipedalWalker are in Figure 3. Overall, our method RPO performs better compared to PPO. In the Pendulum environment, the PPO agent fails to learn any useful behavior in this setup. In contrast, RPO consistently learns as timesteps increase and eventually learns the task. We see that the policy entropy of RPO increases initially and eventually remains at a certain threshold, which might help the policy stay exploratory and collect more data. In contrast, the PPO policy entropy decreases over time; thus, eventually, the performance remains the same, and the policy stops learning. This scenario might contribute to the poor performance of the PPO.
In the BipedalWalker environment, we see that both PPO and RPO quickly learn up to a certain reward. However, as we keep training both policies, we observe that after a certain period, the PPO’s performance drops and even starts to become worse. In contrast, the RPO stays robust as we train for longer and eventually keeps improving the performance. These results show the robustness of our method when ample training time is available. The entropy plot shows a similar pattern, as PPO decreases entropy over time, and RPO keeps the entropy at a certain threshold.
Figure 4 shows a results comparison on PyBullet environments: Ant and Minitaur. We observe that our method RPO performs better than the PPO in the Ant environment. In the Minitaur environment, PPO quickly (at around 2M) learns up to a certain reward and remains at the same performance as time progresses. In contrast, RPO starts from a lower performance, eventually surpassing the PPO’s performance as time progresses. These results show the robustness of the RPO method in consistently improving the policy. The entropy pattern remains the same in both cases; PPO reduces entropy, while RPO keeps the entropy at a certain threshold, which it learns automatically in each environment.
Figure 5 shows a results comparison on IsaacGym environments: Cartpole and BallBalance. In this setup, we run the simulation up to 100M timesteps, which takes around 30 minutes per run per environment on a Quadro RTX 4000 GPU. We see that for Cartpole, both PPO and RPO quickly reach a reward of around 450. However, as we kept training for longer, the performance of PPO started to degrade over time, and the policy entropy kept decreasing. On the other hand, our RPO agent keeps improving the performance; notably, the performance never degrades over time. The policy entropy plot shows that the entropy remains at a threshold. This exploratory nature of RPO’s policy might help it keep learning and get better rewards. These results show the robustness of our method RPO over PPO even when an abundance of simulation data is available. Interestingly, more simulation data might not always be good for RL agents. In our setup, the PPO even suffers from further training in the Cartpole environment. In the BallBalance environment (results are averaged over 3 random seed runs), our method RPO achieves a slight performance improvement over PPO. Overall, our method RPO performs better than the PPO in the two IsaacGym environments.
Comparison with Data Augmentation. Results on all DeepMind Control environments are shown in Table 1. Our method achieves a better mean episodic return in most environments than PPO and the other data augmentation baselines, RAD and DRAC.
The return and policy entropy curves are shown in Appendix Figure 10. We observe that data augmentation slightly improves the base PPO algorithm, and the policy entropy is higher than that of the base PPO. However, our method RPO still maintains a better mean return than all the baselines. The entropy of our method shows an increase at the initial timesteps of the training; however, it eventually becomes stable at a particular value. The data augmentation method, especially RAD, shows an increase in entropy throughout the training process; however, this increase does not translate into return performance. Moreover, data augmentation assumes some prior knowledge about the environment observation, and selecting a suitable data augmentation method requires domain knowledge. Improper handling of the data augmentation may result in worsened performance.
In addition, the augmentation method adds processing time, which can contribute to longer training times. In contrast, our method RPO does not require such domain knowledge and is thus readily applicable to any RL environment. Moreover, in return performance, our method performs better than the base PPO and these data augmentation methods.
Comparison with Entropy Regularization
Due to the variety of the environments, the entropy requirement might differ between them. Thus, an improper setting of the entropy coefficient might result in bad training performance. In the tested environments, we observe that sometimes the performance improves with a properly tuned coefficient value, while other settings often worsen the performance (Figure 6). However, our method RPO consistently performs better than PPO and the different entropy coefficient baselines. Importantly, our method does not use these coefficient hyper-parameters and still consistently performs better in various environments, which shows our method’s robustness to environment variability.
We observe that in some environments, the coefficient 0.01 improves the performance of standard PPO (coefficient 0.0) while increasing the entropy. However, increasing the coefficient to 0.05 and above results in an unbounded entropy increase. Thus, the performance worsens in most scenarios, where the agent fails to achieve a reasonable return. Results with all the coefficient variants are in Appendix Figure 9. Overall, our method RPO achieves better performance in the evaluated environments in our setup. Moreover, our method does not use the entropy coefficient hyperparameter and controls the entropy level automatically in each environment.
Ablation Study. We conducted experiments on the range of the α value in the Uniform distribution. Figure 7 shows the return and policy entropy comparison. We observe that the value of α affects the policy entropy and, thus, the return performance. A smaller value of α (e.g., 0.001) behaves similarly to PPO, where policy entropy decreases over time, thus hampering performance. Higher α values, such as 1000.0, make the policy somewhat random as the uniform distribution dominates over the Gaussian distribution (as in Algorithm 1). This scenario keeps the entropy at a roughly constant level; thus, the performance is hampered. Overall, a value between 0.1 and 3 often results in better performance. Due to its overall performance advantage, in this paper we report results with α = 0.5 for all 18 environments. Learning curves for more environments are in Appendix Figure 13.
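A rough way to read the extreme cases (our own back-of-the-envelope argument): when α is much smaller than σ, the perturbation is negligible and the effective policy reduces to the standard Gaussian, matching the PPO-like behavior at α = 0.001. When α is much larger than σ, the marginal action distribution approaches U(µ − α, µ + α), whose differential entropy is approximately log(2α) and essentially independent of the learned σ, which explains why very large α (e.g., 1000.0) pins the entropy at a constant level and makes the policy close to random.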
5 RELATED WORK
Since the inception of the practical policy optimization method PPO (and its predecessors TRPO and Natural Policy Gradient), several studies have investigated its different algorithmic components (Engstrom et al., 2019; Andrychowicz et al., 2020; Ahmed et al., 2019; Ilyas et al., 2019). Entropy regularization enjoys faster convergence and improves policy optimization (Williams & Peng, 1991; Mei et al., 2020; Mnih et al., 2016; Ahmed et al., 2019). However, some empirical studies report difficulty in setting a proper entropy level and observe no performance gain in many environments (Andrychowicz et al., 2020). A Gaussian distribution is commonly used to represent continuous actions (Schulman et al., 2015; 2017). Thus, many follow-up implementations also use this Gaussian distribution as a default setup (Huang et al., 2022; 2021). In contrast, in this paper, we propose a perturbed Gaussian distribution to represent the continuous action in order to maintain policy entropy. Furthermore, we showed the empirical advantage of our method in several benchmark environments compared to the standard Gaussian-based method. Due to space limitations, we moved some related work discussions to the Appendix.
6 CONCLUSION
This paper investigates the effect of entropy in policy optimization-based reinforcement learning. We design a method that, when applied to a policy network, can keep the entropy of the policy at a certain threshold throughout training. Our approach represents the policy with a Gaussian distribution whose mean is perturbed by a random number sampled from a uniform distribution. We observe that keeping a certain entropy threshold throughout training using our method can help better policy learning. Our proposed algorithm, Robust Policy Optimization (RPO), performs better than the standard PPO on 18 continuous control tasks from four RL benchmarks: DeepMind Control, OpenAI Gym, Pybullet, and IsaacGym. Further, our method performs better in many settings than other approaches that encourage higher entropy, such as data augmentation and entropy regularization. Overall, we show our method’s effectiveness and robustness in settings where standard PPO or other baseline methods fail to maintain performance or even worsen over time.
A APPENDIX
A.1 ADDITIONAL RELATED WORK
Another approach that adds entropy to the policy is data augmentation. We observe that the policy entropy often increases when the data is augmented by some form of random process. This increase in entropy can eventually lead to better exploration in some environments. However, the type of data augmentation might be environment-specific and thus requires domain knowledge about what kind of data augmentation can be applied. In contrast to this approach, our method maintains an entropy threshold during the entire policy learning. Moreover, we evaluated our algorithm against two data augmentation-based methods, RAD (Laskin et al., 2020) and DRAC (Raileanu et al., 2020). Finally, we evaluate an effective data augmentation approach, random amplitude scaling, which was found to work well in vector-based state setups.
Many other forms of improvement have been investigated in policy optimization, resulting in better empirical success, such as Generalized Advantage Estimation (Schulman et al., 2016), normalization of advantages (Andrychowicz et al., 2020), and clipped policy and value objectives (Schulman et al., 2017; Engstrom et al., 2019; Andrychowicz et al., 2020). These improvements eventually led to overall performance gains and are now used in many standard implementations such as (Huang et al., 2022; 2021). In our implementation, we leverage these implementation tricks when possible; thus our method works on top of these essential improvements. Furthermore, our method is complementary to these approaches and can be used along with them. In this paper, our implementations of the baselines also use these tricks, for a fair comparison.
A.2 EXPERIMENTS
Implementation Details: Our algorithm and the baselines are based on the PPO (Schulman et al., 2017) implementation available in (Huang et al., 2022; 2021). This implementation incorporates many important advancements from the existing literature on policy gradients in recent years (e.g., orthogonal initialization, GAE, entropy regularization). We refer the reader to (Huang et al., 2022) for further references. The pure data augmentation baseline RAD (Laskin et al., 2020) processes the data before passing it to the agent, and DRAC (Raileanu et al., 2020) uses data augmentation to regularize the losses of the value and policy networks. We experimented with vector-based states and used the random amplitude scaling proposed in RAD (Laskin et al., 2020) as the data augmentation method for RAD and DRAC. In random amplitude scaling, the state values are multiplied by random values drawn uniformly from a range α to β. We used the suggested and better-performing range α = 0.6 to β = 1.2 (Laskin et al., 2020) for all the experiments. Moreover, both RAD and DRAC use PPO as their base algorithm. However, our RPO method does not use any form of data augmentation.
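A minimal sketch of random amplitude scaling for vector observations (our illustration; whether the scaling factor is shared across dimensions or drawn per dimension is an implementation detail we assume here to be per dimension):

import numpy as np

def random_amplitude_scaling(obs, low=0.6, high=1.2, rng=None):
    # Multiply each state dimension by an independent factor drawn uniformly from [low, high].
    rng = rng if rng is not None else np.random.default_rng()
    scale = rng.uniform(low, high, size=np.shape(obs))
    return obs * scale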
We use the hyperparameters reported in the PPO implementation for continuous action spaces (Huang et al., 2022; 2021), which incorporates best practices for continuous control tasks. Furthermore, to mitigate the effect of hyperparameter choice, we keep them the same for all the environments. Further, we keep the same hyperparameters for all agents for a fair comparison. The common hyperparameters can be found in Table 2.
Entropy Regularization. The results comparing RPO with the entropy-coefficient baselines are in Figure 8.
Data augmentation return curves and entropy: Figure 10 shows the data augmentation return curves and the entropy comparison with RPO.
Results on DeepMind Control are in Figure 11.
Entropy Plot for DeepMind Control is in Figure 12.
Ablation Study: Ablations on the α values of RPO are shown in Figure 13. | 1. What is the focus and contribution of the paper regarding robust policy optimization?
2. What are the strengths of the proposed approach, particularly in terms of improving exploration and performance?
3. What are the weaknesses of the paper, especially regarding its empirical nature and lack of theoretical justifications?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or suggestions regarding the comparisons with other RL methods that perturb the policy for improved robustness and performance during training? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this work the authors proposed a new robust policy optimization method in which the main approach is to randomly perturb the mean of the Gaussian policy to improve exploration. The main motivation is that stochastic policies that have high entropy tend to produce better RL performance, and so the authors propose to constantly perturb this policy during training. Empirically they showed that this new policy outperforms standard RL methods including the ones with data augmentation schemes (e.g., PPO, RAD, DRAC).
Strengths And Weaknesses
Strengths: The paper is generally clear in explaining how a perturbed Gaussian policy can maintain policy entropy throughout training and help with RL performance. The paper includes pseudo-code and detailed hyper-parameters for readers to re-implement the work, so this work should be reproducible. The authors evaluated the RPO method on quite a number of domains, and in some cases the proposed method is more powerful than SOTA.
Weaknesses: The justification of this method is mainly empirical; there are no theoretical justifications in terms of why the proposed method would work. If the main contribution is on increasing entropy for better exploration, then there are not enough baseline comparisons with other RL methods that perturb the policy to improve robustness as well as performance during training. For example, one such algorithm is NoisyNet, which also perturbs the Gaussian policy network to improve robustness.
Clarity, Quality, Novelty And Reproducibility
The paper is generally clearly written with most concepts well explained, though some parts would benefit from more explanation (especially the motivations/intuitions of the main algorithm). This work also details the experimental setup and algorithm settings (hyper-parameters). It also includes links to pseudo-code for users to reproduce the results.
However, my main concern about this work is its novelty. Perturbing a policy to achieve better exploration (entropy) is not a new idea and more comparisons and theoretical analysis are needed to back the authors' claims.
ICLR | Title
Robust Policy Optimization in Deep Reinforcement Learning
Abstract
Entropy can play an essential role in policy optimization by selecting the stochastic policy, which eventually helps better explore the environment in reinforcement learning (RL). A proper balance between exploration and exploitation is challenging and might depend on the particular RL task. However, the stochasticity often reduces as the training progresses; thus, the policy becomes less exploratory. Therefore, in many cases, the policy can converge to sub-optimal due to a lack of representative data during training. Moreover, this issue can even be severe in high-dimensional environments. This paper investigates whether keeping a certain entropy threshold throughout training can help better policy learning. In particular, we propose an algorithm Robust Policy Optimization (RPO), which leverages a perturbed Gaussian distribution to encourage high-entropy actions. We evaluated our methods on various continuous control tasks from DeepMind Control, OpenAI Gym, Pybullet, and IsaacGym. We observed that in many settings, RPO increases the policy entropy early in training and then maintains a certain level of entropy throughout the training period. Eventually, our agent RPO shows consistently improved performance compared to PPO and other techniques such as data augmentation and entropy regularization. Furthermore, in several settings, our method stays robust in performance, while other baseline mechanisms fail to improve and even worsen the performance.
1 INTRODUCTION
Exploration in a high-dimensional environment is challenging due to the online nature of the task. In a reinforcement learning (RL) setup, the agent is responsible for collecting high-quality data. The agent has to decide on taking action which maximizes future return. In deep reinforcement learning, the policy and value functions are often represented as neural networks due to their flexibility in representing complex functions with continuous action space. If explored well, the learned policy will more likely lead to better data collection and, thus, better policy. However, in high-dimensional observation space, the possible trajectories are larger; thus, having representative data is challenging. Moreover, it has been observed that deep RL exhibit the primacy bias, where the agent has the tendency to rely heavily on the earlier interaction and might ignore helpful interaction at the later part of the training (Nikishin et al., 2022).
Maintaining stochasticity in policy is considered beneficial, as it can encourage exploration (Mnih et al., 2016; Ahmed et al., 2019). Entropy is the randomness of the actions, which is expected to go down as the training progress, and thus the policy becomes less stochastic. However, lack of stochasticity might hamper the exploration, especially in the large dimensional environment (high state and action spaces), as the policy can prematurely converge to a suboptimal policy. This scenario might result in low-quality data for agent training. In this paper, we are interested in observing the effect when we maintain a certain level of entropy throughout the training and thus encourage exploration.
We focus on a policy gradient-based approach with continuous action spaces. A common practice (Schulman et al., 2017; 2015) is to represent continuous action as the Gaussian distribution and learn the parameters (µ, and σ) conditioned on the state. The policy can be represented as a neural network, and it takes the state as input and outputs the one Gaussian parameters per action dimension. Then the final action is chosen as a sample from this distribution. This process inherently introduces
randomness in action as every time it samples action for the same state, the action value might differ. Though, in expectation, the action value is the same as the mean of the distribution, this process introduces some randomness in the learning process. However, as time progresses, the randomness might reduce, and the policy becomes less stochastic.
We first notice that when this method is used to train a PPO-based agent, the entropy of the policy starts to decline as the training progresses, as in Figure 1, even when the return performance is not good. Then we pose the question; what if the agent keeps the policy stochastic or entropy throughout the training? The goal is to enable the agent to keep exploring even when it achieves a certain level of learning. This process might help, especially in high-dimensional environments where the state and action spaces often remain unexplored. We developed an algorithm called Robust Policy Optimization (RPO), which maintains stochasticity throughout the training. We notice a consistent improvement in the performance of our method in many continuous control environments compared to standard PPO.
Seeing the data augmentation through the lens of entropy, we observe that empirically, it can help the policy achieve a higher entropy than without data augmentation. However, this process often requires prior knowledge about the environments and a preprocessing step of the agent experience. Moreover, such methods might result in an uncontrolled increase in action entropy, eventually hampering the return performance (Raileanu et al., 2020; Rahman & Xue, 2022). Another way to control entropy is to use an entropy regularizer (Mnih et al., 2016; Ahmed et al., 2019), which often shows beneficial effects. However, it has been observed that increasing entropy in such a way has little effect in specific environments (Andrychowicz et al., 2020). These results show difficulty in setting proper entropy throughout the agent’s training.
To this end, in this paper, we propose a mechanism for maintaining entropy throughout the training. We propose to use a new distribution to represent continuous action instead of standard Gaussian. The policy network output is still the Gaussian distribution’s mean and standard deviation in our setup. However, we add a random perturbation on the mean before taking an action sample. In particular, we add a random value z ∼ U(−α, α) to the mean µ to get a perturbed mean µ′ = µ+ z. Finally, the action is taken from the perturbed Gaussian distribution a ∼ N(µ′, σ). The resulting distribution is shown in Figure 1. We see that the resulting distribution becomes flatter than the standard Gaussian. Thus the sample spread more around the center of the mean than standard Gaussian, whose samples are more concentrated toward means. The uniform random number does not depend on states and policy parameters and thus can help the agent to maintain a certain level of stochasticity throughout the training process. We name our approach Robust Policy Optimization (RPO) and compare it with the standard PPO algorithm and other entropy-controlled methods such as data augmentation and entropy regularization.
We evaluated our method in several environments from DeepMind Control (Tunyasuvunakool et al., 2020), OpenAI Gym (Brockman et al., 2016), PyBullet (Coumans & Bai, 2016–2021), and Nvidia IsaacGym (Makoviychuk et al., 2021). We observed that our method RPO performs consistently better than PPO, and the performance improvement shows larger in high-dimensional environments. Moreover, RPO outperform two data augmentation-based method: RAD (Laskin et al., 2020) and DRAC (Raileanu et al., 2020). In addition to that, our method is simple and free from the type of data augmentation assumption and data preprocessing; still, our method achieves better empirical performance than the data augmentation-based methods in many environments.
Further, we tested our method against the entropy regularization method, where the entropy of a policy is controlled by a coefficient weight of the policy entropy. Empirically, we observe that our method RPO performs better in most environments, and some choice of coefficient eventually leads to worse performance. Moreover, we observe that even when the agent is trained for a large simulation experience, the performance might not stay consistent. In particular, in IsaacGym environments, the agent has access to a large sample due to the high-performing simulation on GPU. This abundance of data may not necessarily improve or maintain the performance. In the classic CartPole environment, we observe that PPO agents quickly achieve a certain performance; however, it then fails to maintain it, and the performance quickly drops as we train for more. Similar results have been found in the OpenAI BipedalWalker environments. On the other hand, our method RPO shows robustness to more data and keeps improving or maintaining a similar performance as we keep training agents for more data.
In summary, we make the following contributions:
• We investigate whether keeping the policy stochastic throughout the training is beneficial for the RL training.
• We propose an algorithm, Robust Policy Optimization (RPO), that uses a perturbed Gaussian distribution to represent actions and is empirically shown to maintain a certain level of policy entropy throughout the training.
• We evaluate our method on 18 tasks from four RL benchmarks. Evaluation results show that our method RPO consistently performs better than standard PPO, two data argumentation-based methods RAD and DRAC, and entropy regularization.
2 PRELIMINARIES AND PROBLEM SETTINGS
Markov Decision Process (MDP). An MDP is defined as a tuple M = (S,A,P,R). Here an agent interacts with the environment and take an action at ∈ A at a discrete timestep t when at state st ∈ S. After that the environment moves to next state st+1 ∈ S based on the dynamic transition probabilities P(st+1|st, at) and the agent recieves a reward rt according to the reward functionR. Reinforcement Learning In a reinforcement learning framework, the agent interacts with the MDP, and the agent’s goal is to learn a policy π ∈ Π that maximizes cumulative reward, where Π is the set of all possible policies. Given a state, the agent prescribes an action according to the policy π, and an optimal policy π∗ ∈ Π has the highest cumulative rewards. Policy Gradient. The policy gradient method is a class of reinforcement learning algorithms where the objective is formulated to optimize the cumulative future return directly. In a deep reinforcement learning setup, the policy can be represented as a neural network that takes the state as input and output actions. The objective of this policy network is to optimize for cumulative rewards. In this paper, we focus on a particular policy gradient method, Proximal Policy Optimization (PPO) (Schulman et al., 2017), which is similar to Natural Policy Gradient (Kakade, 2001) and Trust Region Policy Optimization (TRPO) (Schulman et al., 2015). The following is the objective of the PPO:
Lπ = −Et[ πθ(at|st) πθold(at|st) A(st, at)] (1)
, where πθ(at|st) is the probability of choosing action at given state st at timestep t using the current policy parameterized by θ. On the other hand, the πθold(at|st) refer to the probabilities of using an old policy parameterized by previous parameter values θold. The advantage A(st, at) is an estimation which is the advantage of taking action at at st compared to average return from that state.
In PPO, a surrogate objective is optimized by applying clipping on the current and old policy ratio on equation 1. In the case of the discrete action, the output is often used as the categorical distribution, where the dimension is the number of actions. On the other hand, in continuous action cases, the output can correspond to parameters of Gaussian distribution, where each action dimension is represented by mean and variance. Finally, the action is taken as the sample from this Gaussian distribution. In this paper, we mainly focus on the continuous action case and assume the Gaussian distribution for action.
Entropy Measure Entropy is a measure of uncertainty in the random variable. In the context of our discussion, we measure the entropy of a policy from the action distribution it produces given a state. The more entropic an action distribution is, the more stochastic the decision would be as the final action is eventually a sample from the output action distribution. This measure can be seen as the policy’s stochasticity, and higher entropy distribution allows agents to explore the state space more. Thus, entropy can play a role in controlling the amount of exploration the agent performs during learning. In this paper, we focus on the entropy measure and refer to the stochasticity of policy by this measure.
3 ROBUST POLICY OPTIMIZATION (RPO)
Exploration at the start of training can help an agent to collect diverse and representative data. However, as time progresses, the agent might start to reduce the exploration and try to be more certain about a state. In standard PPO, we observe that the agent shows more randomness in action at the beginning (see Figure 1), and it becomes smaller as the training progresses. We measure this randomness in the form of entropy, which indicates the randomness of the actions taken. In the continuous case, a practical approach is to represent the action as a parameter of Gaussian distribution. The entropy of this distribution represents the amount of randomness in action, which also indicates how much the policy can explore new states.
However, in Gaussian distribution, the samples are concentrated toward the mean; thus, it can limit the exploration in some RL environments. This paper presents an alternative to this Gaussian distribution and proposes a new distribution that accounts for higher entropy than the standard Gaussian. The goal is to keep the entropy higher or at some levels throughout the agent training. In particular, we propose to combine Gaussian Distribution and Uniform distribution. At each timestep of the algorithm training, we perturb the mean µ of Gaussian N (µ, σ) by adding a random number drawn from the Uniform U(−α, α) and get the perturbed mean µ′. Then the sample for action is taken from the new distributionN (µ′, σ). In this setup, the Gaussian parameters still depend on the state, similar to the standard setup. However, the Uniform distribution does not depend on the states. Thus as training progresses, the standard Gaussian distribution can be less entropic as the policy optimizes to be more confident in particular states. However, the state-independent perturbation using the Uniform distribution still enables the policy to maintain stochasticity.
Figure 1 shows the diagram of the resulting Gaussian distributions. This diagram is generated by first sampling data from a Gaussian distribution with mean µ = 0.0 and standard deviation σ = 1.0 and then corresponding perturbed mean µ′ = µ + z and standard deviation σ = 1.0, where z ∼ U(−3.0, 3.0). We see that the values are less centered around the mean for our Perturb Gaussian than the standard. Therefore, samples from our proposed distribution should have more entropy than the standard Gaussian.
Algorithms 1 shows the details procedure of our RPO method. The agent first collects experience trajectory data using the current policy and stores it in a buffer D (lines 2 to 10). Next, the agent uses the standard Gaussian to sample the action in these interaction steps (line 6). Finally, these experience data are used to update the policy, optimizing for policy parameter θ. In this case, given a state st, the policy network outputs µ and σ for each action. Then the µ is perturbed by adding a value z, which samples from a Uniform distribution U(−α, α) (lines 13 and 14 in blue). After that, the probability is computed from the new distributionN (µ′, σ), which is eventually used to calculate action log probabilities that are used for computing loss Lπ . In the case of the α hyperparameter, we report results using α = 0.5 unless otherwise specified.
Algorithm 1 Robust Policy Optimization (RPO) 1: Initialize parameter vectors θ for policy network. 2: for each iteration do 3: D ← {} 4: for each environment step do 5: µ, σ ← πθ(.|st) 6: at ∼ N (µ, σ) 7: st+1 ∼ P (st+1|st, at) 8: rt ∼ R(st, at) 9: D ← D ∪ {(st, at, rt, st+1)} 10: end for 11: for each observation st in D do 12: µ, σ ← πθ(.|st) 13: z ∼ U(−α, α) 14: µ′ ← µ+ z 15: prob← N (µ′, σ) 16: logp← prob(at) 17: Compute RL loss Lπ using logp, at, and value function. 18: end for 19: end for
4 EXPERIMENTS
4.1 SETUP
Environments We conducted experiments on continuous control task from four reinforcement learning benchmarks: DeepMind Control (Tunyasuvunakool et al., 2020), OpenAI Gym (Brockman et al., 2016), PyBullet (Coumans & Bai, 2016–2021), and Nvidia IsaacGym (Makoviychuk et al., 2021). These benchmarks contain diverse environments with many different tasks, from low to high-dimensional environments (observations and actions space). Thus our evaluation contains a diverse set of tasks with various difficulties. The IsaacGym contains environments that run in GPU, thus enabling fast simulation, which eventually helps collect a large amount of simulation experience quickly, and faster RL training with GPU enables deep reinforcement learning models.
Baselines We compare our method RPO with the PPO (Schulman et al., 2017) algorithm. Here our method RPO uses the perturbed Gaussian distribution to represent the action output from the policy network, as described in Section 3. In contrast, the PPO uses standard Gaussian distribution to represent its action output. Further, we observe that the data augmentation method can help increase the policy’s entropy by often randomly perturbing observations. This process might improve the performance where higher entropy is preferred. Thus, we compare our method with two data augmentation-based methods: RAD (Laskin et al., 2020), and DRAC (Raileanu et al., 2020). Here, The pure data augmentation baseline RAD uses data processing before passing it to the agent, and the DRAC uses data augmentation to regularize the value and policy network. Both of these data augmentation methods use PPO as their base RL policy. Another common approach to increase entropy is to use the Entropy Regularization in the RL objective. A coefficient determines how much weight the policy would give to the entropy. We observe that various weighting might result in different levels of entropy increment. We use the entropy coefficient 0.0, 0.01, 0.05, 0.5, 1.0, and 10.0 and compare their performance in entropy and, in return, with our algorithm’s RPO. Note that our method does not introduce any additional hyper-parameters and does not use the entropy coefficient hyper-parameters. The implementation details of our algorithm and baselines are in the Appendix. To account for the stochasticity in the environment and policy, we run each experiment several times and report the mean and standard deviation. Unless otherwise specified, we run each experiment with 10 random seeds.
4.2 RESULTS
We compare the return performance of our method with baselines and show their entropy to contrast the results with the entropy of the policy.
Comparison with PPO This evaluation is the direct form of comparison where no method uses any aid (such as data augmentation or entropy regularization) in the entropy value. Figure 2 shows results comparison on DeepMind Control Environments. Results on more environments are in Appendix Figure 11. The entropy comparison is given in the Appendix Figure 12. In most scenarios, our method RPO shows consistent performance improvement in these environments compared to the PPO. In some environments, such as humanoid stand, humanoid run, and hopper hop, the PPO agent fails to learn any useful behavior and thus results in low episodic return. In contrast, our method RPO shows better performance and achieves a much higher episodic return. The RPO also shows a better mean return than the PPO in other environments. In many settings, such as in quadruped (walk, run, and escape), walker (stand, walk, run), fish swim, acrobot swingup, the PPO agents stop improving the performance after around 2M timestep. In contrast, our agent RPO shows consistent improvement over time of the training. This performance gain might be due to the proper management of policy entropy, as in our setup, the agent is encouraged to keep exploring as the training progress. On the other hand, the PPO agent might settle in a sub-optimal performance as the policy entropy, in this case, decreases as the agent trains for more timesteps. These results show the effectiveness of our method in diverse control tasks with varying complexity.
Results of OpenAI Gym environments: Pendulum and BipedalWalker are in Figure 3. Overall, our method, RPO, performs better compared to the PPO. In Pendulum environments, the PPO agent fails to learn any useful behavior in this setup. In contrast, RPO consistently learns with the increase in timestep and eventually learns the task. We see the policy entropy of RPO increases initially and eventually remains at a certain threshold, which might help the policy to stay exploratory and collect more data. In contrast, the PPO policy entropy decreases over time, and thus eventually, the performance remains the same, and the policy stops learning. This scenario might contribute to the bad performance of the PPO.
In the BipedalWalker environment, we see that both PPO and RPO learn up to a certain reward quickly. However, as we keep training both policies, we observe that after a certain period, the PPO’s performance drops and even starts to become worse. In contrast, the RPO stays robust as we train for more and eventually keep improving the performance. These results show the robustness of our method when ample train time is available. The entropy plot shows a similar pattern, as PPO decreases entropy over time, and RPO keeps the entropy at a certain threshold.
Figure 4 shows results comparison on PyBullet environments: Ant, and Minituar. We observe that our method RPO performs better than the PPO in the Ant environment. In the Minitaur environment, PPO quickly (at around 2M) learns up to a certain reward and remains on the same performance as time progresses. In contrast, RPO starts from a lower performance, eventually surpassing the PPO’s performance as time progresses. These results show the robustness of consistently improving the
policy of the RPO method. The entropy pattern remains the same in both cases; PPO reduces entropy while RPO keeps the entropy at a certain threshold which it learns automatically in an environment.
Figure 5 shows results comparison on IsaacGym environments: Cartpole, and BallBalance. In this setup, we run the simulation up to 100M timesteps which take around 30 minutes for each run in each environment in a Quadro RTX 4000 GPU. We see that for Cartpole, both PPO and RPO learn the reward quickly, around 450. However, as we kept training for a long, the performance of PPO started to degrade over time, and the policy entropy kept decreasing. On the other hand, our RPO agent keeps improving the performance; notably, the performance never degrades over time. The policy entropy shows that the entropy remains at a threshold. This exploratory nature of RPO’s policy might help keep learning and get better rewards. These results show the robustness of our method RPO over PPO even when an abundance of simulation is available. Interestingly, more simulation data might not always be good for RL agents. In our setup, the PPO even suffers from further training in the Cartpole environment. In the BallBalance environment (results are averaged over 3 random seed runs), our method RPO achieves a slight performance improvement over PPO. Overall, our method RPO performs better than the PPO in the two IsaacGym environments.
Comparison with Data Augmentation. Results on all DeepMind Control environments are shown in Table 1. Our method achieves a better mean episodic return in most environments than PPO and the data augmentation baselines RAD and DRAC.
The return and policy entropy curves are shown in Appendix Figure 10. We observe that data augmentation slightly improves the base PPO algorithm, and the policy entropy is higher than that of the base PPO. However, our method RPO still maintains a better mean return than all the baselines. The entropy of our method increases at the initial timesteps of training but eventually stabilizes at a particular value. The data augmentation methods, especially RAD, show an increase in entropy throughout the training process; however, this increase does not translate into return performance. Moreover, data augmentation assumes some prior knowledge about the environment observations, and selecting a suitable data augmentation method requires domain knowledge. Improper handling of the data augmentation may result in worse performance.
In addition, the augmentation methods add processing time, which can contribute to longer training times. In contrast, our method RPO does not require such domain knowledge and is thus readily applicable to any RL environment. Moreover, in return performance, our method outperforms the base PPO and these data augmentation methods.
Comparison with Entropy Regularization
Due to the variety in the environments, the entropy requirement might differ. Thus, an improper setting of the entropy coefficient might result in bad training performance. In the tested environments, we observe that the performance sometimes improves with a properly tuned coefficient value but often worsens (Figure 6). However, our method RPO consistently performs better than PPO and the different entropy coefficient baselines. Importantly, our method does not use these coefficient hyperparameters and still consistently performs better in various environments, which shows our method's robustness to environment variability.
We observe that in some environments, the coefficient 0.01 improves the performance of standard PPO (coefficient 0.0) while increasing the entropy. However, increasing the coefficient to 0.05 and above results in an unbounded entropy increase. Thus, the performance worsens in most scenarios, where the agent fails to achieve a reasonable return. Results with all the coefficient variants are in Appendix Figure 9. Overall, our method RPO achieves better performance in the evaluated environments in our setup. Moreover, our method does not use the entropy coefficient hyperparameter and controls the entropy level automatically in each environment.
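For reference, the entropy-regularization baseline adds a weighted entropy bonus to the clipped PPO objective; the coefficient swept in this comparison corresponds to ent_coef below. This is a minimal sketch with illustrative names (ratio is the probability ratio between the current and old policies, dist the action distribution), not the exact baseline code.

```python
import torch

def ppo_policy_loss_with_entropy(ratio, advantages, dist, clip_eps=0.2, ent_coef=0.01):
    # Clipped PPO surrogate objective (to be minimized).
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()
    # Entropy regularization: subtracting the bonus rewards more stochastic policies.
    entropy_bonus = dist.entropy().mean()
    return policy_loss - ent_coef * entropy_bonus
```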
Ablation Study We conducted experiments on the range of the α value in the Uniform distribution. Figure 7 shows the return and policy entropy comparison. We observe that the value of α affects the policy entropy and, thus, the return performance. A smaller value of α (e.g., 0.001) behaves similarly to PPO, where policy entropy decreases over time, thus hampering performance. Higher values of α, such as 1000.0, make the policy nearly random, as the uniform distribution dominates the Gaussian distribution (as in Algorithm 1). This scenario keeps the entropy at a roughly constant level, but the performance is again hampered. Overall, a value between 0.1 and 3 often results in better performance. Due to this overall performance advantage, we report results with α = 0.5 for all 18 environments in this paper. Learning curves for more environments are in Appendix Figure 13.
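The role of α can be seen directly in how the action distribution is formed. The sketch below builds the mean-perturbed Gaussian that RPO uses when computing action log-probabilities (as in Algorithm 1) and shows that the spread of sampled actions grows with α; it is a minimal illustration, not the full training code.

```python
import torch

def rpo_action_dist(mu, sigma, alpha=0.5):
    # Perturb the Gaussian mean with uniform noise z ~ U(-alpha, alpha).
    z = torch.empty_like(mu).uniform_(-alpha, alpha)
    return torch.distributions.Normal(mu + z, sigma)

mu, sigma = torch.zeros(10000), torch.ones(10000)
for alpha in (0.001, 0.5, 3.0):
    actions = rpo_action_dist(mu, sigma, alpha).sample()
    # The standard deviation of sampled actions increases with alpha.
    print(f"alpha={alpha}: action std = {actions.std().item():.2f}")
```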
5 RELATED WORK
Since the inception of the practical policy optimization method PPO (and its predecessors TRPO and Natural Policy Gradient), several studies have investigated its different algorithmic components (Engstrom et al., 2019; Andrychowicz et al., 2020; Ahmed et al., 2019; Ilyas et al., 2019). Entropy regularization enjoys faster convergence and improves policy optimization (Williams & Peng, 1991; Mei et al., 2020; Mnih et al., 2016; Ahmed et al., 2019). However, some empirical studies report difficulty in finding a proper entropy setup and observe no performance gain in many environments (Andrychowicz et al., 2020). A Gaussian distribution is commonly used to represent continuous actions (Schulman et al., 2015; 2017), and many follow-up implementations also use this Gaussian distribution as the default setup (Huang et al., 2022; 2021). In contrast, in this paper, we propose a perturbed Gaussian distribution to represent the continuous action in order to maintain policy entropy. Furthermore, we show the empirical advantage of our method over the standard Gaussian-based method in several benchmark environments. Due to space limitations, we moved some related work discussion to the Appendix.
6 CONCLUSION
This paper investigates the effect of entropy in policy optimization-based reinforcement learning. We design a method that, when applied to a policy network, can keep the entropy of the policy at a certain threshold throughout training. Our approach represents the policy with a Gaussian distribution whose mean is perturbed by a random number sampled from a uniform distribution. We observe that keeping a certain entropy threshold throughout training using our method can lead to better policy learning. Our proposed algorithm, Robust Policy Optimization (RPO), performs better than standard PPO on 18 continuous control tasks from four RL benchmarks: DeepMind Control, OpenAI Gym, PyBullet, and IsaacGym. Further, in many settings, our method performs better than other approaches that encourage higher entropy, such as data augmentation and entropy regularization. Overall, we show our method's effectiveness and robustness in settings where standard PPO or other baseline methods fail to maintain performance or even worsen over time.
A APPENDIX
A.1 ADDITIONAL RELATED WORK
Another approach that adds entropy to the policy is data augmentation. We observe that the policy entropy often increases when the data is augmented, often through some form of random process. This increase in entropy can eventually lead to better exploration in some environments. However, the type of data augmentation might be environment-specific and thus requires domain knowledge about which kind of data augmentation can be applied. In contrast to these methods, our method maintains an entropy threshold during the entire policy learning. We evaluated our algorithm against two data augmentation-based methods, RAD (Laskin et al., 2020) and DRAC (Raileanu et al., 2020). Finally, we evaluate an effective data augmentation approach, random amplitude scaling, which has been found to work well in vector-based state setups.
Many other forms of improvement have been investigated in policy optimization, resulting in better empirical success, such as Generalized Advantage Estimation (Schulman et al., 2016), normalization of advantages (Andrychowicz et al., 2020), and clipped policy and value objectives (Schulman et al., 2017; Engstrom et al., 2019; Andrychowicz et al., 2020). These improvements eventually led to overall performance gains and are now used in many standard implementations (Huang et al., 2022; 2021). In our implementation, we leverage these implementation tricks when possible; thus, our method works on top of these essential improvements. Furthermore, our method is complementary to these approaches and can be used alongside them. For a fair comparison, our implementations of the baselines also use these implementation tricks.
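As a reference for two of these components, the sketch below computes generalized advantage estimation for a single rollout and applies advantage normalization; the variable names and per-rollout (rather than per-minibatch) normalization are illustrative simplifications, not the exact implementation used.

```python
import numpy as np

def gae_advantages(rewards, values, dones, last_value, gamma=0.99, lam=0.95):
    # Generalized Advantage Estimation over one rollout of length T.
    T = len(rewards)
    advantages = np.zeros(T, dtype=np.float64)
    gae = 0.0
    for t in reversed(range(T)):
        next_value = last_value if t == T - 1 else values[t + 1]
        next_nonterminal = 1.0 - dones[t]
        delta = rewards[t] + gamma * next_value * next_nonterminal - values[t]
        gae = delta + gamma * lam * next_nonterminal * gae
        advantages[t] = gae
    returns = advantages + np.asarray(values)
    # Advantage normalization, commonly applied before the policy update.
    norm_adv = (advantages - advantages.mean()) / (advantages.std() + 1e-8)
    return norm_adv, returns
```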
A.2 EXPERIMENTS
Implementation Details: Our algorithm and the baselines are based on the PPO (Schulman et al., 2017) implementation available in (Huang et al., 2022; 2021). This implementation incorporates many important advancements from the recent policy gradient literature (e.g., orthogonal initialization, GAE, entropy regularization); we refer the reader to (Huang et al., 2022) for further references. The pure data augmentation baseline RAD (Laskin et al., 2020) applies data processing before passing observations to the agent, and DRAC (Raileanu et al., 2020) uses data augmentation to regularize the losses of the value and policy networks. We experimented with vector-based states and used the random amplitude scaling proposed in RAD (Laskin et al., 2020) as the data augmentation method for RAD and DRAC. In random amplitude scaling, the state values are multiplied by random values drawn uniformly from a range [α, β]. We used the range α = 0.6 to β = 1.2 for all the experiments, which is suggested in (Laskin et al., 2020) and performed better. Moreover, both RAD and DRAC use PPO as their base algorithm, whereas our RPO method does not use any form of data augmentation.
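A minimal sketch of random amplitude scaling for vector-based states is given below, under the assumption that an independent factor is drawn per state dimension; the exact broadcasting of the factor is an implementation detail of RAD and may differ.

```python
import numpy as np

def random_amplitude_scaling(state, low=0.6, high=1.2, rng=None):
    # Scale each state dimension by a random factor drawn uniformly from [low, high].
    rng = np.random.default_rng() if rng is None else rng
    state = np.asarray(state, dtype=np.float64)
    return state * rng.uniform(low, high, size=state.shape)
```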
We use the hyperparameters reported in the PPO implementation for continuous action spaces (Huang et al., 2022; 2021), which incorporate best practices for continuous control tasks. To mitigate the effect of hyperparameter choices, we keep them the same for all environments, and we keep the same hyperparameters for all agents for a fair comparison. The common hyperparameters can be found in Table 2.
Entropy Regularization The results comparing RPO with the entropy coefficient baselines are in Figure 8.
Data augmentation return curves and entropy: Figure 10 shows the data augmentation return curves and the entropy comparison with RPO.
Results on DeepMind Control are in Figure 11.
Entropy plots for DeepMind Control are in Figure 12.
Ablation Study Ablations on the α values of RPO are shown in Figure 13. | 1. What is the focus and contribution of the paper on robust policy optimization?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity and ease of implementation?
3. What are the weaknesses of the paper, especially regarding its lack of comparison with other exploration methods and theoretical justification?
4. Do you have any concerns about the choice of distribution and its impact on the PPO method's constraint function?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose a Robust Policy Optimization (RPO) method, a simple extension of the popular Proximal Policy Optimization (PPO) method, which adds a random number sampled from a uniform distribution to the mean of the Gaussian action distribution. It is argued that the standard normal distribution used to parameterize continuous actions is not suitable if more exploration is expected from the agent. The authors modify the Gaussian to have more support than the standard Gaussian random variable.
Empirical results on std. RL benchmarks demonstrate that the proposed method has higher exploration throughout the training.
Strengths And Weaknesses
Some strengths of the paper:
The idea is easy to understand and extremely easy to implement. Basically, just adding a random number drawn from a uniform distribution would work.
The results show that the agent trained with the proposed method explores more than the baseline PPO method.
Some weaknesses of the paper:
The idea of exploration-exploitation is very old in the RL/bandit community.
The authors didn't compare against standard exploration methods from the literature.
There is no theoretical justification for choosing this particular distribution. How do other distributions with wider support work in place of the Gaussian?
The effect of changing the distribution on the constraint of the PPO method is also not discussed. How does the constraint function behave when the policy parameterization is changed?
Algorithm 1 is not written very well. What are "logp" and "prob"? Please make it clearer.
Clarity, Quality, Novelty And Reproducibility
The paper is not clear at points (for ex: Algo. 1)
The novelty is limited in the paper.
The method seems easy to reproduce. |
1. What is the focus of the paper regarding reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its difference from existing methods?
3. Do you have any concerns or questions about the experimental evaluation, including the choice of comparison methods and the absence of certain relevant comparisons?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper argues for stochastic policies that maintain high entropy for high RL performance. In particular, the authors propose to randomly perturb the mean of the output Gaussian distribution to produce more diverse actions. In the experimental evaluation, this perturbation scheme is compared to PPO (with and without entropy regularization) and data augmentation techniques including RAD and DRAC.
Strengths And Weaknesses
The paper is overall written well. However, it’s not clear to me how adding action noise is different from existing ways of incorporating entropy regularization. For example, this is equivalent to enforcing that each component of \sigma is at least some threshold. This clipping already exists in many PPO implementations I believe.
I also have some questions and concerns in regards to the experimental evaluation:
Figure 2 shows that PPO fails to learn in these environments, but this is probably due to the entropy regularization being removed. A fairer comparison would be to PPO with the standard entropy regularization.
The comparison to the data augmentation methods is a bit confusing to me since these are methods that perform image augmentation for image-based RL tasks. Are the experimental tasks being solved from images?
There seems to be more useful comparisons to include, such as Soft Actor-Critic, which optimizes the maximum-entropy RL objective and automatically tunes the entropy coefficient, and exploration strategies for RL, such as Random Network Distillation.
What are the action spaces for each environment? It is my understanding that each action component is between -1 and 1 for the DM Control Suite tasks. Wouldn’t alpha values larger than or equal to 1 essentially make the actions uniform-random?
Clarity, Quality, Novelty And Reproducibility
Clarity/Quality/Novelty: See questions under “Strengths and Weaknesses.”
Reproducibility: The paper includes pseudo-code and hyperparameter values used to implement their algorithm. I believe a reader could re-implement the proposed modification from these details. |
ICLR | Title
Defending Against Physically Realizable Attacks on Image Classification
Abstract
We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit limited effectiveness against three of the highest profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks. 1
1 INTRODUCTION
State-of-the-art effectiveness of deep neural networks has made them the technique of choice in a variety of fields, including computer vision (He et al., 2016), natural language processing (Sutskever et al., 2014), and speech recognition (Hinton et al., 2012). However, there have been a myriad of demonstrations showing that deep neural networks can be easily fooled by carefully perturbing pixels in an image through what have become known as adversarial example attacks (Szegedy et al., 2014; Goodfellow et al., 2015; Carlini & Wagner, 2017b; Vorobeychik & Kantarcioglu, 2018). In response, a large literature has emerged on defending deep neural networks against adversarial examples, typically either by proposing techniques for learning more robust neural network models (Wong & Kolter, 2018; Wong et al., 2018; Raghunathan et al., 2018b; Cohen et al., 2019; Madry et al., 2018), or by detecting adversarial inputs (Metzen et al., 2017; Xu et al., 2018).
Particularly concerning, however, have been a number of demonstrations that implement adversarial perturbations directly in physical objects that are subsequently captured by a camera, and then fed through the deep neural network classifier (Boloor et al., 2019; Eykholt et al., 2018; Athalye et al., 2018b; Brown et al., 2018; Bhattad et al., 2020). Among the most significant of such physical attacks on deep neural networks are three that we specifically consider here: 1) the attack which fools face recognition by using adversarially designed eyeglass frames (Sharif et al., 2016), 2) the attack which fools stop sign classification by adding adversarially crafted stickers (Eykholt et al., 2018), and 3) the universal adversarial patch attack, which causes targeted misclassification of any object with the adversarially designed sticker (patch) (Brown et al., 2018). Oddly, while considerable attention has been devoted to defending against adversarial perturbation attacks in the digital space, there are no effective methods specifically to defend against such physical attacks.
Our first contribution is an empirical evaluation of the effectiveness of conventional approaches to robust ML against two physically realizable attacks: the eyeglass frame attack on face recognition (Sharif et al., 2016) and the sticker attack on stop signs (Eykholt et al., 2018). Specifically, we study the performance on adversarial training and randomized smoothing against these attacks, and show that both have limited effectiveness in this context (quite ineffective in some settings, and somewhat more effective, but still not highly robust, in others), despite showing moderate effectiveness against l∞ and l2 attacks, respectively.
1The code can be found at https://github.com/tongwu2020/phattacks
Our second contribution is a novel abstract attack model which more directly captures the nature of common physically realizable attacks than the conventional lp-based models. Specifically, we consider a simple class of rectangular occlusion attacks in which the attacker places a rectangular sticker onto an image, with both the location and the content of the sticker adversarially chosen. We develop several algorithms for computing such adversarial occlusions, and use adversarial training to obtain neural network models that are robust to these. We then experimentally demonstrate that our proposed approach is significantly more robust against physical attacks on deep neural networks than adversarial training and randomized smoothing methods that leverage lp-based attack models.
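To make the abstract attack model concrete, the following sketch applies a rectangular occlusion to an image tensor; in the actual attack, both the rectangle's location and its contents are chosen adversarially (e.g., by searching over locations and optimizing the sticker pixels with gradient ascent). The function is an illustrative sketch, not the paper's implementation.

```python
import torch

def apply_rectangular_occlusion(image, top, left, height, width, sticker):
    # image: (C, H, W) tensor; sticker: (C, height, width) tensor of adversarial pixel values.
    occluded = image.clone()
    occluded[:, top:top + height, left:left + width] = sticker
    return occluded
```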
Related Work While many approaches for defending deep learning in vision applications have been proposed, robust learning methods have been particularly promising, since alternatives are often defeated soon after being proposed (Madry et al., 2018; Raghunathan et al., 2018a; Wong & Kolter, 2018; Vorobeychik & Kantarcioglu, 2018). The standard solution approach for this problem is an adaptation of Stochastic Gradient Descent (SGD) where gradients are either with respect to the loss at the optimal adversarial perturbation for each training input i (or an approximation thereof, such as using heuristic local search (Goodfellow et al., 2015; Madry et al., 2018) or a convex over-approximation (Raghunathan et al., 2018b; Wang et al., 2018)), or with respect to the dual of the convex relaxation of the attacker maximization problem (Raghunathan et al., 2018a; Wong & Kolter, 2018; Wong et al., 2018). Despite these advances, adversarial training a la Madry et al. (2018) remains the most practically effective method for hardening neural networks against adversarial examples with l∞-norm perturbation constraints. Recently, randomized smoothing emerged as another class of techniques for obtaining robustness (Lecuyer et al., 2019; Cohen et al., 2019), with the strongest results in the context of l2-norm attacks. In addition to training neural networks that are robust by construction, a number of methods study the problem of detecting adversarial examples (Metzen et al., 2017; Xu et al., 2018), with mixed results (Carlini & Wagner, 2017a). Of particular interest is recent work on detecting physical adversarial examples (Chou et al., 2018). However, detection is inherently weaker than robustness, which is our goal, as even perfect detection does not resolve the question of how to make decisions on adversarial examples. Finally, our work is in the spirit of other recent efforts that characterize robustness of neural networks to physically realistic perturbations, such as translations, rotations, blurring, and contrast (Engstrom et al., 2019; Hendrycks & Dietterich, 2019).
2 BACKGROUND
2.1 ADVERSARIAL EXAMPLES IN THE DIGITAL AND PHYSICAL WORLD
Adversarial examples involve modifications of input images that are either invisible to humans, or unsuspicious, and that cause systematic misclassification by state-of-the-art neural networks (Szegedy et al., 2014; Goodfellow et al., 2015; Vorobeychik & Kantarcioglu, 2018). Commonly, approaches for generating adversarial examples aim to solve an optimization problem of the following form:
argmax_δ L(f(x + δ; θ), y)    s.t.    ‖δ‖_p ≤ ε,    (1)
where x is the original input image, δ is the adversarial perturbation, L(·) is the adversary’s utility function (for example, the adversary may wish to maximize the cross-entropy loss), and ‖ · ‖p is some lp norm. While a host of such digital attacks have been proposed, two have come to be viewed as state of the art: the attack developed by Carlini & Wagner (2017b), and the projected gradient descent attack (PGD) by Madry et al. (2018).
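As a concrete reference for the PGD attack discussed here, the sketch below performs iterative l∞-bounded gradient ascent on the classification loss; the radius, step size, and iteration count are illustrative defaults rather than values used in the paper.

```python
import torch

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=10):
    # Start from a random point inside the l_inf ball of radius eps.
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()             # gradient ascent step on the loss
            delta.clamp_(-eps, eps)                        # project back onto the l_inf ball
            delta.copy_(torch.clamp(x + delta, 0, 1) - x)  # keep pixel values in [0, 1]
        delta.grad.zero_()
    return (x + delta).detach()
```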
While most of the work to date has been on attacks which modify the digital image directly, we focus on a class of physical attacks which entail modifying the actual object being photographed in order to fool the neural network that subsequently takes its digital representation as input. The attacks we will focus on will have three characteristics:
1. The attack can be implemented in the physical space (e.g., modifying the stop sign); 2. the attack has low suspiciousness; this is operationalized by modifying only a small part of
the object, with the modification similar to common “noise” that obtains in the real world; for example, stickers on a stop sign would appear to most people as vandalism, but covering the stop sign with a printed poster would look highly suspicious; and
3. the attack causes misclassification by a state-of-the-art deep neural network.
Since our ultimate purpose is defense, we will not concern ourselves with the issue of actually implementing the physical attacks. Instead, we will consider the digital representation of these attacks, ignoring other important issues, such as robustness to many viewpoints and printability. For example, in the case where the attack involves posting stickers on a stop sign, we will only be concerned with simulating such stickers on digital images of stop signs. For this reason, we refer to such attacks as physically realizable attacks, to allude to the fact that it is possible to realize them in practice. It is evident that physically realizable attacks represent a somewhat stronger adversarial model than their actual implementation in the physical space. Henceforth, for simplicity, we will use the terms physical attacks and physically realizable attacks interchangeably.
We consider three physically realizable attacks. The first is the attack on face recognition by Sharif et al. (2016), in which the attacker adds adversarial noise inside printed eyeglass frames that can subsequently be put on to fool the deep neural network (Figure 1a). The second attack posts adversarially crafted stickers on a stop sign to cause it to be misclassified as another road sign, such as the speed limit sign (Figure 1b) (Eykholt et al., 2018). The third, adversarial patch, attack designs a patch (a sticker) with adversarial noise that can be placed onto an arbitrary object, causing that object to be misclassified by a deep neural network (Brown et al., 2018).
2.2 ADVERSARIALLY ROBUST DEEP LEARNING
While numerous approaches have been proposed for making deep learning robust, many are heuristic and have soon after been defeated by more sophisticated attacks (Carlini & Wagner, 2017b; He et al., 2017; Carlini & Wagner, 2017a; Athalye et al., 2018a). Consequently, we focus on principled approaches for defense that have not been broken. These fall broadly into two categories: robust learning and randomized smoothing. We focus on a state-of-the-art representative from each class.
Robust Learning The goal of robust learning is to minimize a robust loss, defined as follows:
θ* = argmin_θ  E_{(x,y)∼D} [ max_{‖δ‖_p ≤ ε} L(f(x + δ; θ), y) ],     (2)
where D denotes the training data set. In itself this is a highly intractable problem. Several techniques have been developed to obtain approximate solutions. Among the most effective in practice is the adversarial training approach by Madry et al. (2018), who use the PGD attack as an approximation to the inner optimization problem, and then take gradient descent steps with respect to the associated adversarial inputs. In addition, we consider a modified version of this approach termed curriculum adversarial training (Cai et al., 2018). Our implementation of this approach proceeds as follows: first, apply adversarial training for a small ε, then increase ε and repeat adversarial training, and so on, increasing ε until we reach the desired level of adversarial noise we wish to be robust to.
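As a rough illustration of these two training schemes (not the exact training code used in our experiments), the loop below performs adversarial training at a fixed ε and, when given an increasing ε schedule, reduces to the curriculum variant; it reuses the pgd_linf helper sketched in Section 2.1, and batching, logging, and learning-rate schedules are elided.

```python
def curriculum_adversarial_training(model, loader, opt, loss_fn, eps_schedule=(4.0, 8.0), epochs=30):
    """Adversarial training sketch: at each eps in the schedule, train on PGD-perturbed inputs."""
    for eps in eps_schedule:            # a single-element schedule gives plain adversarial training
        for _ in range(epochs):
            for x, y in loader:
                x_adv = pgd_linf(model, x, y, loss_fn, eps=eps, alpha=eps / 4, iters=7)
                opt.zero_grad()
                loss_fn(model(x_adv), y).backward()   # gradient step on the adversarial inputs
                opt.step()
    return model
```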
Randomized Smoothing The second class of techniques we consider works by adding noise to inputs at both training and prediction time. The key idea is to construct a smoothed classifier g(·) from a base classifier f(·) by perturbing the input x with isotropic Gaussian noise with standard deviation σ. The prediction is then made by choosing a class with the highest probability measure with respect to the induced distribution of f(·) decisions:
g(x) = argmax_c  P( f(x + δ) = c ),   δ ∼ N(0, σ²I).     (3)
To achieve provably robust classification in this manner one typically trains the classifier f(·) by adding Gaussian noise to inputs at training time (Lecuyer et al., 2019; Cohen et al., 2019).
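The following is a small sketch of the Monte Carlo prediction rule in Eq. (3); the certification procedure of Cohen et al. (2019) additionally uses statistical tests and an abstain option, which are omitted here, and the sketch assumes a single image x with a batch dimension of one.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=1000, num_classes=10):
    """Estimate g(x) by majority vote over Gaussian-perturbed copies of x."""
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)    # isotropic Gaussian noise N(0, sigma^2 I)
            counts[model(noisy).argmax(dim=1)] += 1
    return counts.argmax().item()                      # class with the highest empirical probability
```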
3 ROBUSTNESS OF CONVENTIONAL ROBUST ML METHODS AGAINST PHYSICAL ATTACKS
Most of the approaches for endowing deep learning with adversarial robustness focus on adversarial models in which the attacker introduces lp-bounded adversarial perturbations over the entire input. Earlier we described two representative approaches in this vein: adversarial training, commonly focused on robustness against l∞ attacks, and randomized smoothing, which is most effective against l2 attacks (although certification bounds can be extended to other lp norms as well). We call these methods conventional robust ML.
In this section, we ask the following question:
Are conventional robust ML methods robust against physically realizable attacks?
This is similar to the question asked in the context of malware classifier evasion by Tong et al. (2019), who found that lp-based robust ML methods can indeed be successful in achieving robustness against realizable evasion attacks. Ours is the first investigation of this issue in computer vision applications and for deep neural networks, where attacks involve adversarial masking of objects.2
We study this issue experimentally by considering two state-of-the-art approaches for robust ML: adversarial training a-la-Madry et al. (2018), along with its curriculum learning variation (Cai et al., 2018), and randomized smoothing, using the implementation by Cohen et al. (2019). These approaches are applied to defend against two physically realizable attacks described in Section 2.1: an attack on face recognition which adds adversarial eyeglass frames to faces (Sharif et al., 2016), and an attack on stop sign classification which adds adversarial stickers to a stop sign to cause misclassification (Eykholt et al., 2018).
We consider several variations of adversarial training, as a function of the l∞ bound, ε, imposed on the adversary. Just as Madry et al. (2018), adversarial instances in adversarial training were generated using PGD. We consider attacks with ε ∈ {4, 8} (adversarial training failed to make progress when we used ε = 16). For curriculum adversarial training, we first performed adversarial training with ε = 4, then doubled ε to 8 and repeated adversarial training with the model robust to ε = 4, then doubled ε again, and so on. In the end, we learned models for ε ∈ {4, 8, 16, 32}. For all versions of adversarial training, we consider 7 and 50 iterations of the PGD attack. We used the learning rate of ε/4 for the former and 1 for the latter. In all cases, pixels are in the 0–255 range and retraining was performed for 30 epochs using the ADAM optimizer.
For randomized smoothing, we consider noise levels σ ∈ {0.25, 0.5, 1} as in Cohen et al. (2019), and take 1,000 Monte Carlo samples at test time.
3.1 ADVERSARIAL EYEGLASSES IN FACE RECOGNITION
We applied white-box dodging (untargeted) attacks on the face recognition systems (FRS) from Sharif et al. (2016). We used both the VGGFace data and transferred VGGFace CNN model for the face recognition task, subselecting 10 individuals, with 300-500 face images for each. Further details about the dataset, CNN architecture, and training procedure are in Appendix A. For the attack, we used identical frames as in Sharif et al. (2016) occupying 6.5% of the pixels. Just as Sharif et al. (2016), we compute attacks (that is, adversarial perturbations inside the eyeglass frame area) by using the learning rate 20 as well as momentum value 0.4, and vary the number of attack iterations between 0 (no attack) and 300.
Figure 2 presents the results of classifiers obtained from adversarial training (left) as well as curriculum adversarial training (middle), in terms of accuracy (after the attack) as a function of the number of iterations of the Sharif et al. (2016) eyeglass frame attack. First, it is clear that none of the variations of adversarial training are particularly effective once the number of physical attack iterations is
2Several related efforts study robustness of deep neural networks to other variants of physically realistic perturbations (Engstrom et al., 2019; Hendrycks & Dietterich, 2019).
above 20. The best performance in terms of adversarial robustness is achieved by adversarial training with ε = 8, for approaches using either 7 or 50 PGD iterations (the difference between these appears negligible). However, non-adversarial accuracy for these models is below 70%, a ∼20% drop in accuracy compared to the original model. Moreover, adversarial accuracy is under 40% for sufficiently strong physical attacks. Curriculum adversarial training generally achieves significantly higher non-adversarial accuracy, but is far less robust, even when trained with PGD attacks that use ε = 32.
Figure 2 (right) shows the performance of randomized smoothing when faced with the eyeglass frames attack. It is readily apparent that randomized smoothing is ineffective at deflecting this physical attack: even as we vary the amount of noise we add, accuracy after attacks is below 20% even for relatively weak attacks, and often drops to nearly 0 for sufficiently strong attacks.
3.2 ADVERSARIAL STICKERS ON STOP SIGNS
Following Eykholt et al. (2018), we use the LISA traffic sign dataset for our experiments, take 40 stop signs from this dataset as our test data, and perform untargeted attacks (this is in contrast to the original work, which is focused on targeted attacks). For the detailed description of the data and the CNN used for traffic sign prediction, see Appendix A. We apply the same settings as in the original attacks and use the ADAM optimizer with the same parameters. Since we observed few differences in performance between running PGD for 7 vs. 50 iterations, adversarial training methods in this section all use 7 iterations of PGD.
Again, we begin by considering adversarial training (Figure 3, left and middle). In this case, both the original and curriculum versions of adversarial training with PGD are ineffective when ε = 32 (error rates on clean data are above 90%); these are consequently omitted from the plots. Curriculum adversarial training with ε = 16 has the best performance on adversarial data, and works well on clean data. Surprisingly, most variants of adversarial training perform at best marginally better than the original model against the stop sign attack. Even the best variant has relatively poor performance, with robust accuracy under 50% for stronger attacks.
Figure 3 (right) presents the results for randomized smoothing. In this set of experiments, we found that randomized smoothing performs inconsistently. To address this, we used 5 random seeds to repeat the experiments, and use the resulting mean values in the final results. Here, the best variant
uses σ = 0.25, and, unlike experiments with the eyeglass frame attack, significantly outperforms adversarial training, reaching accuracy slightly above 60% even for the stronger attacks. Nevertheless, even randomized smoothing results in significant degradation of effectiveness on adversarial instances (nearly 40%, compared to clean data).
3.3 DISCUSSION
There are two possible reasons why conventional robust ML methods perform poorly against physical attacks: 1) adversarial models involving lp-bounded perturbations are too hard to enable effective robust learning, and 2) the conventional attack model is too much of a mismatch for realistic physical attacks. In Appendix B, we present evidence supporting the latter. Specifically, we find that conventional robust ML models exhibit much higher robustness when faced with the lp-bounded attacks they are trained to be robust to.
4 PROPOSED APPROACH: DEFENSE AGAINST OCCLUSION ATTACKS (DOA)
As we observed in Section 3, conventional models for making deep learning robust to attack can perform quite poorly when confronted with physically realizable attacks. In other words, the evidence strongly suggests that the conventional models of attacks in which attackers can make lp-bounded perturbations to input images are not particularly useful if one is concerned with the main physical threats that are likely to be faced in practice. However, given the diversity of possible physical attacks one may perpetrate, is it even possible to have a meaningful approach for ensuring robustness against a broad range of physical attacks? For example, the two attacks we considered so far couldn’t be more dissimilar: in one, we engineer eyeglass frames; in another, stickers on a stop sign. We observe that the key common element in these attacks, and many other physical attacks we may expect to encounter, is that they involve the introduction of adversarial occlusions to a part of the input. The common constraint faced in such attacks is to avoid being suspicious, which effectively limits the size of the adversarial occlusion, but not necessarily its shape or location. Next, we introduce a simple abstract model of occlusion attacks, and then discuss how such attacks can be computed and how we can make classifiers robust to them.
4.1 ABSTRACT ATTACK MODEL: RECTANGULAR OCCLUSION ATTACKS (ROA)
We propose the following simple abstract model of adversarial occlusions of input images. The attacker introduces a fixed-dimension rectangle. This rectangle can be placed by the adversary anywhere in the image, and the attacker can furthermore introduce l∞ noise inside the rectangle with an exogenously specified high bound ε (for example, ε = 255, which effectively allows addition of arbitrary adversarial noise). This model bears some similarity to l0 attacks, but the rectangle imposes a contiguity constraint, which reflects common physical limitations. The model is clearly abstract: in practice, for example, adversarial occlusions need not be rectangular or have fixed dimensions (for example, the eyeglass frame attack is clearly not rectangular), but at the same time cannot usually be arbitrarily superimposed on an image, as they are implemented in the physical environment. Nevertheless, the model reflects some of the most important aspects common to many physical attacks, such as stickers placed on an adversarially chosen portion of the object we wish to identify. We call our attack model a rectangular occlusion attack (ROA). An important feature of this attack is that it is untargeted: since our ultimate goal is to defend against physical attacks whatever their target, considering untargeted attacks obviates the need to have precise knowledge about the attacker’s goals. For illustrations of the ROA attack, see Appendix C.
4.2 COMPUTING ATTACKS
The computation of ROA attacks involves 1) identifying a region to place the rectangle in the image, and 2) generating fine-grained adversarial perturbations restricted to this region. The former task can be done by an exhaustive search: consider all possible locations for the upper left-hand corner of the rectangle, compute adversarial noise inside the rectangle using PGD for each of these, and choose the worst-case attack (i.e., the attack which maximizes loss computed on the resulting image). However, this approach would be quite slow, since we need to perform PGD inside the rectangle for every possible position. Our approach, consequently, decouples these two tasks. Specifically, we first
perform an exhaustive search using a grey rectangle to find a position for it that maximizes loss, and then fix the position and apply PGD inside the rectangle.
An important limitation of the exhaustive search approach for ROA location is that it necessitates computations of the loss function for every possible location, which itself requires full forward propagation each time. Thus, the search itself is still relatively slow. To speed the process up further, we use the gradient of the input image to identify candidate locations. Specifically, we select a subset of C locations for the sticker with the highest magnitude of the gradient, and only exhaustively search among these C locations. C is exogenously specified to be small relative to the number of pixels in the image, which significantly limits the number of loss function evaluations. Full details of our algorithms for computing ROA are provided in Appendix D.
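The following PyTorch-style sketch puts these pieces together for a single image (batch dimension of one, pixels in the 0–255 range); the helper name roa_attack, the use of a convolution to compute the windowed gradient scores, the hard-coded PGD settings, and the omission of device/dtype handling are illustrative simplifications of the full procedure detailed in Appendix D.

```python
import torch
import torch.nn.functional as F

def roa_attack(model, x, y, loss_fn, rect_h=70, rect_w=70, stride=5, num_candidates=30):
    """Gradient-guided ROA sketch for a single image x of shape (1, 3, H, W), pixels in [0, 255]."""
    # 1. Score candidate positions by the squared input gradient summed over each rectangle.
    x_req = x.clone().requires_grad_(True)
    loss_fn(model(x_req), y).backward()
    g2 = x_req.grad.pow(2).sum(dim=1, keepdim=True)
    scores = F.conv2d(g2, torch.ones(1, 1, rect_h, rect_w), stride=stride)

    # 2. Exhaustively evaluate the loss with a grey rectangle, but only at the top-C candidates.
    flat = scores.flatten()
    top = flat.topk(min(num_candidates, flat.numel())).indices
    w = scores.shape[-1]
    best_loss, best_ij = -float("inf"), (0, 0)
    for idx in top.tolist():
        i, j = (idx // w) * stride, (idx % w) * stride
        x_grey = x.clone()
        x_grey[..., i:i + rect_h, j:j + rect_w] = 127.5            # grey occluding rectangle
        with torch.no_grad():
            l = loss_fn(model(x_grey), y).item()
        if l > best_loss:
            best_loss, best_ij = l, (i, j)

    # 3. l_inf PGD restricted to the chosen rectangle.
    i, j = best_ij
    mask = torch.zeros_like(x)
    mask[..., i:i + rect_h, j:j + rect_w] = 1.0
    eps, alpha = 127.5, 8.0
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(30):
        loss_fn(model(x + mask * delta), y).backward()
        delta.data = (delta.data + alpha * delta.grad.sign()).clamp(-eps, eps) * mask
        delta.data = (x + delta.data).clamp(0, 255) - x
        delta.grad.zero_()
    return (x + mask * delta).detach()
```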
4.3 DEFENDING AGAINST ROA
Once we are able to compute the ROA attack, we apply the standard adversarial training approach for defense. We term the resulting procedure, which yields classifiers robust to our abstract adversarial occlusion attacks, Defense against Occlusion Attacks (DOA), and propose it as an alternative to conventional robust ML for defending against physical attacks. As we will see presently, this defense against ROA is quite adequate for our purposes.
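Schematically, DOA is just the adversarial training loop from Section 2.2 with ROA as the inner attack; a minimal sketch follows (reusing the roa_attack helper sketched above, with batching details elided):

```python
def doa_training(model, loader, opt, loss_fn, epochs=5):
    """DOA sketch: adversarial training where the adversarial examples are rectangular occlusions."""
    for _ in range(epochs):
        for x, y in loader:
            x_adv = roa_attack(model, x, y, loss_fn)
            opt.zero_grad()
            loss_fn(model(x_adv), y).backward()
            opt.step()
    return model
```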
5 EFFECTIVENESS OF DOA AGAINST PHYSICALLY REALIZABLE ATTACKS
We now evaluate the effectiveness of DOA—that is, adversarial training using the ROA threat model we introduced—against physically realizable attacks (see Appendix G for some examples that defeat conventional methods but not DOA). Recall that we consider only digital representations of the corresponding physical attacks. Consequently, we can view our results in this section as a lower bound on robustness to actual physical attacks, which have to deal with additional practical constraints, such as being robust to multiple viewpoints. In addition to the two physical attacks we previously considered, we also evaluate DOA against the adversarial patch attack, implemented on both face recognition and traffic sign data.
5.1 DOA AGAINST ADVERSARIAL EYEGLASSES
We consider two rectangle dimensions resulting in comparable area: 100 × 50 and 70 × 70, both in pixels. Thus, the rectangles occupy approximately 10% of the 224 × 224 face images. We used {30, 50} iterations of PGD with ε = 255/2 to generate adversarial noise inside the rectangle, with learning rate α = {8, 4}, respectively. For the gradient version of ROA, we choose C = 30. DOA adversarial training is performed for 5 epochs with a learning rate of 0.0001.
Figure 4 (left) presents the results comparing the effectiveness of DOA against the eyeglass frame attack on face recognition to adversarial training and randomized smoothing (we took the most robust variants of both of these). We can see that DOA yields significantly more robust classifiers for this domain. The gradient-based heuristic does come at some cost, with performance slightly worse
than when we use exhaustive search, but this performance drop is relatively small, and the result is still far better than conventional robust ML approaches. Figure 4 (middle and right) compares the performance of DOA between two rectangle variants with different dimensions. The key observation is that as long as we use enough iterations of PGD inside the rectangle, changing its dimensions (keeping the area roughly constant) appears to have minimal impact.
5.2 DOA AGAINST THE STOP SIGN ATTACK
We now repeat the evaluation with the traffic sign data and the stop sign attack. In this case, we used 10 × 5 and 7 × 7 rectangles covering ∼5% of the 32 × 32 images. We set C = 10 for the gradient-based ROA. Implementation of DOA is otherwise identical to the face recognition experiments above.
We present our results using the square rectangle, which in this case was significantly more effective; the results for the 10 × 5 rectangle DOA attacks are in Appendix F. Figure 5 (left) compares the effectiveness of DOA against the stop sign attack on traffic sign data with the best variants of adversarial training and randomized smoothing. Our results here are for 30 iterations of PGD; in Appendix F, we study the impact of varying the number of PGD iterations. We can observe that DOA is again significantly more robust, with robust accuracy over 90% for the exhaustive search variant, and ∼85% for the gradient-based variant, even for stronger attacks. Moreover, DOA remains 100% effective at classifying stop signs on clean data, and exhibits ∼95% accuracy on the full traffic sign classification task.
5.3 DOA AGAINST ADVERSARIAL PATCH ATTACKS
Finally, we evaluate DOA against the adversarial patch attacks. In these attacks, an adversarial patch (e.g., sticker) is designed to be placed on an object with the goal of inducing a target prediction. We study this in both face recognition and traffic sign classification tasks. Here, we present the results for face recognition; further detailed results on both datasets are provided in Appendix F.
As we can see from Figure 5 (right), adversarial patch attacks are quite effective once the attack region (fraction of the image) is 10% or higher, with adversarial training and randomized smoothing both performing rather poorly. In contrast, DOA remains highly robust even when the adversarial patch covers 20% of the image.
6 CONCLUSION
As we have shown, conventional methods for making deep learning approaches for image classification robust to physically realizable attacks tend to be relatively ineffective. In contrast, a new threat model we proposed, rectangular occlusion attacks (ROA), coupled with adversarial training, achieves high robustness against several prominent examples of physical attacks. While we explored a number of variations of ROA attacks as a means to achieve robustness against physical attacks, numerous questions remain. For example, can we develop effective methods to certify robustness against ROA, and are the resulting approaches as effective in practice as our method based on a combination of heuristically computed attacks and adversarial training? Are there other types of occlusions that are more effective? Answers to these and related questions may prove a promising
path towards practical robustness of deep learning when deployed for downstream applications of computer vision such as autonomous driving and face recognition.
ACKNOWLEDGMENTS
This work was partially supported by the NSF (IIS-1905558, IIS-1903207), ARO (W911NF-19-10241), and NVIDIA.
A DESCRIPTION OF DATASETS AND DEEP LEARNING CLASSIFIERS
A.1 FACE RECOGNITION
The VGGFace dataset3 (Parkhi et al., 2015) is a benchmark for face recognition, containing 2622 subjects with 2.6 million images in total. We chose ten subjects: A. J. Buckley, A. R. Rahman, Aamir Khan, Aaron Staton, Aaron Tveit, Aaron Yoo, Abbie Cornish, Abel Ferrara, Abigail Breslin, and Abigail Spencer, and subselected face images pertaining only to these individuals. Since approximately half of the images cannot be downloaded, our final dataset contains 300-500 images for each subject.
We used the standard crop-and-resize method to process the data to be 224 × 224 pixels, and split the dataset into training, validation, and test according to a 7:2:1 ratio for each subject. In total, the data set has 3178 images in the training set, 922 images in the validation set, and 470 images in the test set.
We use the VGGFace convolutional neural network (Parkhi et al., 2015) model, a variant of the VGG16 model containing 5 convolutional layer blocks and 3 fully connected layers. We make use of standard transfer learning as we only classify 10 subjects, keeping the convolutional layers the same as in the VGGFace structure,4 but changing the fully connected layers to be 1024 → 1024 → 10 instead of 4096 → 4096 → 2622. Specifically, in our PyTorch implementation, we convert the images from RGB to BGR channel order and subtract the mean value [129.1863, 104.7624, 93.5940] in order to use the pretrained weights from VGG-Face on the convolutional layers. We set the batch size to be 64 and use the PyTorch built-in Adam optimizer with an initial learning rate of 10−4 and default parameters.5 We drop the learning rate by 0.1 every 10 epochs. Additionally, we used validation set accuracy to keep track of model performance and to choose a model in case of overfitting. After 30 epochs of training, the model achieves 98.94% accuracy on the test data.
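As a small illustration of the preprocessing described above (a sketch consistent with the stated values, not the exact training script), a single image with shape (3, H, W) and pixel values in the 0–255 range would be converted as follows.

```python
import torch

def preprocess_vggface(img_rgb):
    """RGB -> BGR channel reordering followed by per-channel mean subtraction."""
    mean_bgr = torch.tensor([129.1863, 104.7624, 93.5940]).view(3, 1, 1)
    img_bgr = img_rgb[[2, 1, 0], :, :]   # reorder channels from RGB to BGR
    return img_bgr - mean_bgr
```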
A.2 TRAFFIC SIGN CLASSIFICATION
To be consistent with (Eykholt et al., 2018), we select the subset of LISA which contains 47 different U.S. traffic signs (Møgelmose et al., 2012). To alleviate the problem of imbalance and extremely blurry data, we picked the 16 best-quality signs, with 3509 training and 1148 validation data points. From the validation data, we obtain the test data that includes only 40 stop signs to evaluate performance with respect to the stop sign attack, as done by Eykholt et al. (2018). In the main body of the paper, we present results only on this test data to evaluate robustness to stop sign attacks. In the appendix below, we also include performance on the full validation set without adversarial manipulation.
All the data was processed by standard crop-and-resize to 32 × 32 pixels. We use the LISA-CNN architecture defined in (Eykholt et al., 2018), and construct a convolutional neural network containing three convolutional layers and one fully connected layer. We use the Adam optimizer with an initial learning rate of 10−1 and default parameters,5 dropping the learning rate by 0.1 every 10 epochs. We set the batch size to be 128. After 30 epochs, we achieve 98.69% accuracy on the validation set, and 100% accuracy in identifying the stop signs in our test data.
3 http://www.robots.ox.ac.uk/~vgg/data/vgg_face/
4 External code that we use for transferring VGG-Face to the PyTorch framework is available at https://github.com/prlz77/vgg-face.pytorch
5 Default PyTorch Adam parameters are β1 = 0.9, β2 = 0.999, and ε = 10−8.
B EFFECTIVENESS OF CONVENTIONAL ROBUST ML METHODS AGAINST l∞ AND l2 ATTACKS
In this appendix, we show that adversarial training and randomized smoothing degrade more gracefully when faced with the attacks that they are designed for. In particular, we consider variants of projected gradient descent (PGD) for both the l∞ and l2 attacks (Madry et al., 2018). The form of PGD for the l∞ attack is
x_{t+1} = Proj( x_t + α · sgn(∇L(x_t; θ)) ),
where Proj is a projection operator which clips the result to be feasible, x_t is the adversarial example in iteration t, α the learning rate, and L(·) the loss function. In the case of an l2 attack, PGD becomes
x_{t+1} = Proj( x_t + α · ∇L(x_t; θ) / ‖∇L(x_t; θ)‖_2 ),
where the projection operator normalizes the perturbation δ = x_{t+1} − x_t to have ‖δ‖_2 ≤ ε if it does not already (Kolter & Madry, 2019).
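A minimal sketch of one update step for each norm, including the corresponding projection, is given below; the helper name pgd_step, the pixel-range clamp, and the small constant guarding against a zero gradient are our own illustrative additions.

```python
import torch

def pgd_step(x_t, grad, x0, eps, alpha, norm="linf"):
    """One PGD update followed by projection onto the eps-ball around the clean input x0."""
    if norm == "linf":
        x_next = x_t + alpha * grad.sign()
        x_next = x0 + (x_next - x0).clamp(-eps, eps)            # l_inf projection
    else:
        x_next = x_t + alpha * grad / grad.norm(p=2).clamp_min(1e-12)
        delta = x_next - x0
        delta = delta * (eps / delta.norm(p=2).clamp_min(eps))  # shrink only if ||delta||_2 > eps
        x_next = x0 + delta
    return x_next.clamp(0, 255)                                 # keep pixels in the valid range
```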
The experiments were done on the face recognition and traffic sign datasets, but unlike physical attacks on stop signs, we now consider adversarial perturbations to all sign images.
B.1 FACE RECOGNITION
We begin with our results on the face recognition dataset. Tables 1 and 2 present results for (curriculum) adversarial training for varying ε of the l∞ attacks, separately for training and evaluation. As we can see, curriculum adversarial training with ε = 16 is generally the most robust, and remains reasonably effective for relatively large perturbations. However, we do observe a clear tradeoff between accuracy on non-adversarial data and robustness, as one would expect.
Table 3 presents the results of using randomized smoothing on face recognition data, when facing the l2 attacks. Again, we observe a high level of robustness and, in most cases, relatively limited drop in performance, with σ = 0.5 perhaps striking the best balance.
B.2 TRAFFIC SIGN CLASSIFICATION
Tables 4 and 5 present evaluation on traffic sign data for curriculum adversarial training against the l∞ attack for varying ε. As with face recognition data, we can observe that the approaches tend to be relatively robust, and effective on non-adversarial data for adversarial training methods using ε < 32.
The results of randomized smoothing on traffic sign data are given in Table 6. Since images are smaller here than in VGGFace, lower values of ε for the l2 attacks are meaningful, and for ε ≤ 1 we
generally see robust performance on randomized smoothing, with σ = 0.5 providing a good balance between non-adversarial accuracy and robustness, just as before.
C EXAMPLES OF ROA
Figure 6 provides several examples of the ROA attack in the context of face recognition. Note that in these examples, the adversary chooses to place the occluding noise over the upper lip and eye areas of the image, and, indeed, this makes the face more challenging to recognize even to a human observer.
D DETAILED DESCRIPTION OF THE ALGORITHMS FOR COMPUTING THE RECTANGULAR OCCLUSION ATTACKS
Our basic algorithm for computing rectangular occlusion attacks (ROA) proceeds through the following two steps:
1. Iterate through possible positions for the rectangle’s upper left-hand corner point in the image. Find the position for a grey rectangle (RGB value =[127.5,127.5,127.5]) in the image that maximizes loss.
2. Generate high-ε l∞ noise inside the rectangle at the position computed in step 1.
Algorithm 1 presents the full algorithm for identifying the ROA position, which amounts to exhaustive search through the image pixel region. This algorithm has several parameters. First, we assume that images are squares with dimensions N × N. Second, we introduce a stride parameter S. The purpose of this parameter is to make location computation faster by only considering every Sth pixel during the search (in other words, we skip S pixels each time). For our implementation of ROA attacks, we choose the stride parameter S = 5 for face recognition and S = 2 for traffic sign classification.
Algorithm 1 Computation of ROA position using exhaustive search.
Input: Data: X_i, y_i; test data shape: N × N; target model parameters: θ; stride: S.
Output: ROA position: (j′, k′).
1. function ExhaustiveSearching(Model, X_i, y_i, N, S)
2.   for j in range(N/S) do:
3.     for k in range(N/S) do:
4.       generate the adversarial image X_i^adv by
5.         placing a grey rectangle onto the image with top-left corner at (j × S, k × S);
6.       if L(X_i^adv, y_i, θ) is higher than the previous best loss:
7.         update (j′, k′) = (j, k)
8.     end for
9.   end for
10.  return (j′, k′)
Algorithm 2 Computation of ROA position using gradient-based search.
Input: Data: X_i, y_i; test data shape: N × N; target model parameters: θ; stride: S; number of potential candidates: C.
Output: Best sticker position: (j′, k′).
1. function GradientBasedSearch(X_i, y_i, N, S, C, θ)
2.   calculate the gradient ∇L of L(X_i, y_i, θ) with respect to X_i
3.   J, K = HelperSearching(∇L, N, S, C)
4.   for (j, k) in (J, K) do:
5.     generate the adversarial image X_i^adv by
6.       placing the sticker on the image with top-left corner at (j × S, k × S);
7.     if L(X_i^adv, y_i, θ) is higher than the previous best loss:
8.       update (j′, k′) = (j, k)
9.   end for
10.  return (j′, k′)

1. function HelperSearching(∇L, N, S, C)
2.   for j in range(N/S) do:
3.     for k in range(N/S) do:
4.       calculate the sensitivity value L = Σ_{i ∈ rectangle} (∇L_i)² for the rectangle with top-left corner at (j × S, k × S);
5.       if this sensitivity value is among the top C values seen so far:
6.         put (j, k) into (J, K) and discard the entry (j_s, k_s) with the lowest sensitivity
7.     end for
8.   end for
9.   return J, K
Despite introducing the tunable stride parameter, the search for the best location for ROA still entails a large number of loss function evaluations, which are somewhat costly (since each such evaluation means a full forward pass through the deep neural network), and these costs add up quickly. To speed things up, we consider using the magnitude of the gradient of the loss as a measure of sensitivity of particular regions to manipulation. Specifically, suppose that we compute a gradient ∇L, and let ∇Li be the gradient value for a particular pixel i in the image. Now, we can iterate over the possible ROA locations, but for each location compute the gradient of the loss at that location corresponding to the rectangular region. We do this by adding squared gradient values (∇Li)2 over pixels i in the rectangle. We use this approach to find the top C candidate locations for the rectangle. Finally, we consider each of these, computing the actual loss for each location, to find the position of ROA. The full algorithm is provided as Algorithm 2.
Once we’ve found the place for the rectangle, our next step is to introduce adversarial noise inside it. For this, we use the l∞ version of the PGD attack, restricting perturbations to the rectangle. We used {7, 20, 30, 50} iterations of PGD to generate adversarial noise inside the rectangle, with learning rate α = {32, 16, 8, 4}, respectively.
Figure 8 offers a visual illustration of how gradient-based search compares to exhaustive search for computing ROA.
E DETAILS OF PHYSICALLY REALIZABLE ATTACKS
Physically realizable attacks that we study have a common feature: first, they specify a mask, which is typically precomputed, and subsequently introduce adversarial noise inside the mask area. Let M denote the mask matrix constraining the area of the perturbation δ; M has the same dimensions as the input image and contains 0s where no perturbation is allowed, and 1s in the area which can be perturbed. The physically realizable attacks we consider then solve an optimization problem of the following form:
argmax_δ L(f(x + Mδ; θ), y).     (4)
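A sketch of how Eq. (4) can be approximated with PGD restricted to the mask is given below; the function name masked_pgd and the hard-coded step size and iteration count are illustrative, not taken from any of the original attack implementations.

```python
import torch

def masked_pgd(model, x, y, loss_fn, mask, alpha=4.0, iters=30):
    """Maximize the loss while only perturbing pixels where mask == 1 (pixels in [0, 255])."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss_fn(model(x + mask * delta), y).backward()
        delta.data = (delta.data + alpha * delta.grad.sign()) * mask   # update only inside the mask
        delta.data = (x + delta.data).clamp(0, 255) - x                # keep pixels valid
        delta.grad.zero_()
    return (x + mask * delta).detach()
```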
Next, we describe the details of the three physical attacks we consider in the main paper.
E.1 EYEGLASS FRAME ATTACKS ON FACE RECOGNITION
Following Sharif et al. (2016), we first initialized the eyeglass frame with 5 different colors, and chose the best starting color by calculating the cross-entropy loss. For each update step, we divided the gradient value by its maximum value before multiplying by the learning rate, which is 20. Then we kept only the gradient values in the eyeglass frame area. Finally, we clipped and rounded the pixel values to keep them in the valid range.
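For concreteness, one update step of this procedure can be sketched as follows; the function name and the small constant guarding against division by zero are ours, and frame_mask is assumed to be a 0/1 tensor marking the eyeglass frame area.

```python
import torch

def eyeglass_frame_step(x_adv, grad, frame_mask, lr=20.0):
    """One step of the eyeglass-frame attack: max-normalized gradient ascent restricted to the frame."""
    step = lr * grad / (grad.abs().max() + 1e-12)   # divide the gradient by its maximum value, then scale
    x_adv = x_adv + step * frame_mask               # keep only the update inside the frame area
    return x_adv.clamp(0, 255).round()              # clip and round to valid pixel values
```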
E.2 STICKER ATTACKS ON STOP SIGNS
Following Eykholt et al. (2018), we initialized the stickers on the stop signs with random noise. For each update step, we used the Adam optimizer with 0.1 learning rate and with default parameters. Just as for other attacks, adversarial perturbations were restricted to the mask area exogenously specified; in our case, we used the same mask as Eykholt et al. (2018)—a collection of small rectangles.
E.3 ADVERSARIAL PATCH ATTACK
We used gradient ascent to maximize the log probability of the targeted class P [ytarget|x], as in the original paper (Brown et al., 2018). When implementing the adversarial patch, we used a square patch rather than the circular patch in the original paper; we don’t anticipate this choice to be practically consequential. We randomly chose the position and direction of the patch, used the learning rate of 5, and fixed the number of attack iterations to 100 for each image. We varied the attack region (mask) R ∈ {0%, 5%, 10%, 15%, 20%, 25%}. For the face recognition dataset, we used 27 images (9 classes (without targeted class) × 3 images in each class) to design the patch, and then ran the attack over 20 epochs. For the smaller traffic sign dataset, we used 15 images (15 classes (without targeted class) × 1 image in each class) to design the patch, and then ran the attack over 5 epochs. Note that when evaluating the adversarial patch, we used the validation set without the targeted class images.
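As an illustrative sketch of this attack, simplified to a fixed orientation, a single training pass, and an unrotated square patch (the names, the padding-based placement, and the plain gradient step are our own assumptions rather than the original implementation):

```python
import torch
import torch.nn.functional as F

def train_adversarial_patch(model, images, target, patch_size=50, lr=5.0, iters=100):
    """Simplified adversarial-patch sketch: one shared square patch placed at a random
    location on each image, trained to maximize log P[target]."""
    patch = torch.rand(3, patch_size, patch_size) * 255
    patch.requires_grad_(True)
    for _ in range(iters):
        loss = 0.0
        for x in images:                                   # x: (3, H, W), pixels in [0, 255]
            _, H, W = x.shape
            i = int(torch.randint(0, H - patch_size + 1, (1,)))
            j = int(torch.randint(0, W - patch_size + 1, (1,)))
            placed = F.pad(patch, (j, W - j - patch_size, i, H - i - patch_size))
            mask = torch.zeros_like(x)
            mask[:, i:i + patch_size, j:j + patch_size] = 1.0
            x_adv = x * (1 - mask) + placed
            logp = torch.log_softmax(model(x_adv.unsqueeze(0)), dim=1)
            loss = loss - logp[0, target]                  # gradient ascent on log P[target | x]
        loss.backward()
        with torch.no_grad():
            patch -= lr * patch.grad
            patch.clamp_(0, 255)
            patch.grad.zero_()
    return patch.detach()
```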
F ADDITIONAL EXPERIMENTS WITH DOA
F.1 FACE RECOGNITION AND EYEGLASS FRAME ATTACK
F.2 TRAFFIC SIGN CLASSIFICATION AND THE STOP SIGN ATTACK
F.3 EVALUATION WITH THE ADVERSARIAL PATCH ATTACK
F.3.1 FACE RECOGNITION
F.3.2 TRAFFIC SIGN CLASSIFICATION
G EXAMPLES OF PHYSICALLY REALIZABLE ATTACKS AGAINST ALL DEFENSE MODELS
G.1 FACE RECOGNITION
G.2 TRAFFIC SIGN CLASSIFICATION
H EFFECTIVENESS OF DOA METHODS AGAINST l∞ ATTACKS
For completeness, this section includes evaluation of DOA in the context of l∞-bounded attacks implemented using PGD, though these are outside the scope of our threat model.
Table 23 presents results of several variants of DOA against PGD attacks in the context of face recognition, while Table 24 considers these in traffic sign classification. The results are quite consistent with intuition: DOA is largely unhelpful against these attacks. The reason is that DOA fundamentally assumes that the attacker only modifies a relatively small proportion (∼5%) of the scene (and the resulting image), as otherwise the physical attack would be highly suspicious. l∞-bounded attacks, on the other hand, modify all pixels.
I EFFECTIVENESS OF DOA METHODS AGAINST l0 ATTACKS
In addition to considering physical attacks, we evaluate the effectiveness of DOA against Jacobian-based saliency map attacks (JSMA) (Papernot et al., 2015) for implementing l0-constrained adversarial examples. As Figure 13 shows, in both face recognition and traffic sign classification, DOA is able to improve classification robustness compared to the original model.
[Figure 13: panels for Face Recognition and Traffic Sign Classification.]
J EFFECTIVENESS OF DOA AGAINST OTHER MASK-BASED ATTACKS
To further illustrate the ability of DOA to generalize, we evaluate its effectiveness in the context of three additional occlusion patterns: a union of triangles and a circle, a single larger triangle, and a heart pattern.
As the results in Figures 14 and 15 suggest, DOA is able to generalize successfully to a variety of physical attack patterns. It is particularly noteworthy that the larger patterns (large triangle—middle of the figure, and large heart—right of the figure) are actually quite suspicious (particularly the heart pattern), as they occupy a significant fraction of the image (the heart mask, for example, accounts for 8% of the face).
[Figures 14 and 15: examples of the three additional masks (Mask1, Mask2, Mask3) applied to a face image (Abbie Cornish) and to a traffic sign (Keep Right).] | 1. What is the main contribution of the paper regarding threat models for physically realizable attacks?
2. What are the strengths of the paper, particularly in its thorough experiments and careful distinction between physically realizable and realized attacks?
3. How does the reviewer assess the paper's limitation in testing the robustness of models trained on rectangular occlusions against non-rectangular attacks?
4. What is the reviewer's concern regarding the high-level claimed take-away about adversarial training not helping against physically realizable attacks?
5. How does the reviewer suggest improving the paper's language and tone to avoid subjective intensifiers and ensure clarity? | Review | Review
[Note: I gave a 3 for thoroughness even though I only read the paper once, because I believe that I carefully considered the paper while reading it.]
This paper argues that threat models such as L-inf are limited when considering physically realizable attacks, and provides evidence for this by showing that L-inf adversarial training is insufficient to fully confer robustness against physically realizable attacks in the literature such as the adversarial glasses attack. The paper then proposes an alternate threat model based on contiguous rectangular regions, and shows that adversarial training against this model does far better.
Overall this is a strong paper with thorough experiments, and was for the most part carefully written (although see some quibbles below). I particularly liked the paper's careful distinction between physically *realizable* and physically *realized* attacks, and the admission that not all realizable attacks would fall under the threat model (while still justifying why the model is interesting).
I am on the border of weak accept and strong accept. The main two points keeping me from strong accept are discussed below (and seem perhaps addressable by the authors):
1. The assertion that rectangular occlusions might be a fruitful model for realizable attacks is only lightly tested, since the attacks considered in this paper all fall into the rectangular occlusions threat model. Since one of the key points in the paper is that training in the L-inf threat model could leave vulnerabilities to non-Linf attacks, we might expect the same with rectangular occlusions. A more convincing demonstration would be to test robustness to physically realizable attacks that fall outside the rectangular model (perhaps even skewed or rotated rectangles, although a less synthetic example would be even better). A more minor point is that it would be nice to test the robustness of models trained on rectangular occlusions against L-inf adversarial attacks. This is relevant to knowing whether training on occlusions is giving up on adversarial robustness or actually helps against both (I actually think it's plausible that you would do well on L-inf, but it seems worth testing either way).
2. The high-level claimed take-away that adversarial training does not help against physically realizable attacks seems false in light of Figure 2, which shows that L-inf adversarial training substantially improves robustness relative to the baseline. A more accurate take-away would be that on some datasets adversarial training helps but still leaves a gap, while on others it does not help at all and perhaps hurts. I would prefer more careful wording as a reader who only goes through the introduction might not see Figure 2.
On #1, I should stress that the paper would still be interesting regardless of the outcome of the two experiments in #1; I just think that for thoroughness it would be nice to include them. The existing experiments are already thorough so this would be going above and beyond, but that is why it might help raise my score from weak to strong accept.
EDITED TO ADD: For #2, I would also be happy with an argument from the authors as to why the current language appropriately describes the experiments. I do not wish to dictate to the authors what their take-aways are, but more to open up for discussion a point that seemed slightly sloppy to me.
Minor comments:
Please avoid subjective intensifiers: "We then use an extensive experimental evaluation to demonstrate that our proposed approach is far more robust against physical attacks on deep neural networks than adversarial training and randomized smoothing methods that leverage lp-based attack models." Both "extensive" and "far" are unnecessary.
Make sure to use \citep vs \citet correctly.
First sentence of 2.1 is too verbose. Overall the prose in that paragraph is turgid, due to too many action phrases being turned into nouns. E.g. "The focus is on", "The typical goal is" both indicate *action* and could be profitably turned into verbs. See Williams and Bizup's book on Style.
"since our ultimate goal is to defend against physical attacks, untargeted attacks that aim to maximize error are the most useful" This seems weak; I don't understand why an attack being physical should go in line with it being untargeted; oftentimes an attacker will have a specific targeted goal.
"another advantage of the ROA attack is that it is, in principle,easier to compute than, say,l∞-bounded attacks" This seems incongruous with the subsequent text, which admits that ROA if implemented naively would be slower than L-inf attacks. |
ICLR | Title
Defending Against Physically Realizable Attacks on Image Classification
Abstract
We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit limited effectiveness against three of the highest profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks. 1
1 INTRODUCTION
The state-of-the-art effectiveness of deep neural networks has made them the technique of choice in a variety of fields, including computer vision (He et al., 2016), natural language processing (Sutskever et al., 2014), and speech recognition (Hinton et al., 2012). However, there have been a myriad of demonstrations showing that deep neural networks can be easily fooled by carefully perturbing pixels in an image through what have become known as adversarial example attacks (Szegedy et al., 2014; Goodfellow et al., 2015; Carlini & Wagner, 2017b; Vorobeychik & Kantarcioglu, 2018). In response, a large literature has emerged on defending deep neural networks against adversarial examples, typically either proposing techniques for learning more robust neural network models (Wong & Kolter, 2018; Wong et al., 2018; Raghunathan et al., 2018b; Cohen et al., 2019; Madry et al., 2018), or by detecting adversarial inputs (Metzen et al., 2017; Xu et al., 2018).
Particularly concerning, however, have been a number of demonstrations that implement adversarial perturbations directly in physical objects that are subsequently captured by a camera, and then fed through the deep neural network classifier (Boloor et al., 2019; Eykholt et al., 2018; Athalye et al., 2018b; Brown et al., 2018; Bhattad et al., 2020). Among the most significant of such physical attacks on deep neural networks are three that we specifically consider here: 1) the attack which fools face recognition by using adversarially designed eyeglass frames (Sharif et al., 2016), 2) the attack which fools stop sign classification by adding adversarially crafted stickers (Eykholt et al., 2018), and 3) the universal adversarial patch attack, which causes targeted misclassification of any object with the adversarially designed sticker (patch) (Brown et al., 2018). Oddly, while considerable attention has been devoted to defending against adversarial perturbation attacks in the digital space, there are no effective methods specifically to defend against such physical attacks.
Our first contribution is an empirical evaluation of the effectiveness of conventional approaches to robust ML against two physically realizable attacks: the eyeglass frame attack on face recognition (Sharif et al., 2016) and the sticker attack on stop signs (Eykholt et al., 2018). Specifically, we study the performance on adversarial training and randomized smoothing against these attacks, and show that both have limited effectiveness in this context (quite ineffective in some settings, and somewhat more effective, but still not highly robust, in others), despite showing moderate effectiveness against l∞ and l2 attacks, respectively.
1The code can be found at https://github.com/tongwu2020/phattacks
Our second contribution is a novel abstract attack model which more directly captures the nature of common physically realizable attacks than the conventional lp-based models. Specifically, we consider a simple class of rectangular occlusion attacks in which the attacker places a rectangular sticker onto an image, with both the location and the content of the sticker adversarially chosen. We develop several algorithms for computing such adversarial occlusions, and use adversarial training to obtain neural network models that are robust to these. We then experimentally demonstrate that our proposed approach is significantly more robust against physical attacks on deep neural networks than adversarial training and randomized smoothing methods that leverage lp-based attack models.
Related Work While many approaches for defending deep learning in vision applications have been proposed, robust learning methods have been particularly promising, since alternatives are often defeated soon after being proposed (Madry et al., 2018; Raghunathan et al., 2018a; Wong & Kolter, 2018; Vorobeychik & Kantarcioglu, 2018). The standard solution approach for this problem is an adaptation of Stochastic Gradient Descent (SGD) where gradients are either with respect to the loss at the optimal adversarial perturbation for each i (or approximation thereof, such as using heuristic local search (Goodfellow et al., 2015; Madry et al., 2018) or a convex over-approximation (Raghunathan et al., 2018b; Wang et al., 2018)), or with respect to the dual of the convex relaxation of the attacker maximization problem (Raghunathan et al., 2018a; Wong & Kolter, 2018; Wong et al., 2018). Despite these advances, adversarial training a la Madry et al. (2018) remains the most practically effective method for hardening neural networks against adversarial examples with l∞-norm perturbation constraints. Recently, randomized smoothing emerged as another class of techniques for obtaining robustness (Lecuyer et al., 2019; Cohen et al., 2019), with the strongest results in the context of l2-norm attacks. In addition to training neural networks that are robust by construction, a number of methods study the problem of detecting adversarial examples (Metzen et al., 2017; Xu et al., 2018), with mixed results (Carlini & Wagner, 2017a). Of particular interest is recent work on detecting physical adversarial examples (Chou et al., 2018). However, detection is inherently weaker than robustness, which is our goal, as even perfect detection does not resolve the question of how to make decisions on adversarial examples. Finally, our work is in the spirit of other recent efforts that characterize robustness of neural networks to physically realistic perturbations, such as translations, rotations, blurring, and contrast (Engstrom et al., 2019; Hendrycks & Dietterich, 2019).
2 BACKGROUND
2.1 ADVERSARIAL EXAMPLES IN THE DIGITAL AND PHYSICAL WORLD
Adversarial examples involve modifications of input images that are either invisible to humans, or unsuspicious, and that cause systematic misclassification by state-of-the-art neural networks (Szegedy et al., 2014; Goodfellow et al., 2015; Vorobeychik & Kantarcioglu, 2018). Commonly, approaches for generating adversarial examples aim to solve an optimization problem of the following form:
argmax_δ L(f(x + δ; θ), y)   s.t.   ‖δ‖_p ≤ ε,     (1)
where x is the original input image, δ is the adversarial perturbation, L(·) is the adversary’s utility function (for example, the adversary may wish to maximize the cross-entropy loss), and ‖ · ‖p is some lp norm. While a host of such digital attacks have been proposed, two have come to be viewed as state of the art: the attack developed by Carlini & Wagner (2017b), and the projected gradient descent attack (PGD) by Madry et al. (2018).
While most of the work to date has been on attacks which modify the digital image directly, we focus on a class of physical attacks which entail modifying the actual object being photographed in order to fool the neural network that subsequently takes its digital representation as input. The attacks we will focus on will have three characteristics:
1. the attack can be implemented in the physical space (e.g., by modifying the stop sign);
2. the attack has low suspiciousness; this is operationalized by modifying only a small part of the object, with the modification similar to common “noise” that obtains in the real world; for example, stickers on a stop sign would appear to most people as vandalism, but covering the stop sign with a printed poster would look highly suspicious; and
3. the attack causes misclassification by a state-of-the-art deep neural network.
Since our ultimate purpose is defense, we will not concern ourselves with the issue of actually implementing the physical attacks. Instead, we will consider the digital representation of these attacks, ignoring other important issues, such as robustness to many viewpoints and printability. For example, in the case where the attack involves posting stickers on a stop sign, we will only be concerned with simulating such stickers on digital images of stop signs. For this reason, we refer to such attacks as physically realizable attacks, to allude to the fact that it is possible to realize them in practice. It is evident that physically realizable attacks represent a somewhat stronger adversarial model than their actual implementation in the physical space. Henceforth, for simplicity, we will use the terms physical attacks and physically realizable attacks interchangeably.
We consider three physically realizable attacks. The first is the attack on face recognition by Sharif et al. (2016), in which the attacker adds adversarial noise inside printed eyeglass frames that can subsequently be put on to fool the deep neural network (Figure 1a). The second attack posts adversarially crafted stickers on a stop sign to cause it to be misclassified as another road sign, such as the speed limit sign (Figure 1b) (Eykholt et al., 2018). The third, adversarial patch, attack designs a patch (a sticker) with adversarial noise that can be placed onto an arbitrary object, causing that object to be misclassified by a deep neural network (Brown et al., 2018).
2.2 ADVERSARIALLY ROBUST DEEP LEARNING
While numerous approaches have been proposed for making deep learning robust, many are heuristic and have soon after been defeated by more sophisticated attacks (Carlini & Wagner, 2017b; He et al., 2017; Carlini & Wagner, 2017a; Athalye et al., 2018a). Consequently, we focus on principled approaches for defense that have not been broken. These fall broadly into two categories: robust learning and randomized smoothing. We focus on a state-of-the-art representative from each class.
Robust Learning The goal of robust learning is to minimize a robust loss, defined as follows:
θ* = argmin_θ  E_{(x,y)∼D} [ max_{‖δ‖_p ≤ ε} L(f(x + δ; θ), y) ],     (2)
where D denotes the training data set. In itself this is a highly intractable problem. Several techniques have been developed to obtain approximate solutions. Among the most effective in practice is the adversarial training approach by Madry et al. (2018), who use the PGD attack as an approximation to the inner optimization problem, and then take gradient descent steps with respect to the associated adversarial inputs. In addition, we consider a modified version of this approach termed curriculum adversarial training (Cai et al., 2018). Our implementation of this approach proceeds as follows: first, apply adversarial training for a small ε, then increase ε and repeat adversarial training, and so on, increasing ε until we reach the desired level of adversarial noise we wish to be robust to.
Randomized Smoothing The second class of techniques we consider works by adding noise to inputs at both training and prediction time. The key idea is to construct a smoothed classifier g(·) from a base classifier f(·) by perturbing the input x with isotropic Gaussian noise with standard deviation σ. The prediction is then made by choosing a class with the highest probability measure with respect to the induced distribution of f(·) decisions:
g(x) = argmax_c  P( f(x + δ) = c ),   δ ∼ N(0, σ²I).     (3)
To achieve provably robust classification in this manner one typically trains the classifier f(·) by adding Gaussian noise to inputs at training time (Lecuyer et al., 2019; Cohen et al., 2019).
3 ROBUSTNESS OF CONVENTIONAL ROBUST ML METHODS AGAINST PHYSICAL ATTACKS
Most of the approaches for endowing deep learning with adversarial robustness focus on adversarial models in which the attacker introduces lp-bounded adversarial perturbations over the entire input. Earlier we described two representative approaches in this vein: adversarial training, commonly focused on robustness against l∞ attacks, and randomized smoothing, which is most effective against l2 attacks (although certification bounds can be extended to other lp norms as well). We call these methods conventional robust ML.
In this section, we ask the following question:
Are conventional robust ML methods robust against physically realizable attacks?
This is similar to the question asked in the context of malware classifier evasion by Tong et al. (2019), who found that lp-based robust ML methods can indeed be successful in achieving robustness against realizable evasion attacks. Ours is the first investigation of this issue in computer vision applications and for deep neural networks, where attacks involve adversarial masking of objects.2
We study this issue experimentally by considering two state-of-the-art approaches for robust ML: adversarial training a-la-Madry et al. (2018), along with its curriculum learning variation (Cai et al., 2018), and randomized smoothing, using the implementation by Cohen et al. (2019). These approaches are applied to defend against two physically realizable attacks described in Section 2.1: an attack on face recognition which adds adversarial eyeglass frames to faces (Sharif et al., 2016), and an attack on stop sign classification which adds adversarial stickers to a stop sign to cause misclassification (Eykholt et al., 2018).
We consider several variations of adversarial training, as a function of the l∞ bound, ε, imposed on the adversary. Just as Madry et al. (2018), adversarial instances in adversarial training were generated using PGD. We consider attacks with ε ∈ {4, 8} (adversarial training failed to make progress when we used ε = 16). For curriculum adversarial training, we first performed adversarial training with ε = 4, then doubled ε to 8 and repeated adversarial training with the model robust to ε = 4, then doubled ε again, and so on. In the end, we learned models for ε ∈ {4, 8, 16, 32}. For all versions of adversarial training, we consider 7 and 50 iterations of the PGD attack. We used the learning rate of ε/4 for the former and 1 for the latter. In all cases, pixels are in the 0–255 range and retraining was performed for 30 epochs using the ADAM optimizer.
For randomized smoothing, we consider noise levels σ ∈ {0.25, 0.5, 1} as in Cohen et al. (2019), and take 1,000 Monte Carlo samples at test time.
3.1 ADVERSARIAL EYEGLASSES IN FACE RECOGNITION
We applied white-box dodging (untargeted) attacks on the face recognition systems (FRS) from Sharif et al. (2016). We used both the VGGFace data and transferred VGGFace CNN model for the face recognition task, subselecting 10 individuals, with 300-500 face images for each. Further details about the dataset, CNN architecture, and training procedure are in Appendix A. For the attack, we used identical frames as in Sharif et al. (2016) occupying 6.5% of the pixels. Just as Sharif et al. (2016), we compute attacks (that is, adversarial perturbations inside the eyeglass frame area) by using the learning rate 20 as well as momentum value 0.4, and vary the number of attack iterations between 0 (no attack) and 300.
Figure 2 presents the results of classifiers obtained from adversarial training (left) as well as curriculum adversarial training (middle), in terms of accuracy (after the attack) as a function of the number of iterations of the Sharif et al. (2016) eyeglass frame attack. First, it is clear that none of the variations of adversarial training are particularly effective once the number of physical attack iterations is above 20. The best performance in terms of adversarial robustness is achieved by adversarial training with ε = 8, for approaches using either 7 or 50 PGD iterations (the difference between these appears negligible). However, non-adversarial accuracy for these models is below 70%, a ∼20% drop in accuracy compared to the original model. Moreover, adversarial accuracy is under 40% for sufficiently strong physical attacks. Curriculum adversarial training generally achieves significantly higher non-adversarial accuracy, but is far less robust, even when trained with PGD attacks that use ε = 32.
2Several related efforts study robustness of deep neural networks to other variants of physically realistic perturbations (Engstrom et al., 2019; Hendrycks & Dietterich, 2019).
Figure 2 (right) shows the performance of randomized smoothing when faced with the eyeglass frames attack. It is readily apparent that randomized smoothing is ineffective at deflecting this physical attack: even as we vary the amount of noise we add, accuracy after attacks is below 20% even for relatively weak attacks, and often drops to nearly 0 for sufficiently strong attacks.
3.2 ADVERSARIAL STICKERS ON STOP SIGNS
Following Eykholt et al. (2018), we use the LISA traffic sign dataset for our experiments, take 40 stop signs from this dataset as our test data, and perform untargeted attacks (this is in contrast to the original work, which is focused on targeted attacks). For the detailed description of the data and the CNN used for traffic sign prediction, see Appendix A. We apply the same settings as in the original attacks and use the ADAM optimizer with the same parameters. Since we observed few differences in performance between running PGD for 7 vs. 50 iterations, adversarial training methods in this section all use 7 iterations of PGD.
Again, we begin by considering adversarial training (Figure 3, left and middle). In this case, both the original and curriculum versions of adversarial training with PGD are ineffective when ε = 32 (error rates on clean data are above 90%); these are consequently omitted from the plots. Curriculum adversarial training with ε = 16 has the best performance on adversarial data, and works well on clean data. Surprisingly, most variants of adversarial training perform at best marginally better than the original model against the stop sign attack. Even the best variant has relatively poor performance, with robust accuracy under 50% for stronger attacks.
Figure 3 (right) presents the results for randomized smoothing. In this set of experiments, we found that randomized smoothing performs inconsistently. To address this, we used 5 random seeds to repeat the experiments, and use the resulting mean values in the final results. Here, the best variant
uses σ = 0.25, and, unlike experiments with the eyeglass frame attack, significantly outperforms adversarial training, reaching accuracy slightly above 60% even for the stronger attacks. Nevertheless, even randomized smoothing results in significant degradation of effectiveness on adversarial instances (nearly 40%, compared to clean data).
3.3 DISCUSSION
There are two possible reasons why conventional robust ML methods perform poorly against physical attacks: 1) adversarial models involving lp-bounded perturbations are too hard to enable effective robust learning, and 2) the conventional attack model is too much of a mismatch for realistic physical attacks. In Appendix B, we present evidence supporting the latter. Specifically, we find that conventional robust ML models exhibit much higher robustness when faced with the lp-bounded attacks they are trained to be robust to.
4 PROPOSED APPROACH: DEFENSE AGAINST OCCLUSION ATTACKS (DOA)
As we observed in Section 3, conventional models for making deep learning robust to attack can perform quite poorly when confronted with physically realizable attacks. In other words, the evidence strongly suggests that the conventional models of attacks in which attackers can make lp-bounded perturbations to input images are not particularly useful if one is concerned with the main physical threats that are likely to be faced in practice. However, given the diversity of possible physical attacks one may perpetrate, is it even possible to have a meaningful approach for ensuring robustness against a broad range of physical attacks? For example, the two attacks we considered so far couldn’t be more dissimilar: in one, we engineer eyeglass frames; in another, stickers on a stop sign. We observe that the key common element in these attacks, and many other physical attacks we may expect to encounter, is that they involve the introduction of adversarial occlusions to a part of the input. The common constraint faced in such attacks is to avoid being suspicious, which effectively limits the size of the adversarial occlusion, but not necessarily its shape or location. Next, we introduce a simple abstract model of occlusion attacks, and then discuss how such attacks can be computed and how we can make classifiers robust to them.
4.1 ABSTRACT ATTACK MODEL: RECTANGULAR OCCLUSION ATTACKS (ROA)
We propose the following simple abstract model of adversarial occlusions of input images. The attacker introduces a fixed-dimension rectangle. This rectangle can be placed by the adversary anywhere in the image, and the attacker can furthermore introduce l∞ noise inside the rectangle with an exogenously specified high bound ε (for example, ε = 255, which effectively allows addition of arbitrary adversarial noise). This model bears some similarity to l0 attacks, but the rectangle imposes a contiguity constraint, which reflects common physical limitations. The model is clearly abstract: in practice, for example, adversarial occlusions need not be rectangular or have fixed dimensions (for example, the eyeglass frame attack is clearly not rectangular), but at the same time cannot usually be arbitrarily superimposed on an image, as they are implemented in the physical environment. Nevertheless, the model reflects some of the most important aspects common to many physical attacks, such as stickers placed on an adversarially chosen portion of the object we wish to identify. We call our attack model a rectangular occlusion attack (ROA). An important feature of this attack is that it is untargeted: since our ultimate goal is to defend against physical attacks whatever their target, considering untargeted attacks obviates the need to have precise knowledge about the attacker's goals. For illustrations of the ROA attack, see Appendix C.
4.2 COMPUTING ATTACKS
The computation of ROA attacks involves 1) identifying a region to place the rectangle in the image, and 2) generating fine-grained adversarial perturbations restricted to this region. The former task can be done by an exhaustive search: consider all possible locations for the upper left-hand corner of the rectangle, compute adversarial noise inside the rectangle using PGD for each of these, and choose the worst-case attack (i.e., the attack which maximizes loss computed on the resulting image). However, this approach would be quite slow, since we need to perform PGD inside the rectangle for every possible position. Our approach, consequently, decouples these two tasks. Specifically, we first
perform an exhaustive search using a grey rectangle to find a position for it that maximizes loss, and then fix the position and apply PGD inside the rectangle.
An important limitation of the exhaustive search approach for ROA location is that it necessitates computations of the loss function for every possible location, which itself requires full forward propagation each time. Thus, the search itself is still relatively slow. To speed the process up further, we use the gradient of the input image to identify candidate locations. Specifically, we select a subset of C locations for the sticker with the highest magnitude of the gradient, and only exhaustively search among these C locations. C is exogenously specified to be small relative to the number of pixels in the image, which significantly limits the number of loss function evaluations. Full details of our algorithms for computing ROA are provided in Appendix D.
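To make the two steps concrete, the following is a minimal PyTorch-style sketch of the gradient-guided variant (a sketch only: the function names, the assumption that pixel values lie in [0, 1], and the default stride and candidate counts are illustrative, not our exact implementation). Here x is a single image tensor of shape (C, H, W), y a scalar label tensor, and loss_fn a criterion such as torch.nn.functional.cross_entropy.

import torch

def roa_location(model, loss_fn, x, y, rect_h, rect_w, stride=5, num_candidates=30):
    # Step 1: pick the top-left corner (i, j) of the grey rectangle that maximizes the loss.
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    sensitivity = x.grad.pow(2).sum(dim=0)               # per-pixel squared-gradient map, shape (H, W)
    H, W = sensitivity.shape

    scored = []                                           # shortlist candidate positions by sensitivity
    for i in range(0, H - rect_h + 1, stride):
        for j in range(0, W - rect_w + 1, stride):
            scored.append((sensitivity[i:i + rect_h, j:j + rect_w].sum().item(), i, j))
    candidates = sorted(scored, reverse=True)[:num_candidates]

    grey = torch.full((x.shape[0], rect_h, rect_w), 127.5 / 255.0)
    best_loss, best_pos = -float("inf"), (candidates[0][1], candidates[0][2])
    with torch.no_grad():
        for _, i, j in candidates:                        # exact loss only on the shortlist
            x_occ = x.detach().clone()
            x_occ[:, i:i + rect_h, j:j + rect_w] = grey
            l = loss_fn(model(x_occ.unsqueeze(0)), y.unsqueeze(0)).item()
            if l > best_loss:
                best_loss, best_pos = l, (i, j)
    return best_pos

def pgd_in_rectangle(model, loss_fn, x, y, pos, rect_h, rect_w, iters=30, alpha=8 / 255):
    # Step 2: maximize the loss with perturbations confined to the chosen rectangle.
    i, j = pos
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv.unsqueeze(0)), y.unsqueeze(0))
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv[:, i:i + rect_h, j:j + rect_w] += alpha * grad[:, i:i + rect_h, j:j + rect_w].sign()
            x_adv.clamp_(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv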
4.3 DEFENDING AGAINST ROA
Once we are able to compute the ROA attack, we apply the standard adversarial training approach for defense. We term the resulting classifiers robust to our abstract adversarial occlusion attacks Defense against Occlusion Attacks (DOA), and propose these as an alternative to conventional robust ML for defending against physical attacks. As we will see presently, this defense against ROA is quite adequate for our purposes.
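As an illustration, DOA training then amounts to a standard adversarial training loop over ROA examples. The sketch below reuses the roa_location and pgd_in_rectangle helpers sketched in Section 4.2 and assumes a data loader yielding image batches in [0, 1]; the rectangle size and optimizer settings stand in for the values reported in Section 5.

import torch

def doa_epoch(model, loader, optimizer, loss_fn, rect_h=70, rect_w=70):
    model.train()
    for xb, yb in loader:
        # replace every training image by its worst-case rectangular occlusion
        occluded = torch.stack([
            pgd_in_rectangle(model, loss_fn, x, y,
                             roa_location(model, loss_fn, x, y, rect_h, rect_w),
                             rect_h, rect_w)
            for x, y in zip(xb, yb)])
        optimizer.zero_grad()
        loss_fn(model(occluded), yb).backward()
        optimizer.step()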
5 EFFECTIVENESS OF DOA AGAINST PHYSICALLY REALIZABLE ATTACKS
We now evaluate the effectiveness of DOA—that is, adversarial training using the ROA threat model we introduced—against physically realizable attacks (see Appendix G for some examples that defeat conventional methods but not DOA). Recall that we consider only digital representations of the corresponding physical attacks. Consequently, we can view our results in this section as a lower bound on robustness to actual physical attacks, which have to deal with additional practical constraints, such as being robust to multiple viewpoints. In addition to the two physical attacks we previously considered, we also evaluate DOA against the adversarial patch attack, implemented on both face recognition and traffic sign data.
5.1 DOA AGAINST ADVERSARIAL EYEGLASSES
We consider two rectangle dimensions resulting in comparable area: 100 × 50 and 70 × 70, both in pixels. Thus, the rectangles occupy approximately 10% of the 224 × 224 face images. We used {30, 50} iterations of PGD with ε = 255/2 to generate adversarial noise inside the rectangle, with learning rates α = {8, 4}, respectively. For the gradient version of ROA, we choose C = 30. DOA adversarial training is performed for 5 epochs with a learning rate of 0.0001.
Figure 4 (left) presents the results comparing the effectiveness of DOA against the eyeglass frame attack on face recognition to adversarial training and randomized smoothing (we took the most robust variants of both of these). We can see that DOA yields significantly more robust classifiers for this domain. The gradient-based heuristic does come at some cost, with performance slightly worse
than when we use exhaustive search, but this performance drop is relatively small, and the result is still far better than conventional robust ML approaches. Figure 4 (middle and right) compares the performance of DOA between two rectangle variants with different dimensions. The key observation is that as long as we use enough iterations of PGD inside the rectangle, changing its dimensions (keeping the area roughly constant) appears to have minimal impact.
5.2 DOA AGAINST THE STOP SIGN ATTACK
We now repeat the evaluation with the traffic sign data and the stop sign attack. In this case, we used 10 × 5 and 7 × 7 rectangles covering ∼5% of the 32 × 32 images. We set C = 10 for the gradient-based ROA. Implementation of DOA is otherwise identical to the face recognition experiments above.
We present our results using the square rectangle, which in this case was significantly more effective; the results for the 10 × 5 rectangle DOA attacks are in Appendix F. Figure 5 (left) compares the effectiveness of DOA against the stop sign attack on traffic sign data with the best variants of adversarial training and randomized smoothing. Our results here are for 30 iterations of PGD; in Appendix F, we study the impact of varying the number of PGD iterations. We can observe that DOA is again significantly more robust, with robust accuracy over 90% for the exhaustive search variant, and ∼85% for the gradient-based variant, even for stronger attacks. Moreover, DOA remains 100% effective at classifying stop signs on clean data, and exhibits ∼95% accuracy on the full traffic sign classification task.
5.3 DOA AGAINST ADVERSARIAL PATCH ATTACKS
Finally, we evaluate DOA against the adversarial patch attacks. In these attacks, an adversarial patch (e.g., sticker) is designed to be placed on an object with the goal of inducing a target prediction. We study this in both face recognition and traffic sign classification tasks. Here, we present the results for face recognition; further detailed results on both datasets are provided in Appendix F.
As we can see from Figure 5 (right), adversarial patch attacks are quite effective once the attack region (fraction of the image) is 10% or higher, with adversarial training and randomized smoothing both performing rather poorly. In contrast, DOA remains highly robust even when the adversarial patch covers 20% of the image.
6 CONCLUSION
As we have shown, conventional methods for making deep learning approaches for image classification robust to physically realizable attacks tend to be relatively ineffective. In contrast, a new threat model we proposed, rectangular occlusion attacks (ROA), coupled with adversarial training, achieves high robustness against several prominent examples of physical attacks. While we explored a number of variations of ROA attacks as a means to achieve robustness against physical attacks, numerous questions remain. For example, can we develop effective methods to certify robustness against ROA, and are the resulting approaches as effective in practice as our method based on a combination of heuristically computed attacks and adversarial training? Are there other types of occlusions that are more effective? Answers to these and related questions may prove a promising
path towards practical robustness of deep learning when deployed for downstream applications of computer vision such as autonomous driving and face recognition.
ACKNOWLEDGMENTS
This work was partially supported by the NSF (IIS-1905558, IIS-1903207), ARO (W911NF-19-10241), and NVIDIA.
A DESCRIPTION OF DATASETS AND DEEP LEARNING CLASSIFIERS
A.1 FACE RECOGNITION
The VGGFace dataset3 (Parkhi et al., 2015) is a benchmark for face recognition, containing 2622 subjects with 2.6 million images in total. We chose ten subjects: A. J. Buckley, A. R. Rahman, Aamir Khan, Aaron Staton, Aaron Tveit, Aaron Yoo, Abbie Cornish, Abel Ferrara, Abigail Breslin, and Abigail Spencer, and subselected face images pertaining only to these individuals. Since approximately half of the images cannot be downloaded, our final dataset contains 300-500 images for each subject.
We used the standard crop-and-resize method to process the data to be 224 × 224 pixels, and split the dataset into training, validation, and test sets according to a 7:2:1 ratio for each subject. In total, the data set has 3178 images in the training set, 922 images in the validation set, and 470 images in the test set.
We use the VGGFace convolutional neural network (Parkhi et al., 2015) model, a variant of the VGG16 model containing 5 convolutional layer blocks and 3 fully connected layers. We make use of standard transfer learning as we only classify 10 subjects, keeping the convolutional layers the same as in the VGGFace structure,4 but changing the fully connected layers to be 1024 → 1024 → 10 instead of 4096 → 4096 → 2622. Specifically, in our PyTorch implementation, we convert the images from RGB to BGR channel order and subtract the mean value [129.1863, 104.7624, 93.5940] in order to use the pretrained weights from VGG-Face on the convolutional layers. We set the batch size to 64 and use the PyTorch built-in Adam optimizer with an initial learning rate of 10−4 and default parameters.5 We drop the learning rate by 0.1 every 10 epochs. Additionally, we used validation set accuracy to keep track of model performance and to choose a model in case of overfitting. After 30 epochs of training, the model obtains 98.94% accuracy on the test data.
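For reference, a minimal PyTorch sketch of this transfer-learning setup is shown below. The preprocessing constants and the 1024 → 1024 → 10 head follow the description above; the flattened feature size of 512 · 7 · 7, the placement of ReLU activations, and the vggface_conv_layers() loader are our assumptions for illustration.

import torch
import torch.nn as nn

VGGFACE_MEAN_BGR = torch.tensor([129.1863, 104.7624, 93.5940]).view(1, 3, 1, 1)

def preprocess(x_rgb):
    # convert RGB images (0-255 range) to BGR and subtract the VGG-Face channel means
    x_bgr = x_rgb[:, [2, 1, 0], :, :]
    return x_bgr - VGGFACE_MEAN_BGR

class FaceClassifier(nn.Module):
    def __init__(self, conv_trunk, num_classes=10):
        super().__init__()
        self.conv = conv_trunk                 # pretrained VGG-Face convolutional blocks
        self.head = nn.Sequential(             # replaces the original 4096 -> 4096 -> 2622 head
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, num_classes))

    def forward(self, x):
        return self.head(self.conv(preprocess(x)))

# model = FaceClassifier(vggface_conv_layers())
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)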
A.2 TRAFFIC SIGN CLASSIFICATION
To be consistent with (Eykholt et al., 2018), we select the subset of LISA which contains 47 different U.S. traffic signs (Møgelmose et al., 2012). To alleviate the problem of imbalance and extremely blurry data, we picked the 16 best-quality signs with 3509 training and 1148 validation data points. From the validation data, we obtain the test data that includes only 40 stop signs to evaluate performance with respect to the stop sign attack, as done by Eykholt et al. (2018). In the main body of the paper, we present results only on this test data to evaluate robustness to stop sign attacks. In the appendix below, we also include performance on the full validation set without adversarial manipulation.
All the data was processed by standard crop-and-resize to 32 × 32 pixels. We use the LISA-CNN architecture defined in (Eykholt et al., 2018), and construct a convolutional neural network containing three convolutional layers and one fully connected layer. We use the Adam optimizer with an initial learning rate of 10−1 and default parameters,5 dropping the learning rate by 0.1 every 10 epochs. We set the batch size to 128. After 30 epochs, we achieve 98.69% accuracy on the validation set, and 100% accuracy in identifying the stop signs in our test data.
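The sketch below gives a rough PyTorch rendition of such a LISA-CNN-style network (three convolutional layers followed by a single fully connected layer over 32 × 32 inputs and 16 classes). The specific filter counts, kernel sizes, and strides are illustrative guesses; see Eykholt et al. (2018) for the exact architecture.

import torch.nn as nn

class LisaCNN(nn.Module):
    def __init__(self, num_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=8, stride=2, padding=3), nn.ReLU(),    # 32x32 -> 16x16
            nn.Conv2d(64, 128, kernel_size=6, stride=2, padding=2), nn.ReLU(),  # 16x16 -> 8x8
            nn.Conv2d(128, 128, kernel_size=5, stride=1, padding=2), nn.ReLU()) # 8x8 -> 8x8
        self.classifier = nn.Linear(128 * 8 * 8, num_classes)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))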
3 http://www.robots.ox.ac.uk/~vgg/data/vgg_face/.
4 External code that we use for transferring VGG-Face to the PyTorch framework is available at https://github.com/prlz77/vgg-face.pytorch.
5 Default PyTorch Adam parameters stand for β1 = 0.9, β2 = 0.999, and ε = 10−8.
B EFFECTIVENESS OF CONVENTIONAL ROBUST ML METHODS AGAINST l∞ AND l2 ATTACKS
In this appendix, we show that adversarial training and randomized smoothing degrade more gracefully when faced with attacks that they are designed for. In particular, we consider here variants of projected gradient descent (PGD) for both the l∞ and l2 attacks (Madry et al., 2018). The form of PGD for the l∞ attack is
x_{t+1} = Proj(x_t + α · sgn(∇L(x_t; θ))),
where Proj is a projection operator which clips the result to be feasible, x_t the adversarial example in iteration t, α the learning rate, and L(·) the loss function. In the case of an l2 attack, PGD becomes
x_{t+1} = Proj( x_t + α · ∇L(x_t; θ) / ‖∇L(x_t; θ)‖2 ),
where the projection operator normalizes the perturbation δ = x_{t+1} − x_t to have ‖δ‖2 ≤ ε if it doesn't already (Kolter & Madry, 2019).
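A minimal PyTorch-style sketch of both PGD variants is given below; epsilon, alpha, and the iteration count are the attack hyperparameters, and pixel values are assumed to lie in [0, 1]. This is an illustration of the update rules above, not the exact attack code used in our experiments.

import torch

def pgd(model, loss_fn, x, y, epsilon, alpha, iters, norm="linf"):
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            if norm == "linf":
                x_adv = x_adv + alpha * grad.sign()
                x_adv = x.detach() + (x_adv - x).clamp(-epsilon, epsilon)      # project onto the l_inf ball
            else:  # l2
                g = grad / grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
                delta = x_adv + alpha * g - x
                norms = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
                x_adv = x.detach() + delta * (epsilon / norms).clamp(max=1.0)  # project onto the l_2 ball
            x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv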
The experiments were done on the face recognition and traffic sign datasets, but unlike physical attacks on stop signs, we now consider adversarial perturbations to all sign images.
B.1 FACE RECOGNITION
We begin with our results on the face recognition dataset. Tables 1 and 2 present results for (curriculum) adversarial training for varying ε of the l∞ attacks, separately for training and evaluation. As we can see, curriculum adversarial training with ε = 16 is generally the most robust, and remains reasonably effective for relatively large perturbations. However, we do observe a clear tradeoff between accuracy on non-adversarial data and robustness, as one would expect.
Table 3 presents the results of using randomized smoothing on face recognition data, when facing the l2 attacks. Again, we observe a high level of robustness and, in most cases, relatively limited drop in performance, with σ = 0.5 perhaps striking the best balance.
B.2 TRAFFIC SIGN CLASSIFICATION
Tables 4 and 5 present evaluation on traffic sign data for curriculum adversarial training against the l∞ attack for varying ε. As with face recognition data, we can observe that the approaches tend to be relatively robust, and effective on non-adversarial data for adversarial training methods using ε < 32.
The results of randomized smoothing on traffic sign data are given in Table 6. Since images are smaller here than in VGGFace, lower values of ε for the l2 attacks are meaningful, and for ε ≤ 1 we
generally see robust performance on randomized smoothing, with σ = 0.5 providing a good balance between non-adversarial accuracy and robustness, just as before.
C EXAMPLES OF ROA
Figure 6 provides several examples of the ROA attack in the context of face recognition. Note that in these examples, the adversary chooses to place the occlusion and its adversarial noise on the upper lip and eye areas of the image, and, indeed, this makes the face more challenging to recognize even for a human observer.
D DETAILED DESCRIPTION OF THE ALGORITHMS FOR COMPUTING THE RECTANGULAR OCCLUSION ATTACKS
Our basic algorithm for computing rectangular occlusion attacks (ROA) proceeds through the following two steps:
1. Iterate through possible positions for the rectangle’s upper left-hand corner point in the image. Find the position for a grey rectangle (RGB value =[127.5,127.5,127.5]) in the image that maximizes loss.
2. Generate high-ε l∞ noise inside the rectangle at the position computed in step 1.
Algorithm 1 presents the full algorithm for identifying the ROA position, which amounts to exhaustive search through the image pixel region. This algorithm has several parameters. First, we assume that images are square with dimensions N × N. Second, we introduce a stride parameter S. The purpose of this parameter is to make location computation faster by only considering every Sth pixel during the search (in other words, we skip S pixels each time). For our implementation of ROA attacks, we choose the stride parameter S = 5 for face recognition and S = 2 for traffic sign classification.
Algorithm 1 Computation of ROA position using exhaustive search.
Input: Data: Xi, yi; Test data shape: N × N; Target model parameters: θ; Stride: S
Output: ROA position: (j′, k′)
1. function ExhaustiveSearching(Model, Xi, yi, N, S)
2.   for j in range(N/S) do:
3.     for k in range(N/S) do:
4.       Generate the adversarial image X_i^adv by
5.         placing a grey rectangle onto the image with top-left corner at (j × S, k × S);
6.       if L(X_i^adv, yi, θ) is higher than the previous best loss:
7.         Update (j′, k′) = (j, k)
8.     end for
9.   end for
10. return (j′, k′)
Algorithm 2 Computation of ROA position using gradient-based search.
Input: Data: Xi, yi; Test data shape: N × N; Target model parameters: θ; Stride: S; Number of potential candidates: C
Output: Best sticker position: (j′, k′)
1. function GradientBasedSearch(Xi, yi, N, S, C, θ)
2.   Calculate the gradient ∇L of Loss(Xi, yi, θ) w.r.t. Xi
3.   J, K = HelperSearching(∇L, N, S, C)
4.   for j, k in J, K do:
5.     Generate the adversarial image X_i^adv by
6.       putting the sticker on the image with top-left corner at (j × S, k × S);
7.     if Loss(X_i^adv, yi, θ) is higher than the previous best loss:
8.       Update (j′, k′) = (j, k)
9.   end for
10. return (j′, k′)

1. function HelperSearching(∇L, N, S, C)
2.   for j in range(N/S) do:
3.     for k in range(N/S) do:
4.       Calculate the sensitivity value L = Σ_{i ∈ rectangle} (∇L_i)² for the rectangle with top-left corner at (j × S, k × S);
5.       if the sensitivity value L is in the top C of previous values:
6.         Put (j, k) into J, K and discard the (j_s, k_s) with the lowest L
7.     end for
8.   end for
9.   return J, K
Despite introducing the tunable stride parameter, the search for the best location for ROA still entails a large number of loss function evaluations, which are somewhat costly (since each such evaluation means a full forward pass through the deep neural network), and these costs add up quickly. To speed things up, we consider using the magnitude of the gradient of the loss as a measure of sensitivity of particular regions to manipulation. Specifically, suppose that we compute a gradient ∇L, and let ∇Li be the gradient value for a particular pixel i in the image. Now, we can iterate over the possible ROA locations, but for each location compute the gradient of the loss at that location corresponding to the rectangular region. We do this by adding squared gradient values (∇Li)2 over pixels i in the rectangle. We use this approach to find the top C candidate locations for the rectangle. Finally, we consider each of these, computing the actual loss for each location, to find the position of ROA. The full algorithm is provided as Algorithm 2.
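As a further aside, the per-position sensitivity sums can also be computed for all positions at once with a summed-area (integral) table over the squared gradient, so that each rectangle sum costs only four table lookups. The sketch below is our illustration of this idea, not necessarily how the search is implemented in our code.

import torch

def rectangle_sensitivity(sq_grad, rect_h, rect_w):
    # sq_grad: (H, W) tensor of squared gradient magnitudes per pixel.
    # Returns a (H - rect_h + 1, W - rect_w + 1) tensor of rectangle sums.
    H, W = sq_grad.shape
    sat = torch.zeros(H + 1, W + 1)
    sat[1:, 1:] = sq_grad.cumsum(0).cumsum(1)             # summed-area table
    return (sat[rect_h:, rect_w:] - sat[rect_h:, :-rect_w]
            - sat[:-rect_h, rect_w:] + sat[:-rect_h, :-rect_w])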
Once we've found the place for the rectangle, our next step is to introduce adversarial noise inside it. For this, we use the l∞ version of the PGD attack, restricting perturbations to the rectangle. We used {7, 20, 30, 50} iterations of PGD to generate adversarial noise inside the rectangle, with learning rates α = {32, 16, 8, 4}, respectively.
Figure 8 offers a visual illustration of how gradient-based search compares to exhaustive search for computing ROA.
E DETAILS OF PHYSICALLY REALIZABLE ATTACKS
Physically realizable attacks that we study have a common feature: first, they specify a mask, which is typically precomputed, and subsequently introduce adversarial noise inside the mask area. Let M denote the mask matrix constraining the area of the perturbation δ; M has the same dimensions as the input image and contains 0s where no perturbation is allowed, and 1s in the area which can be perturbed. The physically realizable attacks we consider then solve an optimization problem of the following form:
argmax_δ L(f(x + Mδ; θ), y). (4)
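A minimal sketch of this common structure is given below: gradient ascent on the loss with the perturbation confined to the mask M. The step size and iteration count are placeholders; the concrete attacks below each use their own schedules and initializations.

import torch

def masked_attack(model, loss_fn, x, y, mask, step=0.05, iters=300):
    # mask: 0/1 tensor with the same shape as x (1 = pixel may be perturbed)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = loss_fn(model((x + mask * delta).clamp(0.0, 1.0)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += step * grad                           # ascend the loss; the gradient is zero outside the mask
    return (x + mask * delta).clamp(0.0, 1.0).detach()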
Next, we describe the details of the three physical attacks we consider in the main paper.
E.1 EYEGLASS FRAME ATTACKS ON FACE RECOGNITION
Following Sharif et al. (2016), we first initialized the eyeglass frame with 5 different colors, and chose the best starting color by calculating the cross-entropy loss. For each update step, we divided the gradient value by its maximum value before multiplying by the learning rate, which is 20. Then we only kept the gradient values in the eyeglass frame area. Finally, we clipped and rounded the pixel values to keep them in the valid range.
E.2 STICKER ATTACKS ON STOP SIGNS
Following Eykholt et al. (2018), we initialized the stickers on the stop signs with random noise. For each update step, we used the Adam optimizer with a learning rate of 0.1 and default parameters. Just as for other attacks, adversarial perturbations were restricted to the exogenously specified mask area; in our case, we used the same mask as Eykholt et al. (2018)—a collection of small rectangles.
E.3 ADVERSARIAL PATCH ATTACK
We used gradient ascent to maximize the log probability of the targeted class P [ytarget|x], as in the original paper (Brown et al., 2018). When implementing the adversarial patch, we used a square patch rather than the circular patch in the original paper; we don’t anticipate this choice to be practically consequential. We randomly chose the position and direction of the patch, used the learning rate of 5, and fixed the number of attack iterations to 100 for each image. We varied the attack region (mask) R ∈ {0%, 5%, 10%, 15%, 20%, 25%}. For the face recognition dataset, we used 27 images (9 classes (without targeted class) × 3 images in each class) to design the patch, and then ran the attack over 20 epochs. For the smaller traffic sign dataset, we used 15 images (15 classes (without targeted class) × 1 image in each class) to design the patch, and then ran the attack over 5 epochs. Note that when evaluating the adversarial patch, we used the validation set without the targeted class images.
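A rough sketch of such a targeted square-patch attack is shown below: one patch is optimized over several images and random placements so as to raise the log-probability of the target class. The hyperparameter values in the sketch are illustrative, not the exact settings above, and F.pad is simply used to place the patch at a random location.

import torch
import torch.nn.functional as F

def train_patch(model, images, target, patch_size, epochs=20, step=5 / 255):
    # images: tensor (N, 3, H, H) of attacker-chosen training images; target: int class index
    H = images.shape[-1]
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    for _ in range(epochs):
        for x in images:
            i = int(torch.randint(0, H - patch_size + 1, (1,)))
            j = int(torch.randint(0, H - patch_size + 1, (1,)))
            pad = (j, H - j - patch_size, i, H - i - patch_size)   # (left, right, top, bottom)
            canvas = F.pad(patch, pad)                             # patch placed on an all-zero image
            mask = F.pad(torch.ones(3, patch_size, patch_size), pad)
            x_patched = x * (1 - mask) + canvas
            log_prob = F.log_softmax(model(x_patched.unsqueeze(0)), dim=1)[0, target]
            grad = torch.autograd.grad(log_prob, patch)[0]
            with torch.no_grad():
                patch += step * grad                               # gradient ascent on the target log-probability
                patch.clamp_(0.0, 1.0)
    return patch.detach()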
F ADDITIONAL EXPERIMENTS WITH DOA
F.1 FACE RECOGNITION AND EYEGLASS FRAME ATTACK
F.2 TRAFFIC SIGN CLASSIFICATION AND THE STOP SIGN ATTACK
F.3 EVALUATION WITH THE ADVERSARIAL PATCH ATTACK
F.3.1 FACE RECOGNITION
F.3.2 TRAFFIC SIGN CLASSIFICATION
G EXAMPLES OF PHYSICALLY REALIZABLE ATTACK AGAINST ALL DEFENSE MODELS
G.1 FACE RECOGNITION
G.2 TRAFFIC SIGN CLASSIFICATION
H EFFECTIVENESS OF DOA METHODS AGAINST l∞ ATTACKS
For completeness, this section includes evaluation of DOA in the context of l∞-bounded attacks implemented using PGD, though these are outside the scope of our threat model.
Table 23 presents results of several variants of DOA in the context of PGD attacks in the context of face recognition, while Table 24 considers these in traffic sign classification. The results are quite consistent with intuition: DOA is largely unhelpful against these attacks. The reason is that DOA fundamentally assumes that the attacker only modifies a relatively small proportion (∼5%) of the scene (and the resulting image), as otherwise the physical attack would be highly suspicious. l∞ bounded attacks, on the other hand, modify all pixels.
I EFFECTIVENESS OF DOA METHODS AGAINST l0 ATTACKS
In addition to considering physical attacks, we evaluate the effectiveness of DOA against Jacobian-based saliency map attacks (JSMA) (Papernot et al., 2015) for implementing l0-constrained adversarial examples. As Figure 13 shows, in both face recognition and traffic sign classification, DOA is able to improve classification robustness compared to the original model.
J EFFECTIVENESS OF DOA AGAINST OTHER MASK-BASED ATTACKS
To further illustrate the ability of DOA to generalize, we evaluate its effectiveness in the context of three additional occlusion patterns: a union of triangles and circle, a single larger triangle, and a heart pattern.
As the results in Figures 14 and 15 suggest, DOA is able to generalize successfully to a variety of physical attack patterns. It is particularly noteworthy that the larger patterns (large triangle—middle of the figure, and large heart—right of the figure) are actually quite suspicious (particularly the heart pattern), as they occupy a significant fraction of the image (the heart mask, for example, accounts for 8% of the face).
(Figures 14 and 15: the three occlusion masks Mask1, Mask2, and Mask3, shown on a face image of Abbie Cornish and on a Keep Right traffic sign.) | 1. What is the main contribution of the paper regarding image classification defense?
2. What are the strengths of the proposed approach, particularly in its improved robustness?
3. Do you have any concerns or suggestions regarding the training procedure or the discussion of related works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or curiosities regarding the effectiveness of the proposed method against other types of realistic perturbations? | Review | Review
The paper aims to provide a defense against physically realizable attacks for image classifiers. First, the paper demonstrates that Lp-ball robustness (obtained via adversarial training or randomized smoothing) does not necessarily result in robustness to physical attacks such as adversarial stickers. Next, the paper proposes a variant of adversarial training (using adversarial rectangles) and shows empirically that such a method results in much improved robustness to the aforementioned physical attacks.
I vote to accept this paper. It has clear motivation, is very clearly written, evaluates proper benchmarks from recent literature, and proposes a new method that shows a clear improvement over benchmarks in literature. The goal is clearly laid out in Section 3, and the paper justifies its claims via various experiments. Overall, I think it is a good contribution to the adversarial examples literature, as it provides robustness against more “real-world” attacks.
In particular, the results from Fig. 2 and Fig. 3, while not surprising, appear to be done thoroughly, as the authors evaluate against various forms of adversarial training and various degrees of randomized smoothing. Later, the results in Fig. 4 and Fig. 5 show a clear benefit from their method.
I do have a few pieces of feedback that I think could improve the work.
In section 5.1, do you really only train for 5 epochs? Is this just a fine-tuning procedure after standard training is performed, or is the entire training procedure 5 epochs? That seems especially short for getting any amount of accuracy. I think it is worth clarifying.
In Section 3 (or in the Related Works), I would suggest that the authors do mention related works about other types of realistic adversarial examples or perturbations, such as those generated via physical transformations like translations and rotations [1] or even common corruption robustness [2]. This is especially relevant since the authors claim to be the “first investigation of this issue in computer vision applications,” so it’s worth clarifying that the authors do not claim to be the first work about robustness to all realistic perturbations.
One thing I would be curious to see is if the DOA training method provides any robustness to standard Lp-ball adversarial examples or to perturbations like rotations.
Additional Feedback:
- In Related Works, I think there are some places where it would look better to use parentheses around the citations.
[1] https://arxiv.org/abs/1712.02779
[2] https://arxiv.org/abs/1903.12261 |
ICLR | Title
Defending Against Physically Realizable Attacks on Image Classification
Abstract
We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit limited effectiveness against three of the highest profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks. 1
1 INTRODUCTION
State-of-the-art effectiveness of deep neural networks has made it the technique of choice in a variety of fields, including computer vision (He et al., 2016), natural language processing (Sutskever et al., 2014), and speech recognition (Hinton et al., 2012). However, there have been a myriad of demonstrations showing that deep neural networks can be easily fooled by carefully perturbing pixels in an image through what have become known as adversarial example attacks (Szegedy et al., 2014; Goodfellow et al., 2015; Carlini & Wagner, 2017b; Vorobeychik & Kantarcioglu, 2018). In response, a large literature has emerged on defending deep neural networks against adversarial examples, typically either proposing techniques for learning more robust neural network models (Wong & Kolter, 2018; Wong et al., 2018; Raghunathan et al., 2018b; Cohen et al., 2019; Madry et al., 2018), or by detecting adversarial inputs (Metzen et al., 2017; Xu et al., 2018).
Particularly concerning, however, have been a number of demonstrations that implement adversarial perturbations directly in physical objects that are subsequently captured by a camera, and then fed through the deep neural network classifier (Boloor et al., 2019; Eykholt et al., 2018; Athalye et al., 2018b; Brown et al., 2018; Bhattad et al., 2020). Among the most significant of such physical attacks on deep neural networks are three that we specifically consider here: 1) the attack which fools face recognition by using adversarially designed eyeglass frames (Sharif et al., 2016), 2) the attack which fools stop sign classification by adding adversarially crafted stickers (Eykholt et al., 2018), and 3) the universal adversarial patch attack, which causes targeted misclassification of any object with the adversarially designed sticker (patch) (Brown et al., 2018). Oddly, while considerable attention has been devoted to defending against adversarial perturbation attacks in the digital space, there are no effective methods specifically to defend against such physical attacks.
Our first contribution is an empirical evaluation of the effectiveness of conventional approaches to robust ML against two physically realizable attacks: the eyeglass frame attack on face recognition (Sharif et al., 2016) and the sticker attack on stop signs (Eykholt et al., 2018). Specifically, we study the performance on adversarial training and randomized smoothing against these attacks, and show that both have limited effectiveness in this context (quite ineffective in some settings, and somewhat more effective, but still not highly robust, in others), despite showing moderate effectiveness against l∞ and l2 attacks, respectively.
1The code can be found at https://github.com/tongwu2020/phattacks
Our second contribution is a novel abstract attack model which more directly captures the nature of common physically realizable attacks than the conventional lp-based models. Specifically, we consider a simple class of rectangular occlusion attacks in which the attacker places a rectangular sticker onto an image, with both the location and the content of the sticker adversarially chosen. We develop several algorithms for computing such adversarial occlusions, and use adversarial training to obtain neural network models that are robust to these. We then experimentally demonstrate that our proposed approach is significantly more robust against physical attacks on deep neural networks than adversarial training and randomized smoothing methods that leverage lp-based attack models.
Related Work While many approaches for defending deep learning in vision applications have been proposed, robust learning methods have been particularly promising, since alternatives are often defeated soon after being proposed (Madry et al., 2018; Raghunathan et al., 2018a; Wong & Kolter, 2018; Vorobeychik & Kantarcioglu, 2018). The standard solution approach for this problem is an adaptation of Stochastic Gradient Descent (SGD) where gradients are either with respect to the loss at the optimal adversarial perturbation for each i (or approximation thereof, such as using heuristic local search (Goodfellow et al., 2015; Madry et al., 2018) or a convex over-approximation (Raghunathan et al., 2018b; Wang et al., 2018)), or with respect to the dual of the convex relaxation of the attacker maximization problem (Raghunathan et al., 2018a; Wong & Kolter, 2018; Wong et al., 2018). Despite these advances, adversarial training a la Madry et al. (2018) remains the most practically effective method for hardening neural networks against adversarial examples with l∞-norm perturbation constraints. Recently, randomized smoothing emerged as another class of techniques for obtaining robustness (Lecuyer et al., 2019; Cohen et al., 2019), with the strongest results in the context of l2-norm attacks. In addition to training neural networks that are robust by construction, a number of methods study the problem of detecting adversarial examples (Metzen et al., 2017; Xu et al., 2018), with mixed results (Carlini & Wagner, 2017a). Of particular interest is recent work on detecting physical adversarial examples (Chou et al., 2018). However, detection is inherently weaker than robustness, which is our goal, as even perfect detection does not resolve the question of how to make decisions on adversarial examples. Finally, our work is in the spirit of other recent efforts that characterize robustness of neural networks to physically realistic perturbations, such as translations, rotations, blurring, and contrast (Engstrom et al., 2019; Hendrycks & Dietterich, 2019).
2 BACKGROUND
2.1 ADVERSARIAL EXAMPLES IN THE DIGITAL AND PHYSICAL WORLD
Adversarial examples involve modifications of input images that are either invisible to humans, or unsuspicious, and that cause systematic misclassification by state-of-the-art neural networks (Szegedy et al., 2014; Goodfellow et al., 2015; Vorobeychik & Kantarcioglu, 2018). Commonly, approaches for generating adversarial examples aim to solve an optimization problem of the following form:
argmax_δ L(f(x + δ; θ), y)   s.t.   ‖δ‖p ≤ ε, (1)
where x is the original input image, δ is the adversarial perturbation, L(·) is the adversary’s utility function (for example, the adversary may wish to maximize the cross-entropy loss), and ‖ · ‖p is some lp norm. While a host of such digital attacks have been proposed, two have come to be viewed as state of the art: the attack developed by Carlini & Wagner (2017b), and the projected gradient descent attack (PGD) by Madry et al. (2018).
While most of the work to date has been on attacks which modify the digital image directly, we focus on a class of physical attacks which entail modifying the actual object being photographed in order to fool the neural network that subsequently takes its digital representation as input. The attacks we will focus on will have three characteristics:
1. The attack can be implemented in the physical space (e.g., modifying the stop sign); 2. the attack has low suspiciousness; this is operationalized by modifying only a small part of
the object, with the modification similar to common “noise” that obtains in the real world; for example, stickers on a stop sign would appear to most people as vandalism, but covering the stop sign with a printed poster would look highly suspicious; and
3. the attack causes misclassification by state-of-the-art deep neural network.
Since our ultimate purpose is defense, we will not concern ourselves with the issue of actually implementing the physical attacks. Instead, we will consider the digital representation of these attacks, ignoring other important issues, such as robustness to many viewpoints and printability. For example, in the case where the attack involves posting stickers on a stop sign, we will only be concerned with simulating such stickers on digital images of stop signs. For this reason, we refer to such attacks physically realizable attacks, to allude to the fact that it is possible to realize them in practice. It is evident that physically realizable attacks represent a somewhat stronger adversarial model than their actual implementation in the physical space. Henceforth, for simplicity, we will use the terms physical attacks and physically realizable attacks interchangeably.
We consider three physically realizable attacks. The first is the attack on face recognition by Sharif et al. (2016), in which the attacker adds adversarial noise inside printed eyeglass frames that can subsequently be put on to fool the deep neural network (Figure 1a). The second attack posts adversarially crafted stickers on a stop sign to cause it to be misclassified as another road sign, such as the speed limit sign (Figure 1b) (Eykholt et al., 2018). The third, adversarial patch, attack designs a patch (a sticker) with adversarial noise that can be placed onto an arbitrary object, causing that object to be misclassified by a deep neural network (Brown et al., 2018).
2.2 ADVERSARIALLY ROBUST DEEP LEARNING
While numerous approaches have been proposed for making deep learning robust, many are heuristic and have soon after been defeated by more sophisticated attacks (Carlini & Wagner, 2017b; He et al., 2017; Carlini & Wagner, 2017a; Athalye et al., 2018a). Consequently, we focus on principled approaches for defense that have not been broken. These fall broadly into two categories: robust learning and randomized smoothing. We focus on a state-of-the-art representative from each class.
Robust Learning The goal of robust learning is to minimize a robust loss, defined as follows:
θ∗ = argmin_θ E_{(x,y)∼D} [ max_{‖δ‖p ≤ ε} L(f(x + δ; θ), y) ], (2)
where D denotes the training data set. In itself this is a highly intractable problem. Several techniques have been developed to obtain approximate solutions. Among the most effective in practice is the adversarial training approach by Madry et al. (2018), who use the PGD attack as an approximation to the inner optimization problem, and then take gradient descent steps with respect to the associated adversarial inputs. In addition, we consider a modified version of this approach termed curriculum adversarial training (Cai et al., 2018). Our implementation of this approach proceeds as follows: first, apply adversarial training for a small , then increase and repeat adversarial training, and so on, increasing until we reach the desired level of adversarial noise we wish to be robust to.
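A minimal sketch of one epoch of this scheme is shown below; pgd_attack stands for an l∞ PGD routine of the kind just described (its signature here is our assumption), and the ε schedule noted in the comment corresponds to the curriculum variant.

import torch

def adversarial_training_epoch(model, loader, optimizer, loss_fn, pgd_attack,
                               epsilon=8 / 255, alpha=2 / 255, attack_iters=7):
    model.train()
    for xb, yb in loader:
        xb_adv = pgd_attack(model, loss_fn, xb, yb, epsilon, alpha, attack_iters)
        optimizer.zero_grad()
        loss_fn(model(xb_adv), yb).backward()              # gradient step on the adversarial batch
        optimizer.step()

# Curriculum adversarial training (Cai et al., 2018) repeats this loop while
# progressively increasing epsilon, e.g. 4/255, 8/255, 16/255, 32/255.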
Randomized Smoothing The second class of techniques we consider works by adding noise to inputs at both training and prediction time. The key idea is to construct a smoothed classifier g(·) from a base classifier f(·) by perturbing the input x with isotropic Gaussian noise with variance σ. The prediction is then made by choosing a class with the highest probability measure with respect to the induced distribution of f(·) decisions:
g(x) = argmax_c P(f(x + σ) = c),   σ ∼ N(0, σ²I). (3)
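In practice the smoothed prediction is estimated by Monte Carlo sampling, as in the minimal sketch below (a majority vote over noisy copies of the input); the sample counts are illustrative, and certified prediction requires the more careful test of Cohen et al. (2019).

import torch

def smoothed_predict(model, x, sigma, num_classes, num_samples=1000, batch_size=100):
    # x: a single image tensor (C, H, W); returns the majority-vote class index
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        remaining = num_samples
        while remaining > 0:
            n = min(batch_size, remaining)
            noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)   # x + N(0, sigma^2 I)
            preds = model(noisy).argmax(dim=1)
            counts += torch.bincount(preds, minlength=num_classes)
            remaining -= n
    return int(counts.argmax())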
To achieve provably robust classification in this manner one typically trains the classifier f(·) by adding Gaussian noise to inputs at training time (Lecuyer et al., 2019; Cohen et al., 2019).
3 ROBUSTNESS OF CONVENTIONAL ROBUST ML METHODS AGAINST PHYSICAL ATTACKS
Most of the approaches for endowing deep learning with adversarial robustness focus on adversarial models in which the attacker introduces lp-bounded adversarial perturbations over the entire input. Earlier we described two representative approaches in this vein: adversarial training, commonly focused on robustness against l∞ attacks, and randomized smoothing, which is most effective against l2 attacks (although certification bounds can be extended to other lp norms as well). We call these methods conventional robust ML.
In this section, we ask the following question:
Are conventional robust ML methods robust against physically realizable attacks?
This is similar to the question asked in the context of malware classifier evasion by Tong et al. (2019), who found that lp-based robust ML methods can indeed be successful in achieving robustness against realizable evasion attacks. Ours is the first investigation of this issue in computer vision applications and for deep neural networks, where attacks involve adversarial masking of objects.2
We study this issue experimentally by considering two state-of-the-art approaches for robust ML: adversarial training a-la-Madry et al. (2018), along with its curriculum learning variation (Cai et al., 2018), and randomized smoothing, using the implementation by Cohen et al. (2019). These approaches are applied to defend against two physically realizable attacks described in Section 2.1: an attack on face recognition which adds adversarial eyeglass frames to faces (Sharif et al., 2016), and an attack on stop sign classification which adds adversarial stickers to a stop sign to cause misclassification (Eykholt et al., 2018).
We consider several variations of adversarial training, as a function of the l∞ bound, ε, imposed on the adversary. Just as in Madry et al. (2018), adversarial instances in adversarial training were generated using PGD. We consider attacks with ε ∈ {4, 8} (adversarial training failed to make progress when we used ε = 16). For curriculum adversarial training, we first performed adversarial training with ε = 4, then doubled ε to 8 and repeated adversarial training with the model robust to ε = 4, then doubled ε again, and so on. In the end, we learned models for ε ∈ {4, 8, 16, 32}. For all versions of adversarial training, we consider 7 and 50 iterations of the PGD attack. We used a learning rate of ε/4 for the former and 1 for the latter. In all cases, pixels are in the 0–255 range and retraining was performed for 30 epochs using the ADAM optimizer.
For randomized smoothing, we consider noise levels σ ∈ {0.25, 0.5, 1} as in Cohen et al. (2019), and take 1,000 Monte Carlo samples at test time.
3.1 ADVERSARIAL EYEGLASSES IN FACE RECOGNITION
We applied white-box dodging (untargeted) attacks on the face recognition systems (FRS) from Sharif et al. (2016). We used both the VGGFace data and transferred VGGFace CNN model for the face recognition task, subselecting 10 individuals, with 300-500 face images for each. Further details about the dataset, CNN architecture, and training procedure are in Appendix A. For the attack, we used identical frames as in Sharif et al. (2016) occupying 6.5% of the pixels. Just as Sharif et al. (2016), we compute attacks (that is, adversarial perturbations inside the eyeglass frame area) by using the learning rate 20 as well as momentum value 0.4, and vary the number of attack iterations between 0 (no attack) and 300.
Figure 2 presents the results of classifiers obtained from adversarial training (left) as well as curriculum adversarial training (middle), in terms of accuracy (after the attack) as a function of the number of iterations of the Sharif et al. (2016) eyeglass frame attack. First, it is clear that none of the variations of adversarial training are particularly effective once the number of physical attack iterations is
2Several related efforts study robustness of deep neural networks to other variants of physically realistic perturbations (Engstrom et al., 2019; Hendrycks & Dietterich, 2019).
above 20. The best performance in terms of adversarial robustness is achieved by adversarial training with ε = 8, for approaches using either 7 or 50 PGD iterations (the difference between these appears negligible). However, non-adversarial accuracy for these models is below 70%, a ∼20% drop in accuracy compared to the original model. Moreover, adversarial accuracy is under 40% for sufficiently strong physical attacks. Curriculum adversarial training generally achieves significantly higher non-adversarial accuracy, but is far less robust, even when trained with PGD attacks that use ε = 32.
Figure 2 (right) shows the performance of randomized smoothing when faced with the eyeglass frames attack. It is readily apparent that randomized smoothing is ineffective at deflecting this physical attack: even as we vary the amount of noise we add, accuracy after attacks is below 20% even for relatively weak attacks, and often drops to nearly 0 for sufficiently strong attacks.
3.2 ADVERSARIAL STICKERS ON STOP SIGNS
Following Eykholt et al. (2018), we use the LISA traffic sign dataset for our experiments, and 40 stop signs from this dataset as our test data and perform untargeted attacks (this is in contrast to the original work, which is focused on targeted attacks). For the detailed description of the data and the CNN used for traffic sign prediction, see Appendix A. We apply the same settings as in the original attacks and use ADAM optimizer with the same parameters. Since we observed few differences in performance between running PGD for 7 vs. 50 iterations, adversarial training methods in this section all use 7 iterations of PGD.
Again, we begin by considering adversarial training (Figure 3, left and middle). In this case, both the original and curriculum versions of adversarial training with PGD are ineffective when ε = 32 (error rates on clean data are above 90%); these are consequently omitted from the plots. Curriculum adversarial training with ε = 16 has the best performance on adversarial data, and works well on clean data. Surprisingly, most variants of adversarial training perform at best marginally better than the original model against the stop sign attack. Even the best variant has relatively poor performance, with robust accuracy under 50% for stronger attacks.
Figure 3 (right) presents the results for randomized smoothing. In this set of experiments, we found that randomized smoothing performs inconsistently. To address this, we used 5 random seeds to repeat the experiments, and use the resulting mean values in the final results. Here, the best variant
uses σ = 0.25, and, unlike experiments with the eyeglass frame attack, significantly outperforms adversarial training, reaching accuracy slightly above 60% even for the stronger attacks. Nevertheless, even randomized smoothing results in significant degradation of effectiveness on adversarial instances (nearly 40%, compared to clean data).
3.3 DISCUSSION
There are two possible reasons why conventional robust ML perform poorly against physical attacks: 1) adversarial models involving lp-bounded perturbations are too hard to enable effective robust learning, and 2) the conventional attack model is too much of a mismatch for realistic physical attacks. In Appendix B, we present evidence supporting the latter. Specifically, we find that conventional robust ML models exhibit much higher robustness when faced with the lp-bounded attacks they are trained to be robust to.
4 PROPOSED APPROACH: DEFENSE AGAINST OCCLUSION ATTACKS (DOA)
As we observed in Section 3, conventional models for making deep learning robust to attack can perform quite poorly when confronted with physically realizable attacks. In other words, the evidence strongly suggests that the conventional models of attacks in which attackers can make lp-bounded perturbations to input images are not particularly useful if one is concerned with the main physical threats that are likely to be faced in practice. However, given the diversity of possible physical attacks one may perpetrate, is it even possible to have a meaningful approach for ensuring robustness against a broad range of physical attacks? For example, the two attacks we considered so far couldn’t be more dissimilar: in one, we engineer eyeglass frames; in another, stickers on a stop sign. We observe that the key common element in these attacks, and many other physical attacks we may expect to encounter, is that they involve the introduction of adversarial occlusions to a part of the input. The common constraint faced in such attacks is to avoid being suspicious, which effectively limits the size of the adversarial occlusion, but not necessarily its shape or location. Next, we introduce a simple abstract model of occlusion attacks, and then discuss how such attacks can be computed and how we can make classifiers robust to them.
4.1 ABSTRACT ATTACK MODEL: RECTANGULAR OCCLUSION ATTACKS (ROA)
We propose the following simple abstract model of adversarial occlusions of input images. The attacker introduces a fixed-dimension rectangle. This rectangle can be placed by the adversary anywhere in the image, and the attacker can furthermore introduce l∞ noise inside the rectangle with an exogenously specified high bound ε (for example, ε = 255, which effectively allows addition of arbitrary adversarial noise). This model bears some similarity to l0 attacks, but the rectangle imposes a contiguity constraint, which reflects common physical limitations. The model is clearly abstract: in practice, for example, adversarial occlusions need not be rectangular or have fixed dimensions (for example, the eyeglass frame attack is clearly not rectangular), but at the same time cannot usually be arbitrarily superimposed on an image, as they are implemented in the physical environment. Nevertheless, the model reflects some of the most important aspects common to many physical attacks, such as stickers placed on an adversarially chosen portion of the object we wish to identify. We call our attack model a rectangular occlusion attack (ROA). An important feature of this attack is that it is untargeted: since our ultimate goal is to defend against physical attacks whatever their target, considering untargeted attacks obviates the need to have precise knowledge about the attacker's goals. For illustrations of the ROA attack, see Appendix C.
4.2 COMPUTING ATTACKS
The computation of ROA attacks involves 1) identifying a region to place the rectangle in the image, and 2) generating fine-grained adversarial perturbations restricted to this region. The former task can be done by an exhaustive search: consider all possible locations for the upper left-hand corner of the rectangle, compute adversarial noise inside the rectangle using PGD for each of these, and choose the worst-case attack (i.e., the attack which maximizes loss computed on the resulting image). However, this approach would be quite slow, since we need to perform PGD inside the rectangle for every possible position. Our approach, consequently, decouples these two tasks. Specifically, we first
perform an exhaustive search using a grey rectangle to find a position for it that maximizes loss, and then fix the position and apply PGD inside the rectangle.
An important limitation of the exhaustive search approach for ROA location is that it necessitates computations of the loss function for every possible location, which itself requires full forward propagation each time. Thus, the search itself is still relatively slow. To speed the process up further, we use the gradient of the input image to identify candidate locations. Specifically, we select a subset of C locations for the sticker with the highest magnitude of the gradient, and only exhaustively search among these C locations. C is exogenously specified to be small relative to the number of pixels in the image, which significantly limits the number of loss function evaluations. Full details of our algorithms for computing ROA are provided in Appendix D.
4.3 DEFENDING AGAINST ROA
Once we are able to compute the ROA attack, we apply the standard adversarial training approach for defense. We term the resulting defense, i.e., adversarial training of classifiers against our abstract adversarial occlusion attacks, Defense against Occlusion Attacks (DOA), and propose it as an alternative to conventional robust ML for defending against physical attacks. As we will see presently, this defense against ROA is quite adequate for our purposes.
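To make the defense concrete, the following is a minimal sketch of a DOA training loop in PyTorch. It assumes a hypothetical roa_attack(model, x, y, ...) helper implementing the two-stage ROA computation of Section 4.2; the optimizer, epoch count, and rectangle dimensions are illustrative rather than the exact settings used in our experiments.

```python
import torch

def doa_adversarial_training(model, loader, roa_attack, epochs=5, lr=1e-4,
                             rect_h=100, rect_w=50, device="cuda"):
    """DOA sketch: standard training, but every batch is replaced by its
    worst-case rectangular-occlusion (ROA) counterpart for the current model."""
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            # Generate ROA adversarial examples against the current model state.
            x_adv = roa_attack(model, x, y, rect_h=rect_h, rect_w=rect_w)  # hypothetical helper
            opt.zero_grad()
            loss = loss_fn(model(x_adv), y)
            loss.backward()
            opt.step()
    return model
```

The key design choice is that the inner maximization is the ROA threat model rather than an lp-bounded perturbation; everything else follows standard adversarial training.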
5 EFFECTIVENESS OF DOA AGAINST PHYSICALLY REALIZABLE ATTACKS
We now evaluate the effectiveness of DOA—that is, adversarial training using the ROA threat model we introduced—against physically realizable attacks (see Appendix G for some examples that defeat conventional methods but not DOA). Recall that we consider only digital representations of the corresponding physical attacks. Consequently, we can view our results in this section as a lower bound on robustness to actual physical attacks, which have to deal with additional practical constraints, such as being robust to multiple viewpoints. In addition to the two physical attacks we previously considered, we also evaluate DOA against the adversarial patch attack, implemented on both face recognition and traffic sign data.
5.1 DOA AGAINST ADVERSARIAL EYEGLASSES
We consider two rectangle dimensions resulting in comparable area: 100 × 50 and 70 × 70, both in pixels. Thus, the rectangles occupy approximately 10% of the 224 × 224 face images. We used {30, 50} iterations of PGD with ε = 255/2 to generate adversarial noise inside the rectangle, with learning rates α = {8, 4}, respectively. For the gradient version of ROA, we choose C = 30. DOA adversarial training is performed for 5 epochs with a learning rate of 0.0001.
Figure 4 (left) presents the results comparing the effectiveness of DOA against the eyeglass frame attack on face recognition to adversarial training and randomized smoothing (we took the most robust variants of both of these). We can see that DOA yields significantly more robust classifiers for this domain. The gradient-based heuristic does come at some cost, with performance slightly worse
than when we use exhaustive search, but this performance drop is relatively small, and the result is still far better than conventional robust ML approaches. Figure 4 (middle and right) compares the performance of DOA between two rectangle variants with different dimensions. The key observation is that as long as we use enough iterations of PGD inside the rectangle, changing its dimensions (keeping the area roughly constant) appears to have minimal impact.
5.2 DOA AGAINST THE STOP SIGN ATTACK
We now repeat the evaluation with the traffic sign data and the stop sign attack. In this case, we used 10 × 5 and 7 × 7 rectangles covering ∼5% of the 32 × 32 images. We set C = 10 for the gradient-based ROA. The implementation of DOA is otherwise identical to that in the face recognition experiments above.
We present our results using the square rectangle, which in this case was significantly more effective; the results for the 10 × 5 rectangle DOA attacks are in Appendix F. Figure 5 (left) compares the effectiveness of DOA against the stop sign attack on traffic sign data with the best variants of adversarial training and randomized smoothing. Our results here are for 30 iterations of PGD; in Appendix F, we study the impact of varying the number of PGD iterations. We can observe that DOA is again significantly more robust, with robust accuracy over 90% for the exhaustive search variant, and ∼85% for the gradient-based variant, even for stronger attacks. Moreover, DOA remains 100% effective at classifying stop signs on clean data, and exhibits ∼95% accuracy on the full traffic sign classification task.
5.3 DOA AGAINST ADVERSARIAL PATCH ATTACKS
Finally, we evaluate DOA against the adversarial patch attacks. In these attacks, an adversarial patch (e.g., sticker) is designed to be placed on an object with the goal of inducing a target prediction. We study this in both face recognition and traffic sign classification tasks. Here, we present the results for face recognition; further detailed results on both datasets are provided in Appendix F.
As we can see from Figure 5 (right), adversarial patch attacks are quite effective once the attack region (fraction of the image) is 10% or higher, with adversarial training and randomized smoothing both performing rather poorly. In contrast, DOA remains highly robust even when the adversarial patch covers 20% of the image.
6 CONCLUSION
As we have shown, conventional methods for making deep learning approaches for image classification robust to physically realizable attacks tend to be relatively ineffective. In contrast, a new threat model we proposed, rectangular occlusion attacks (ROA), coupled with adversarial training, achieves high robustness against several prominent examples of physical attacks. While we explored a number of variations of ROA attacks as a means to achieve robustness against physical attacks, numerous questions remain. For example, can we develop effective methods to certify robustness against ROA, and are the resulting approaches as effective in practice as our method based on a combination of heuristically computed attacks and adversarial training? Are there other types of occlusions that are more effective? Answers to these and related questions may prove a promising
path towards practical robustness of deep learning when deployed for downstream applications of computer vision such as autonomous driving and face recognition.
ACKNOWLEDGMENTS
This work was partially supported by the NSF (IIS-1905558, IIS-1903207), ARO (W911NF-19-10241), and NVIDIA.
A DESCRIPTION OF DATASETS AND DEEP LEARNING CLASSIFIERS
A.1 FACE RECOGNITION
The VGGFace dataset3 (Parkhi et al., 2015) is a benchmark for face recognition, containing 2622 subjects with 2.6 million images in total. We chose ten subjects: A. J. Buckley, A. R. Rahman, Aamir Khan, Aaron Staton, Aaron Tveit, Aaron Yoo, Abbie Cornish, Abel Ferrara, Abigail Breslin, and Abigail Spencer, and subselected face images pertaining only to these individuals. Since approximately half of the images cannot be downloaded, our final dataset contains 300-500 images for each subject.
We used the standard crop-and-resize method to process the data to be 224 × 224 pixels, and split the dataset into training, validation, and test according to a 7:2:1 ratio for each subject. In total, the data set has 3178 images in the training set, 922 images in the validation set, and 470 images in the test set.
We use the VGGFace convolutional neural network (Parkhi et al., 2015) model, a variant of the VGG16 model containing 5 convolutional layer blocks and 3 fully connected layers. We make use of standard transfer learning as we only classify 10 subjects, keeping the convolutional layers the same as in the VGGFace structure,4 but changing the fully connected layers to be 1024 → 1024 → 10 instead of 4096 → 4096 → 2622. Specifically, in our Pytorch implementation, we convert the images from RGB to BGR channel order and subtract the mean value [129.1863, 104.7624, 93.5940] in order to use the pretrained weights from VGG-Face on the convolutional layers. We set the batch size to be 64 and use the Pytorch built-in Adam optimizer with an initial learning rate of 10−4 and default parameters in Pytorch.5 We drop the learning rate by 0.1 every 10 epochs. Additionally, we used validation set accuracy to keep track of model performance and to choose a model in case of overfitting. After 30 epochs of training, the model obtains 98.94% accuracy on the test data.
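As an illustration of this transfer-learning setup, the following sketch replaces the original fully connected head with a 1024 → 1024 → 10 classifier on top of a pretrained convolutional backbone. The backbone object and feature dimension are assumptions (a VGG16-style stack on 224 × 224 inputs yields a 512 × 7 × 7 flattened feature); this is not the exact code used in our experiments.

```python
import torch.nn as nn

class FaceClassifier(nn.Module):
    """Sketch: pretrained conv backbone + a new small head for 10 subjects."""
    def __init__(self, backbone, n_classes=10, feat_dim=512 * 7 * 7):
        super().__init__()
        self.backbone = backbone  # assumed: pretrained VGG-Face convolutional stack
        self.head = nn.Sequential(  # replaces the original 4096 -> 4096 -> 2622 head
            nn.Linear(feat_dim, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, n_classes),
        )

    def forward(self, x):
        h = self.backbone(x)          # (B, 512, 7, 7) for 224x224 inputs
        return self.head(h.flatten(1))
```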
A.2 TRAFFIC SIGN CLASSIFICATION
To be consistent with (Eykholt et al., 2018), we select the subset of LISA which contains 47 different U.S. traffic signs (Møgelmose et al., 2012). To alleviate the problem of imbalance and extremely blurry data, we picked 16 best quality signs with 3509 training and 1148 validation data points. From the validation data, we obtain the test data that includes only 40 stop signs to evaluate performance with respect to the stop sign attack, as done by Eykholt et al. (2018). In the main body of the paper, we present results only on this test data to evaluate robustness to stop sign attacks. In the appendix below, we also include performance on the full validation set without adversarial manipulation.
All the data was processed by standard crop-and-resize to 32× 32 pixels. We use the LISA-CNN architecture defined in (Eykholt et al., 2018), and construct a convolutional neural network containing three convolutional layers and one fully connected layer. We use the Adam Optimizer with initial learning rate of 10−1 and default parameters 5, dropping the learning rate by 0.1 every 10 epochs. We set the batch size to be 128. After 30 epochs, we achieve the 98.69 % accuracy on the validation set, and 100% accuracy in identifying the stop signs in our test data.
3 http://www.robots.ox.ac.uk/˜vgg/data/vgg_face/.
4 External code that we use for transferring VGG-Face to the Pytorch framework is available at https://github.com/prlz77/vgg-face.pytorch
5 Default Pytorch Adam parameters stand for β1 = 0.9, β2 = 0.999 and ε = 10−8
B EFFECTIVENESS OF CONVENTIONAL ROBUST ML METHODS AGAINST l∞ AND l2 ATTACKS
In this appendix, we show that adversarial training and randomized smoothing degrade more gracefully when faced with the attacks that they are designed for. In particular, we consider variants of projected gradient descent (PGD) for both the l∞ and l2 attacks (Madry et al., 2018). The form of PGD for the l∞ attack is
x_{t+1} = Proj(x_t + α · sgn(∇L(x_t; θ))),
where Proj is a projection operator which clips the result to be feasible, x_t is the adversarial example in iteration t, α the learning rate, and L(·) the loss function. In the case of an l2 attack, PGD becomes
x_{t+1} = Proj( x_t + α · ∇L(x_t; θ) / ‖∇L(x_t; θ)‖_2 ),
where the projection operator normalizes the perturbation δ = x_{t+1} − x_t to have ‖δ‖_2 ≤ ε if it doesn't already (Kolter & Madry, 2019).
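For concreteness, below is a minimal sketch of a single update step for each of the two PGD variants. The batched 4D tensor shapes, pixel range, and step sizes are illustrative assumptions.

```python
import torch

def pgd_step_linf(x, x_orig, grad, alpha, eps, lo=0.0, hi=255.0):
    """One l-infinity PGD step: signed-gradient ascent, then clip into the eps-ball."""
    x_new = x + alpha * grad.sign()
    x_new = torch.max(torch.min(x_new, x_orig + eps), x_orig - eps)  # project onto l_inf ball
    return x_new.clamp(lo, hi)

def pgd_step_l2(x, x_orig, grad, alpha, eps, lo=0.0, hi=255.0):
    """One l2 PGD step: normalized-gradient ascent, then project onto the eps-ball."""
    g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    x_new = x + alpha * g
    delta = x_new - x_orig
    norms = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
    factor = torch.clamp(eps / (norms + 1e-12), max=1.0)  # shrink only if outside the ball
    return (x_orig + delta * factor).clamp(lo, hi)
```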
The experiments were done on the face recognition and traffic sign datasets, but unlike physical attacks on stop signs, we now consider adversarial perturbations to all sign images.
B.1 FACE RECOGNITION
We begin with our results on the face recognition dataset. Tables 1 and 2 present results for (curriculum) adversarial training for varying ε of the l∞ attacks, separately for training and evaluation. As we can see, curriculum adversarial training with ε = 16 is generally the most robust, and remains reasonably effective for relatively large perturbations. However, we do observe a clear tradeoff between accuracy on non-adversarial data and robustness, as one would expect.
Table 3 presents the results of using randomized smoothing on face recognition data, when facing the l2 attacks. Again, we observe a high level of robustness and, in most cases, relatively limited drop in performance, with σ = 0.5 perhaps striking the best balance.
B.2 TRAFFIC SIGN CLASSIFICATION
Tables 4 and 5 present evaluation on traffic sign data for curriculum adversarial training against the l∞ attack for varying ε. As with face recognition data, we can observe that the approaches tend to be relatively robust, and effective on non-adversarial data for adversarial training methods using ε < 32.
The results of randomized smoothing on traffic sign data are given in Table 6. Since images are smaller here than in VGGFace, lower values of ε for the l2 attacks are meaningful, and for ε ≤ 1 we
generally see robust performance on randomized smoothing, with σ = 0.5 providing a good balance between non-adversarial accuracy and robustness, just as before.
C EXAMPLES OF ROA
Figure 6 provides several examples of the ROA attack in the context of face recognition. Note that in these examples, the adversary chooses to place the occluding noise on the upper lip and eye areas of the image, and, indeed, this makes the face more challenging to recognize even for a human observer.
D DETAILED DESCRIPTION OF THE ALGORITHMS FOR COMPUTING THE RECTANGULAR OCCLUSION ATTACKS
Our basic algorithm for computing rectangular occlusion attacks (ROA) proceeds through the following two steps:
1. Iterate through possible positions for the rectangle’s upper left-hand corner point in the image. Find the position for a grey rectangle (RGB value =[127.5,127.5,127.5]) in the image that maximizes loss.
2. Generate high-ε l∞ noise inside the rectangle at the position computed in step 1.
Algorithm 1 presents the full algorithm for identifying the ROA position, which amounts to exhaustive search through the image pixel region. This algorithm has several parameters. First, we assume that images are squares with N × N pixels. Second, we introduce a stride parameter S. The purpose of this parameter is to make location computation faster by only considering every S-th pixel during the search (in other words, we skip S pixels each time). For our implementation of ROA attacks, we choose the stride parameter S = 5 for face recognition and S = 2 for traffic sign classification.
Algorithm 1 Computation of ROA position using exhaustive search.
Input: Data: Xi, yi; Test data shape: N × N; Target model parameters: θ; Stride: S
Output: ROA position: (j′, k′)
1. function ExhaustiveSearching(Model, Xi, yi, N, S)
2.   for j in range(N/S) do:
3.     for k in range(N/S) do:
4.       Generate the adversarial image Xadv_i by:
5.         placing a grey rectangle onto the image with top-left corner at (j × S, k × S);
6.       if L(Xadv_i, yi, θ) is higher than the previous loss:
7.         Update (j′, k′) = (j, k)
8.     end for
9.   end for
10.  return (j′, k′)
Algorithm 2 Computation of ROA position using gradient-based search.
Input: Data: Xi, yi; Test data shape: N × N; Target model: θ; Stride: S; Number of potential candidates: C
Output: Best sticker position: (j′, k′)
1. function GradientBasedSearch(Xi, yi, N, S, C, θ)
2.   Calculate the gradient ∇L of Loss(Xi, yi, θ) w.r.t. Xi
3.   J, K = HelperSearching(∇L, N, S, C)
4.   for j, k in J, K do:
5.     Generate the adversarial image Xadv_i by:
6.       putting the sticker on the image with top-left corner at (j × S, k × S);
7.     if Loss(Xadv_i, yi, θ) is higher than the previous loss:
8.       Update (j′, k′) = (j, k)
9.   end for
10.  return (j′, k′)
1. function HelperSearching(∇L, N, S, C)
2.   for j in range(N/S) do:
3.     for k in range(N/S) do:
4.       Calculate the sensitivity value L = Σ_{i ∈ rectangle} (∇L_i)^2 with top-left corner at (j × S, k × S);
5.       if the sensitivity value L is in the top C of previous values:
6.         Put (j, k) in J, K and discard the (j_s, k_s) with the lowest L
7.     end for
8.   end for
9.   return J, K
Despite introducing the tunable stride parameter, the search for the best location for ROA still entails a large number of loss function evaluations, which are somewhat costly (since each such evaluation means a full forward pass through the deep neural network), and these costs add up quickly. To speed things up, we consider using the magnitude of the gradient of the loss as a measure of sensitivity of particular regions to manipulation. Specifically, suppose that we compute a gradient ∇L, and let ∇Li be the gradient value for a particular pixel i in the image. Now, we can iterate over the possible ROA locations, but for each location compute the gradient of the loss at that location corresponding to the rectangular region. We do this by adding squared gradient values (∇Li)2 over pixels i in the rectangle. We use this approach to find the top C candidate locations for the rectangle. Finally, we consider each of these, computing the actual loss for each location, to find the position of ROA. The full algorithm is provided as Algorithm 2.
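A minimal PyTorch sketch of this gradient-guided location search (mirroring Algorithm 2) is given below. It assumes a single image of shape (1, C, H, W), a differentiable model, and cross-entropy loss; all names, the grey value, and defaults are illustrative rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def roa_position_gradient_search(model, x, y, rect_h, rect_w, stride=5,
                                 n_candidates=30, grey=0.5):
    """Rank rectangle positions by summed squared input gradients, then keep the
    candidate whose grey occlusion maximizes the classification loss."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad_sq = torch.autograd.grad(loss, x)[0].pow(2).sum(dim=1, keepdim=True)  # per-pixel sensitivity

    _, _, H, W = x.shape
    scores, positions = [], []
    for i in range(0, H - rect_h + 1, stride):
        for j in range(0, W - rect_w + 1, stride):
            scores.append(grad_sq[0, 0, i:i + rect_h, j:j + rect_w].sum().item())
            positions.append((i, j))
    top = sorted(range(len(scores)), key=lambda k: scores[k], reverse=True)[:n_candidates]

    best_pos, best_loss = None, -float("inf")
    with torch.no_grad():
        for k in top:  # exhaustive loss evaluation only over the C candidates
            i, j = positions[k]
            x_occ = x.detach().clone()
            x_occ[:, :, i:i + rect_h, j:j + rect_w] = grey
            l = F.cross_entropy(model(x_occ), y).item()
            if l > best_loss:
                best_loss, best_pos = l, (i, j)
    return best_pos
```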
Once we’ve found the place for the rectangle, our next step is to introduce adversarial noise inside it. For this, we use the l∞ version of the PGD attack, restricting perturbations to the rectangle. We used {7, 20, 30, 50} iterations of PGD to generate adversarial noise inside the rectangle, and with learning rate α = {32, 16, 8, 4} correspondingly.
Figure 8 offers a visual illustration of how gradient-based search compares to exhaustive search for computing ROA.
E DETAILS OF PHYSICALLY REALIZABLE ATTACKS
Physically realizable attacks that we study have a common feature: first, they specify a mask, which is typically precomputed, and subsequently introduce adversarial noise inside the mask area. Let M denote the mask matrix constraining the area of the perturbation δ; M has the same dimensions as the input image and contains 0s where no perturbation is allowed, and 1s in the area which can be perturbed. The physically realizable attacks we consider then solve an optimization problem of the following form:
arg max_δ L(f(x + Mδ; θ), y). (4)
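A generic sketch of this mask-constrained optimization is shown below; it uses Adam on the perturbation, as in the sticker attack, and the mask tensor, pixel range, step count, and learning rate are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mask_constrained_attack(model, x, y, mask, steps=100, lr=0.1):
    """Maximize the loss over a perturbation delta allowed only inside the mask
    (mask is a {0,1} tensor with the same shape as x), as in Eqn. (4)."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + mask * delta).clamp(0.0, 1.0)
        loss = -F.cross_entropy(model(x_adv), y)  # minimizing the negative loss maximizes the loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + mask * delta).clamp(0.0, 1.0).detach()
```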
Next, we describe the details of the three physical attacks we consider in the main paper.
E.1 EYEGLASS FRAME ATTACKS ON FACE RECOGNITION
Following Sharif et al. (2016), we first initialized the eyeglass frame with 5 different colors, and chose the best starting color by calculating the cross-entropy loss. For each update step, we divided the gradient value by its maximum value before multiplying by the learning rate, which is 20. Then we only kept the gradient values in the eyeglass frame area. Finally, we clipped and rounded the pixel values to keep them in the valid range.
E.2 STICKER ATTACKS ON STOP SIGNS
Following Eykholt et al. (2018), we initialized the stickers on the stop signs with random noise. For each update step, we used the Adam optimizer with 0.1 learning rate and with default parameters. Just as for other attacks, adversarial perturbations were restricted to the mask area exogenously specified; in our case, we used the same mask as Eykholt et al. (2018)—a collection of small rectangles.
E.3 ADVERSARIAL PATCH ATTACK
We used gradient ascent to maximize the log probability of the targeted class P [ytarget|x], as in the original paper (Brown et al., 2018). When implementing the adversarial patch, we used a square patch rather than the circular patch in the original paper; we don’t anticipate this choice to be practically consequential. We randomly chose the position and direction of the patch, used the learning rate of 5, and fixed the number of attack iterations to 100 for each image. We varied the attack region (mask) R ∈ {0%, 5%, 10%, 15%, 20%, 25%}. For the face recognition dataset, we used 27 images (9 classes (without targeted class) × 3 images in each class) to design the patch, and then ran the attack over 20 epochs. For the smaller traffic sign dataset, we used 15 images (15 classes (without targeted class) × 1 image in each class) to design the patch, and then ran the attack over 5 epochs. Note that when evaluating the adversarial patch, we used the validation set without the targeted class images.
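A rough sketch of such a targeted patch optimization is given below. The patch initialization, gradient normalization, and update rule are illustrative assumptions and differ in detail from the original implementation of Brown et al. (2018).

```python
import torch
import torch.nn.functional as F

def train_adversarial_patch(model, images, target_class, patch_size, epochs=5, lr=5.0):
    """Targeted square patch: pasted at a random location, it is optimized to push
    predictions toward `target_class` via gradient ascent on the target log-probability."""
    patch = torch.rand(1, images.shape[1], patch_size, patch_size, requires_grad=True)
    for _ in range(epochs):
        for x in images:                       # images: tensor of shape (N, C, H, W)
            x = x.unsqueeze(0)
            _, _, H, W = x.shape
            i = torch.randint(0, H - patch_size + 1, (1,)).item()
            j = torch.randint(0, W - patch_size + 1, (1,)).item()
            x_adv = x.clone()
            x_adv[:, :, i:i + patch_size, j:j + patch_size] = patch  # paste the patch
            log_probs = F.log_softmax(model(x_adv), dim=1)
            loss = -log_probs[0, target_class]                       # ascent on target log-prob
            grad = torch.autograd.grad(loss, patch)[0]
            with torch.no_grad():
                patch -= lr * grad / (grad.abs().max() + 1e-12)      # normalized step (assumption)
                patch.clamp_(0.0, 1.0)
    return patch.detach()
```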
F ADDITIONAL EXPERIMENTS WITH DOA
F.1 FACE RECOGNITION AND EYEGLASS FRAME ATTACK
F.2 TRAFFIC SIGN CLASSIFICATION AND THE STOP SIGN ATTACK
F.3 EVALUATION WITH THE ADVERSARIAL PATCH ATTACK
F.3.1 FACE RECOGNITION
F.3.2 TRAFFIC SIGN CLASSIFICATION
G EXAMPLES OF PHYSICALLY REALIZABLE ATTACK AGAINST ALL DEFENSE MODELS
G.1 FACE RECOGNITION
G.2 TRAFFIC SIGN CLASSIFICATION
H EFFECTIVENESS OF DOA METHODS AGAINST l∞ ATTACKS
For completeness, this section includes evaluation of DOA in the context of l∞-bounded attacks implemented using PGD, though these are outside the scope of our threat model.
Table 23 presents results of several variants of DOA in the context of PGD attacks in the context of face recognition, while Table 24 considers these in traffic sign classification. The results are quite consistent with intuition: DOA is largely unhelpful against these attacks. The reason is that DOA fundamentally assumes that the attacker only modifies a relatively small proportion (∼5%) of the scene (and the resulting image), as otherwise the physical attack would be highly suspicious. l∞ bounded attacks, on the other hand, modify all pixels.
I EFFECTIVENESS OF DOA METHODS AGAINST l0 ATTACKS
In addition to considering physical attacks, we evaluate the effectiveness of DOA against Jacobian-based saliency map attacks (JSMA) (Papernot et al., 2015) for implementing l0-constrained adversarial examples. As Figure 13 shows, in both face recognition and traffic sign classification, DOA is able to improve classification robustness compared to the original model.
[Figure 13: robustness against JSMA attacks; panels: Face Recognition and Traffic Sign Classification.]
J EFFECTIVENESS OF DOA AGAINST OTHER MASK-BASED ATTACKS
To further illustrate the ability of DOA to generalize, we evaluate its effectiveness in the context of three additional occlusion patterns: a union of triangles and circle, a single larger triangle, and a heart pattern.
As the results in Figures 14 and 15 suggest, DOA is able to generalize successfully to a variety of physical attack patterns. It is particularly noteworthy that the larger patterns (large triangle—middle of the figure, and large heart—right of the figure) are actually quite suspicious (particularly the heart pattern), as they occupy a significant fraction of the image (the heart mask, for example, accounts for 8% of the face).
[Figures 14 and 15: the Abbie Cornish face image and the Keep Right traffic sign shown under three occlusion masks (Mask1, Mask2, Mask3).] | 1. What is the focus of the review?
2. What are the strengths and weaknesses of the paper regarding its contribution and novelty?
3. How does the reviewer assess the effectiveness and scalability of the proposed approach compared to other defense strategies?
4. What are the limitations of adversarial training (AT) methods, particularly in real-world scenarios?
5. What are some alternative approaches that the author could have considered for designing a robust system? | Review | Review
First of all, the two physical attacks evaluated in this paper have similar attacking patterns, i.e., mask-based pixel attacks. So it is not surprising that DOA is more robust in these cases since DOA is trained on this attacking pattern.
Actually it has been shown that the framework of adversarial training (AT) will overfit to the attacking patterns used in training. For example, PGD-AT models are less robust to simple non-pixel transformations, like rotation, than standard models [1]. So what DOA does is just substitute the PGD module in AT to overfit to the new attacking patterns, which is of limited contribution and novelty.
Besides, AT is not really scalable compared to other simpler defense strategies like input transformation. [2] proposes a simple and effective defense based on different combinations of input transformation and its performance even surpasses some SOTA AT models with less computation.
Another advantage of these off-the-shelf defenses like input transformation is that they do not depend on the specific details of attacks, so they are more reliable when the potential attacking patterns are unknown in practice. In comparison, there is an implicit assumption in AT methods that the attacking patterns in training and test are similar. This is the reason why PGD-AT is not robust facing mask-based physical attacks or simple rotations.
So under more complicated and flexible physical attacks, a defense based on the AT framework like DOA may not be a good choice. Although AT methods are quite effective and widely studied under l_p attacks, the authors are expected to consider more factors if they really want to design a robust system in the physical world, rather than just follow or apply the most popular pipeline like AT.
Reference:
[1] Engstrom et al. A rotation and a translation suffice: Fooling cnns with simple transformations. ICML 2019
[2] Raff et al. Barrage of Random Transforms for Adversarially Robust Defense. CVPR 2019 |
ICLR | Title
Fast Yet Effective Graph Unlearning through Influence Analysis
Abstract
Recent evolving data privacy policies and regulations have led to increasing interest in the problem of removing information from a machine learning model. In this paper, we consider Graph Neural Networks (GNNs) as the target model, and study the problem of edge unlearning in GNNs, i.e., learning a new GNN model as if a specified set of edges never existed in the training graph. Despite its practical importance, the problem remains elusive due to the non-convexity nature of GNNs and the large scale of the input graph. Our main technical contribution is three-fold: 1) we cast the problem of fast edge unlearning as estimating the influence of the edges to be removed and eliminating the estimated influence from the original model in one-shot; 2) we design a computationally and memory efficient algorithm named EraEdge for edge influence estimation and unlearning; 3) under standard regularity conditions, we prove that EraEdge converges to the desired model. A comprehensive set of experiments on four prominent GNN models and three benchmark graph datasets demonstrate that EraEdge achieves significant speedup gains over retraining from scratch without sacrificing the model accuracy too much. The speedup is even more outstanding on large graphs. Furthermore, EraEdge witnesses significantly higher model accuracy than the existing GNN unlearning approaches.
1 INTRODUCTION
Recent legislation such as the General Data Protection Regulation (GDPR) (Regulation, 2018), the California Consumer Privacy Act (CCPA) (Pardau, 2018), and the Personal Information Protection and Electronic Documents Act (PIPEDA) (Parliament, 2000) requires companies to remove private user data upon request. This has prompted the discussion of “right to be forgotten” (Kwak et al., 2017), which entitles users to get more control over their data by deleting it from learned models. In case a company has already used the data collected from users to train their machine learning (ML) models, these models need to be manipulated accordingly to reflect data deletion requests.
In this paper, we consider Graph Neural Networks (GNNs) that receive frequent edge removal requests as our target ML model. For example, consider a social network graph collected from an online social network platform that witnesses frequent insertion/deletion of users (nodes) and/or change of social relations between users (edges). Some of these structural changes can be accompanied with users’ withdrawal requests of their data. In this paper, we only consider the requests of removing social relations (edges). Then the owner of the platform is obligated by the laws to remove the effect of the requested edges, so that the GNN models trained on the graph do not “remember” their corresponding social interactions.
In general, a naive solution to deleting user data from a trained ML model is to retrain the model on the training data which excludes the samples to be removed. However, retraining a model from scratch can be prohibitively expensive, especially for complex ML models and large training data. To address this issue, numerous efforts (Mahadevan & Mathioudakis, 2021; Brophy & Lowd, 2021; Cauwenberghs & Poggio, 2000; Cao & Yang, 2015) have been spent on designing efficient unlearning methods that can remove the effect of some particular data samples without model retraining. One of the main challenges is how to estimate the effects of a given training sample on model parameters (Golatkar et al., 2021), which has led to research focusing on simpler convex learning problems such as linear/logistic regression (Mahadevan & Mathioudakis, 2021), random forests (Brophy & Lowd, 2021), support vector machines (Cauwenberghs & Poggio, 2000) and k-means clustering (Ginart et al., 2019), for which a theoretical analysis was established. Although there have
been some works on unlearning in deep neural networks (Golatkar et al., 2020a;b; 2021; Guo et al., 2020), very few works (Chen et al., 2022; Chien et al., 2022) have investigated efficient unlearning in GNNs. These works fall into two categories: exact and approximate GNN unlearning. GraphEraser (Chen et al., 2022) is an exact unlearning method that retrains the GNN model on the graph that excludes the to-be-removed edges in an efficient way. It follows the basic idea of the Sharded, Isolated, Sliced, and Aggregated (SISA) method (Bourtoule et al., 2021) and splits the training graph into several disjoint shards and trains each shard model separately. Upon receiving an unlearning request, the model provider retrains only the affected shard model. Despite its efficiency, partitioning training data into disjoint shards severely damages the graph structure and thus incurs significant loss of target model accuracy (as will be shown in our empirical evaluation). On the other hand, approximate GNN unlearning returns a sanitized GNN model which is statistically indistinguishable from the retrained model. Certified graph unlearning (Chien et al., 2022) can provide a theoretical privacy guarantee for approximate GNN unlearning. However, it only considers simplified GNN architectures such as simple graph convolutions (SGC) and their generalized PageRank (GPR) extensions. We aim to design efficient approximate unlearning solutions that are model-agnostic, i.e., that make no assumption about the nature and complexity of the model.
In this paper, we design an efficient edge unlearning algorithm named EraEdge which directly modifies the parameters of the pre-trained model in one shot to remove the influence of the requested edges from the model. By adapting the idea of treating removal of data points as upweighting these data points (Koh & Liang, 2017), we compute the influence of the requested edges on the model as the change in model parameters due to upweighting these edges. However, due to the aggregation function of GNN models, it is non-trivial to estimate the change in GNN parameters, as removing an edge e(vi, vj) could affect not only the neighbors of vi and vj but also nodes multiple hops away. Thus we design a new influence derivation method that takes the aggregation effect of GNN models into consideration when estimating the change in parameters. We address several theoretical and practical challenges of influence derivation due to the non-convex nature of GNNs.
To demonstrate the efficiency and effectiveness of EraEdge, we systematically represent the empirical trade-off space between unlearning efficiency (i.e., the time performance of unlearning), model accuracy (i.e., the quality of the unlearned model), and unlearning efficacy (i.e., the extent to which the unlearned model has forgotten the removed edges). Our results show that, first, while achieving similar model accuracy and unlearning efficacy as the retrained model, EraEdge is significantly faster than retraining. For example, it speeds up the training time by 5.03× for the GCN model on the Cora dataset. The speedup is even more outstanding on larger graphs; it can be two orders of magnitude on the CS graph which contains around 160K edges. Second, EraEdge outperforms GraphEraser (Chen et al., 2022) considerably in model accuracy. For example, EraEdge witnesses an increase of 50% in model accuracy on the Cora dataset compared to GraphEraser. Furthermore, EraEdge is much faster than GraphEraser, especially on large graphs. For instance, EraEdge is 5.8× faster than GraphEraser on the CS dataset. Additionally, EraEdge outperforms certified graph unlearning (CGU) (Chien et al., 2022) significantly in terms of target model accuracy and unlearning efficacy, while it demonstrates comparable edge forgetting ability as CGU.
In summary, we made the following four main contributions: 1) We cast the problem of edge unlearning as estimating the influence of a set of edges on GNNs while taking the aggregation effects of GNN models into consideration; 2) We design EraEdge, a computationally and memory efficient algorithm that applies a one-shot update to the original model by removing the estimated influence of the removed edges from the model; 3) We address several theoretical and practical challenges of deriving edge influence, and prove that EraEdge converges to the desired model under standard regularity conditions; 4) We perform an extensive set of experiments on four prominent GNN models and three benchmark graph datasets, and demonstrate the efficiency and effectiveness of EraEdge.
2 GRAPH NEURAL NETWORK
Given a graph G(V,E) that consists of a set of nodes V and their edges E, the goal of a Graph Neural Network (GNN) model is to learn a representation vector ~h (embedding) for each node v in G that can be used in downstream tasks (e.g., node classification, link prediction).
A GNN model updates the node embeddings through aggregating its neighbors’ representations. The embedding corresponding to each node vi ∈ V at layer l is updated according to vi’s graph
neighborhood (typically 1-hop neighborhood). This update operation can be expressed as follows:
H^{(l+1)} = σ(AGGREGATE(A, H^{(l)}, θ^{(l)})), (1)
where σ is an activation function, A is the adjacency matrix of the given graph G, and θ^{(l)} denotes the trainable parameters at layer l. The initial embeddings at l = 0 are set to the input features for all the nodes, i.e., H^{(0)} = X.
Different GNN models use different AGGREGATE functions. In this paper, we consider four representative GNN models, namely Graph Convolutional Networks (GCN) (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2018), graph attention networks (GAT) (Veličković et al., 2018), and Graph Isomorphism Network (GIN) (Xu et al., 2019). These models differ on their AGGREGATE functions. We ignore the details of their AGGREGATE functions as our unlearning methods are model agnostic, and thus are independent from these functions.
After K iterations of message passing, a Readout function pools the node embeddings at the last layer and produces the final prediction results. The Readout function varies by the learning task. In this paper, we consider node classification as the learning task and the Readout function is a softmax function.
Ŷ = softmax(H^{(K)} θ^{(K)}). (2)
The final output of the target model for node v is a vector of probabilities, each corresponding to the predicted probability (or posterior) that v is assigned to a class. We consider cross entropy loss (Cox, 1958) which is the de-facto choice for classification tasks. In the following sections, we use L(θ; v,E) to denote the loss on node v for simplicity because only edges are directly manipulated.
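As an illustration of Eqns. (1) and (2), the following is a minimal dense-adjacency sketch of a two-layer GNN with a GCN-style symmetric-normalization aggregator and a softmax readout for node classification. It is meant only to make the notation concrete; it is not any of the four models evaluated in this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGNN(nn.Module):
    """Sketch of H^{(l+1)} = sigma(AGGREGATE(A, H^{(l)}, theta^{(l)})) with a softmax readout."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def aggregate(self, A, H):
        # GCN-style aggregation: symmetric normalization of A with self-loops.
        A_hat = A + torch.eye(A.shape[0], device=A.device)
        d = A_hat.sum(dim=1)
        D_inv_sqrt = torch.diag(d.pow(-0.5))
        return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H

    def forward(self, A, X):
        H1 = F.relu(self.lin1(self.aggregate(A, X)))   # Eqn. (1), layer 1
        logits = self.lin2(self.aggregate(A, H1))      # last layer
        return F.log_softmax(logits, dim=1)            # readout (Eqn. (2)); pair with NLL loss
```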
3 FORMULATION OF EDGE UNLEARNING PROBLEM
Despite that GNNs are widely applicable to many fields, there are very few studies (Chen et al., 2022; Chien et al., 2022) on graph unlearning so far. In this section, we will formulate the definition of the edge unlearning problem. Table 1 lists the notations we use in the paper. In this paper, we only consider edge unlearning. We will discuss how to extend edge unlearning to handle node unlearning in Section 7.
Let G be the set of all graphs. In this paper, we only consider undirected graphs. Let Θ be the parameter space of the GNN models. A learning algorithm AL is a function that maps an instance G(V,E) ∈ G to a parameter θ ∈ Θ. Let θOR be the parameters of AL trained on G. Any user can submit an edge unlearning request to remove specific edges from G. In practice, unlearning requests are often submitted sequentially. For efficiency, we assume these requests are processed in a batch. As the response to these requests, AL has to erase the impacts of these edges and produce an unlearned model. A straightforward approach is to retrain the model on G(V,E\EUL) from scratch and obtain the model parameters θRE. However, due to the high computational cost of retraining, an alternative solution is to apply an unlearning process AUL that takes EUL and θOR as the input, and outputs the unlearned model.
The retrained and unlearned models should be sufficiently close and ideally identical. There are two types of notions in the literature that quantify the closeness of the retrained and unlearned models: (1) both models are indistinguishable in the parameter space, i.e., the distributions of model parameters of the retrained and unlearned models are sufficiently close, where the distance between two distributions can be measured by ℓ2 distance (Wu et al., 2020) and KL divergence (Golatkar et al., 2020b); (2) both models are indistinguishable in the output space, i.e., the distributions of the learning outputs of both models are sufficiently close, where the distance between two output distributions can be measured by either test accuracy (Thudi et al., 2021) or the privacy leakage of a membership inference attack launched on model outputs (Graves et al., 2021; Baumhauer et al., 2020). We argue that indistinguishability of the parameter space is not suitable for GNNs, due to their non-convex loss functions (Tarun et al., 2021), as small changes of the training data can cause large changes in GNN parameters. Therefore, in this paper, we consider the indistinguishability of the output space between retrained and unlearned models as our unlearning notion. Formally, we define the edge unlearning problem as follows:
Definition 1 (Edge Unlearning Problem). Given a graph G(V,E), a set of edges EUL ⊂ E that are requested to be removed from G, a graph learning algorithm AL and its readout function f, an edge unlearning algorithm AUL should satisfy the following:
P(f(θ_RE) | G_UL) ≈ P(f(θ_UL) | G_UL), (3)
where G_UL = G(V, E\E_UL), and P(f(θ) | G) denotes the distribution of possible outputs of the model (with parameters θ) on G.
The readout function f varies for different learning tasks. In this paper, we consider the softmax function (Eqn. (2)) as the readout function. There are various choices to measure the similarity between the output softmax vectors. We consider Jensen–Shannon divergence (JSD) in our experiments.
4 MAIN ALGORITHM: EFFICIENT EDGE UNLEARNING
Given a graph G(V,E) as input, one often finds a proper model represented by θ that fits the data by minimizing an empirical loss. In this paper, we consider cross-entropy loss (Cox, 1958) for node classification as our loss function. The original model θOR is optimized by the following:
θ_OR = arg min_θ (1/|V|) Σ_{v∈V} L(θ; v, E). (4)
Assuming a set of edges EUL is deleted from G and the new graph after this deletion is represented by GUL = G(V,E\EUL), retraining the model will give us a new model parameter θRE on GUL:
θ_RE = arg min_θ (1/|V|) Σ_{v∈V} L(θ; v, E\E_UL). (5)
Figure 1 gives an overview of our unlearning solution named EraEdge. A major difficulty, as expected, is that obtaining θRE is prohibitively slow for complex networks and large datasets. To overcome this difficulty, the aim of EraEdge is to identify an update to θOR through an analogous one-shot unlearning update:
θ_UL = θ_OR − I_EUL, (6)
where I_EUL is the influence of EUL on the target model, i.e., the change in the model parameters caused by EUL. In general, I_EUL is a K × d matrix, where K is the number of parameters in θOR (and both θRE and θUL), and d is the dimension of each parameter (i.e., embedding). This update can be interpreted from
the optimization perspective that the model forgets EUL by “reversing” the influence of EUL from the model. The challenge is how to quantify IEUL to achieve the unlearning objective (Eqn. (3)). Next, we discuss the details of how to compute IEUL .
Existing influence functions and their inapplicability. Influence functions (Koh & Liang, 2017) enable efficient approximation of the effect of some particular training points on a model's prediction. The general idea of influence functions is the following: let θ and θ̂ be the model parameters before and after removing a data point z; the new parameters θ̂_{ε,z} after z is removed can be computed as follows:
θ̂_{ε,z} = arg min_θ (1/m) Σ_{z_i ≠ z} L(θ; z_i) + ε L(θ; z), (7)
where m is the number of data points in the original dataset, and ε is a small constant. Intuitively, the influence function computes the parameters after removal of z by upweighting z on the parameters with some small ε.
It may seem that the influence function (Eqn. (7)) can be applied to the edge unlearning setting directly by upweighting those nodes that are included in any edge in EUL. However, this is incorrect, as removing one edge e(vi, vj) from the graph can affect not only the prediction of vi and vj but also those of the neighboring nodes of vi and vj, due to the aggregation function of GNN models.
4.1 THEORETICAL CHARACTERIZATION OF EDGE INFLUENCE ON GNNS
In general, an ℓ-layer GNN aggregates the information of the ℓ-hop neighborhood of each node. Thus removing an edge e(vi, vj) will affect not only vi and vj but also all nodes in the ℓ-hop neighborhood of vi and vj. To capture such aggregation effects in the derivation of edge influence, we first define the set of nodes (denoted as Ve) that will be affected by removing an edge e(vi, vj) as: Ve = N(vi) ∪ N(vj) ∪ {vi, vj}, where N(v) is the set of nodes connected to v within ℓ hops. Then given a set of edges EUL ⊂ E to be removed, the set of nodes VEUL that will be affected by removing EUL is defined as follows:
V_EUL = ⋃_{e∈E_UL} V_e. (8)
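A small sketch of Eqn. (8) is given below: for each removed edge, we collect the ℓ-hop neighborhoods of both endpoints with a breadth-first search over an adjacency-list representation of the original graph. The adjacency-dictionary format is an assumption made for illustration.

```python
from collections import deque

def l_hop_neighbors(adj, v, l):
    """Nodes reachable from v within l hops (excluding v); adj: dict node -> set of neighbors."""
    seen, frontier, out = {v}, deque([(v, 0)]), set()
    while frontier:
        u, d = frontier.popleft()
        if d == l:
            continue
        for w in adj.get(u, ()):
            if w not in seen:
                seen.add(w)
                out.add(w)
                frontier.append((w, d + 1))
    return out

def affected_nodes(adj, removed_edges, l):
    """V_EUL from Eqn. (8): union over removed edges of the l-hop neighborhoods
    of both endpoints, plus the endpoints themselves."""
    V = set()
    for vi, vj in removed_edges:
        V |= l_hop_neighbors(adj, vi, l) | l_hop_neighbors(adj, vj, l) | {vi, vj}
    return V
```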
Next, we follow the data perturbation idea of influence functions (Koh & Liang, 2017), and compute the new parameters θ_{ε,V_EUL} after the removal of EUL as follows:
θ_{ε,V_EUL} = arg min_θ (1/|V|) Σ_{v∈V} L(θ; v, E) + ε Σ_{v∈V_EUL} L(θ; v, E\E_UL) − ε Σ_{v∈V_EUL} L(θ; v, E). (9)
Intuitively, Eqn. (9) approximates the effect of moving ε mass of perturbation onto V_EUL, with E\E_UL in place of E. Then we obtain the following theorem.
Theorem 2. Given the parameters θOR obtained by AL on a graph G, and the loss function L, assume that L is twice-differentiable and convex in θ; then the influence of a set of edges EUL is:
I_EUL = −H_{θ_OR}^{−1} ( ∇_θ Σ_{v∈V_EUL} L(θ_OR; v, E\E_UL) − ∇_θ Σ_{v∈V_EUL} L(θ_OR; v, E) ), (10)
where H_{θ_OR} := ∇²_θ (1/|V|) Σ_{v∈V} L(θ_OR; v, E) is the Hessian matrix of L with respect to θ_OR.
The proof of Theorem 2 can be found in Appendix A.1. According to Eqn. (9), removing EUL is equivalent to upweighting ε = 1/|V| mass of perturbation. Therefore, θ_UL = θ_{ε,V_EUL} when ε = 1/|V|. Finally, we have a linear approximation of θ_UL:
θ_UL ≈ θ_OR + (1/|V|) I_EUL.
Dealing with non-convexity of GNNs. Theorem 2 assumes the loss function is convex. Given the non-convex nature of GNN models, it is hard to reach the global minimum in practice. As a result, the Hessian matrix H_{θ_OR} may have negative eigenvalues. To address this issue, we adapt the damping-term-based solution (Koh & Liang, 2017), which prevents H_{θ_OR} from having negative eigenvalues by adding a damping term to the Hessian matrix, i.e., (H_{θ_OR} + λI).
4.2 TIME AND MEMORY EFFICIENT INFLUENCE ESTIMATOR
Although by Theorem 2 estimating the edge influence amounts to solving a linear system, there are several practical and theoretical challenges. First, it can well be the case that the Hessian matrix H_{θ_OR} is non-invertible. This is because our loss function is non-convex with respect to θ. As a consequence, the linear system may not even have a solution. Second, even storing a Hessian matrix in memory (either CPU or GPU) is expensive: in our experiments, we will show that Hessian matrices are huge, e.g., the Hessian matrix on the Physics dataset has size around 10^6 × 10^6, which would cost 60 GB of memory. Lastly, even under the promise that the linear system is feasible, computing the inverse of such a huge matrix is prohibitive.
Our second technical contribution thus is an algorithm that resolves all the challenges mentioned above. Claim 3. There is a computationally and memory efficient algorithm to solve the linear system of IEUL in Theorem 2.
The starting point of our algorithm is a novel perspective that solving the linear system (Eqn. (10)) can be thought of as finding a stationary point of the following quadratic function:
f(x) = (1/2) x^T A x − b^T x, (11)
with A = H_{θ_OR} and b = ∇_θ Σ_{v∈V_EUL} L(θ_OR; v, E\E_UL) − ∇_θ Σ_{v∈V_EUL} L(θ_OR; v, E). Note that even though the function f(x) is non-convex, there is rich literature establishing convergence guarantees to stationary points using gradient-descent-type algorithms; see e.g. (Bertsekas, 1999).
In this paper, we will employ the conjugate gradient (CG) method, which exhibits promising computational efficiency for minimizing quadratic functions (Pytlak, 2008). In fact, it is well known that as long as the step size satisfies the Wolfe conditions (Wolfe, 1969; 1971) and the objective function is Lipschitz and bounded from below, the sequence of iterates produced by CG asymptotically converges to a stationary point of f(x), which corresponds to a solution I_EUL that satisfies Eqn. (10). Note that these regularity conditions are satisfied as soon as the training data are bounded. Hence, we have the following convergence guarantee.
Lemma 4 (Theorem 2.1 of (Pytlak, 2008)). The CG method generates a sequence of iterates {x_t}_{t≥1} such that lim_{t→+∞} ‖∇f(x_t)‖ = 0. In addition, the per-iteration time complexity is O(|x|), where |x| denotes the dimension of x.
We note, however, that an appealing feature of Eqn. (10) is that we do not have to find a solution with exactly zero gradient. This enables us to terminate CG early by monitoring the magnitude of the gradients. Our empirical study also shows that CG can obtain a good approximation in a small number of iterations.
In addition, we propose a memory-efficient implementation of CG, which significantly reduces the memory cost. Lemma 5. The CG method can be implemented using O(|θ|) memory.
Proof. To see why the above lemma holds, recall that a key step of CG update is calculating the gradient of f(x) as
∇f(x) = H_{θ_OR} x − ( ∇_θ Σ_{v∈V_EUL} L(θ_OR; v, E\E_UL) − ∇_θ Σ_{v∈V_EUL} L(θ_OR; v, E) ).
As H_{θ_OR} ∈ R^{|θ|×|θ|}, we cannot explicitly compute H_{θ_OR}. Instead, we utilize the Hessian-vector product (Pearlmutter, 1994) to approximately calculate H_{θ_OR} x by
H_{θ_OR} x ≈ (g(θ_OR + r x) − g(θ_OR)) / r, (12)
for some very small step size r > 0, where g(θ) := ∇_θ Σ_{v∈V_EUL} L(θ; v, E\E_UL) − ∇_θ Σ_{v∈V_EUL} L(θ; v, E). Note that the memory cost of evaluating the function value of g(·) is O(|θ|). Hence, Lemma 5 follows.
Remark 6. Observe that a trivial implementation involves storing the Hessian matrix, which consumes O(|θ|^2) memory. Returning to our previous example on the Physics dataset, a trivial implementation consumes 64 GB of memory, while ours only needs 8 GB.
Proof of Claim 3. Claim 3 follows from Lemma 4 and Lemma 5.
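The following sketch puts Lemmas 4 and 5 together: it solves (H_{θ_OR} + λI) x = b with conjugate gradient using only Hessian-vector products, so memory stays O(|θ|). Unlike Eqn. (12), this sketch computes Hessian-vector products exactly via double backpropagation rather than by finite differences; the damping value, iteration cap, and tolerance are illustrative.

```python
import torch

def hvp(loss, params, vec):
    """Hessian-vector product via double backprop (exact, in place of Eqn. (12))."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad((flat_grad * vec).sum(), params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv])

def solve_influence(full_loss, params, b, damping=0.01, max_iter=100, tol=1e-5):
    """Conjugate gradient for (H + damping*I) x = b; terminates early on a small residual."""
    x = torch.zeros_like(b)
    r = b.clone()            # residual of the initial guess x = 0
    p = r.clone()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = hvp(full_loss, params, p) + damping * p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x  # approximates (H + damping*I)^{-1} b
```

With b set to the negated gradient difference from Theorem 2 (flattened over all parameters), the returned vector approximates a flattened I_EUL, and the one-shot unlearned parameters are obtained by adding x/|V| back into the flattened θ_OR.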
5 EXPERIMENTS
In this section, we empirically verify the efficiency and effectiveness of our unlearning method.
5.1 EXPERIMENTAL SETUP
All the experiments are executed on a GPU server with NVIDIA A100 (40G). All the algorithms are implemented in Python with PyTorch. We set the damping term λ = 0.01 for all experiments. The link to the code and datasets will be available in the camera-ready version.
Datasets. We use three well-known datasets, namely Cora (Sen et al., 2008), Citeseer (Yang et al., 2016), and CS (Shchur et al., 2018), that are popularly used for performance evaluation of GNNs (Shchur et al., 2018; Zhang et al., 2019). The statistical information of these datasets can be found in Appendix A.2.
GNN models. We consider four representative GNN models, namely GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), and GIN (Xu et al., 2019). We configure the GNNs with one hidden layer and a softmax output layer. All GNN models are trained for 1,000 epochs with an early-stopping condition when the validation loss is not decreasing for 20 epochs. We randomly split each graph into a training set (60%), a validation set (20%), and a test set (20%). As we mainly consider the impact of structure change on GNN models, we randomly initialize the values of node features such that they follow the Gaussian distribution to eliminate the possible dominant impact of node features on model performance. More details of the model setup can be found in Appendix A.2. We also measure the model performance with original node features. The results can be found in Appendix A.5.
Picking edges for removal. We randomly pick k ∈ {100, 200, 400, 600, 800, 1,000} edges from the Cora and CiteSeer datasets, and k ∈ {1,000, 2,000, 4,000, 6,000, 8,000, 10,000} edges from the CS dataset for removal. For each setting, we randomly sample ten batches of edges, with each batch containing k edges. We report the average model performance (model accuracy, unlearning efficacy, etc.) over the ten batches.
Metrics. We evaluate the performance of EraEdge in terms of efficiency, efficacy, and model accuracy: (1) Unlearning efficiency: we measure the running time of EraEdge and retraining time for a given set of edges; (2) Target model accuracy: we measure accuracy of node classification, i.e., the percentage of nodes that are correctly classified by the model, as the accuracy of the target model. Higher accuracy indicates better accuracy retained by the unlearned model; (3) Unlearning efficacy: we measure the distance between the output space of both retrained and unlearned models as the Jensen–Shannon divergence (JSD) between the posterior distributions output by these two models. Smaller JSD indicates a higher similarity between the two models in terms of their outputs.
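For reference, a small sketch of the efficacy metric (3): the mean Jensen–Shannon divergence between the row-wise posterior matrices produced by the retrained and unlearned models on the same nodes. Shapes and clamping are illustrative assumptions.

```python
import torch

def jensen_shannon_divergence(p, q, eps=1e-12):
    """Mean JSD between two row-wise probability matrices (rows = nodes, columns = class posteriors)."""
    p = p.clamp_min(eps)
    q = q.clamp_min(eps)
    m = 0.5 * (p + q)
    kl_pm = (p * (p / m).log()).sum(dim=1)   # KL(p || m) per node
    kl_qm = (q * (q / m).log()).sum(dim=1)   # KL(q || m) per node
    return (0.5 * (kl_pm + kl_qm)).mean()
```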
Baselines. We consider baselines of both exact and approximate GNN unlearning for comparison with EraEdge. For exact GNN unlearning, we consider GraphEraser (Chen et al., 2022) as the baseline. GraphEraser has two partitioning strategies, denoted as balanced LPA (BLPA) and balanced embedding k-means (BEKM); we consider both BLPA and BEKM as baseline methods. We use the same setting of the number of shards as in (Chen et al., 2022) for both BLPA and BEKM. For approximate GNN unlearning, we consider (Chien et al., 2022) as the baseline.
5.2 PERFORMANCE OF ERAEDGE
We evaluate the performance of EraEdge on four representative GNN models and three graph datasets, and compare the performance of the unlearned model with both the retrained model and two baselines in terms of model accuracy, unlearning efficiency, and unlearning efficacy.
Model accuracy. We report the results of GNN model accuracy in Table 2 (Accuracy column) for the GCN+Cora and GraphSAGE+CS settings. The results for other settings can be found in Appendix A.3. We have the following observations. First, the model accuracy obtained by EraEdge stays very close to that of the retrained model, regardless of the number of removed edges. The difference in model accuracy between the retrained and unlearned models remains negligible (in the range of [0.48%, 0.52%] and [0.01%, 0.2%] for the two settings, respectively). Second, EraEdge witnesses significantly higher model accuracy compared to the two baseline approaches, especially for the GCN+Cora setting. For example, both BEKM and BLPA can only deliver a model accuracy of around 48% when removing 200 edges under the GCN+Cora setting. This shows that unlearning through graph partitioning can bring significant loss of target model accuracy. Meanwhile, EraEdge demonstrates that the model accuracy can be as high as ∼79% (a 65% improvement).
Unlearning efficiency. We report the time performance results of EraEdge and retraining in Table 2 (Running time column) for the GCN+Cora and GraphSAGE+CS settings. The results of other settings can be found in Appendix A.3. We measure the running time of the two baselines as the average training time per shard, as all shards are trained in parallel. The most important observation is that EraEdge is significantly faster than retraining. For example, it speeds up the training time by 5× under the GCN+Cora setting when removing 1,000 edges, and 77× under the GraphSAGE+CS setting when removing 2,000 edges. Furthermore, EraEdge is much faster than the two baselines, especially when training large graphs. For example, EraEdge is 5.8× faster than BLPA and 3.5× faster than BEKM under the GraphSAGE+CS setting when 2,000 edges were removed.
Unlearning efficacy. Figure 2 plots the results of unlearning efficacy, which is measured as the JSD between the posterior probabilities output by the retrained and unlearned models. We observe that the JSD remains insignificant (at most 0.02) in all the settings. Furthermore, the JSD stays relatively stable when the number of removed edges increases. This demonstrates the efficacy of EraEdge - it remains close to the retrained model even when a large number of edges is removed.
Main takeaway. While demonstrating similar accuracy as retraining, EraEdge is significantly faster than retraining, and the speedup gain becomes more outstanding when more edges are removed. Furthermore, EraEdge outperforms the baseline approaches considerably in both model accuracy and time performance.
[Table 3: MIA AUC for the removed edges EUL on Cora against the original, retrained, and unlearned models (see Section 5.3); each row gives the number of removed edges followed by the AUC values.]
100 0.5913 0.5446 0.5297 0.6179 0.5615 0.5523
200 0.6014 0.5486 0.5471 0.5946 0.5659 0.5498
400 0.5978 0.5383 0.5378 0.5934 0.5400 0.5368
600 0.5993 0.5360 0.5383 0.6055 0.5471 0.5475
5.3 TESTING OF EDGE FORGETTING THROUGH MEMBERSHIP INFERENCE ATTACKS
To empirically evaluate the extent to which the unlearned model has forgotten the removed edges, we launch a black-box edge membership inference attack (MIA) (Wu et al., 2022)1 that predicts whether particular edges exist in the training graph. We measure the attack performance as AUC of MIA. Intuitively, an AUC that is close to 50% indicates that MIA’s belief of edge existence is close to random guess.
Table 3 reports the attack performance of MIA’s inference of the removed edges EUL against both the original model and the retrained/unlearned models on Cora dataset. First, MIA is effective to predict the existence of EUL in the original graph, as the AUC of MIA against the original model is much higher than 0.5. Second, the ability of MIA inferring EUL from either the retrained or the unlearned model degrades, as the AUC of MIA on both retrained and unlearned models is noticeably reduced. Indeed, the AUC of MIA for both retrained and unlearned models remain close to each other. This demonstrates that the extent to which EraEdge forgets EUL is similar to that of the retrained model.
5.4 COMPARISON WITH CERTIFIED GRAPH UNLEARNING
In this part of the experiments, we compare the performance of EraEdge with certified graph unlearning (CGU) (Chien et al., 2022). The key idea of the certified unlearning method is to add noise drawn from a Gaussian distribution to the loss function. We use µ = 0 and σ = 1 as the mean and standard deviation of the Gaussian distribution. We compare CGU and EraEdge in terms of: (1) target model accuracy, (2) unlearning efficacy (measured as the JSD between the probability outputs of the retrained and unlearned models), and (3) privacy vulnerability of the removed edges against the membership inference attack.
1 We use the implementation of LinkTeller available at: https://github.com/AI-secure/LinkTeller.
[Table 4: MIA AUC for the removed edges on Cora, comparing edge forgetting by CGU and EraEdge (see Section 5.4); each row gives the number of removed edges followed by the AUC values.]
100 0.5913 0.5446 0.5329 0.5297
200 0.6014 0.5486 0.5485 0.5471
400 0.5978 0.5383 0.5343 0.5378
600 0.5993 0.5360 0.5434 0.5383
Figure 3 (a) reports the target model accuracy by CGU and EraEdge. As we can see, while EraEdge enjoys similar target model accuracy as the retrained model, CGU suffers from significant loss of model accuracy due to added noise, where the model accuracy is 50% worse than that of both EraEdge and retraining.
Figure 3 (b) reports the unlearning efficacy of CGU and EraEdge. The results demonstrate that the model output by CGU is much farther away from that of the retrained model than EraEdge. This is consistent with the low accuracy results in Figure 3 (a).
Table 4 shows the ability of both CGU and EraEdge to forget the removed edges EUL, where the edge forgetting ability is measured as the accuracy (AUC) of the membership inference attack that predicts the presence of EUL in the training graph. We use the same membership inference attack (Wu et al., 2022) as in Section 5.3. The reported results are calculated as the average AUC of ten MIA trials. We observe that CGU and EraEdge have comparable edge forgetting ability, as the MIA performance against both models is close. This demonstrates empirically that EraEdge provides similar privacy risks as CGU. As it has been shown above that the target model accuracy of EraEdge outperforms that of CGU significantly, we believe that EraEdge better addresses the trade-off between unlearning efficacy, privacy, and model accuracy.
6 RELATED WORK
Machine unlearning. Machine unlearning aims to remove some specific information from a pretrained ML model. Several attempts have been made to make unlearning more efficient than retraining from scratch. An earlier study converts ML algorithms to statistical query (SQ) learning, so that
unlearning processes only need to retrain the summation of SQ learning (Cao & Yang, 2015). The concept of SISA (sharded, isolated, sliced, and aggregated) approach is proposed recently (Bourtoule et al., 2021) where a set of constituent models, trained on disjoint data shards, are aggregated to form an ensemble model. Given an unlearning request, only the affected constituent model is retrained. Alternative machine unlearning solutions directly modify the model’s parameters to unlearn in a small number of updates (Guo et al., 2020; Neel et al., 2021; Sekhari et al., 2021). Recent studies have focused on various convex ML models including random forest (Brophy & Lowd, 2021; Schelter et al., 2021), k-means clustering (Ginart et al., 2019), and Bayesian inference models (Fu et al., 2021).
Machine unlearning in deep neural networks. Early work on deep machine unlearning focuses on removing information from the network weights by imposing conditions on SGD-based optimization during training (Golatkar et al., 2020a). Subsequent work (Golatkar et al., 2020b) estimates the network weights of the unlearned model. However, these methods suffer from high computational costs and constraints on the training process (Tarun et al., 2021). The amnesiac unlearning approach (Graves et al., 2021) focuses on convolutional neural networks. It cancels parameter updates from only the batches containing the removed data. However, it assumes that the data to be removed is known before the training of the original model, which does not hold in our setting where edge removal requests are unknown and unpredictable. There has also been recent empirical and theoretical work on deep network unlearning in the application domain of computer vision (Du et al., 2019; Nguyen et al., 2020). GraphEraser (Chen et al., 2022) is one of the few works that consider unlearning in GNNs. It follows the SISA approach (Bourtoule et al., 2021) and splits the graph into disjoint partitions (shards). Upon receiving an unlearning request, only the models on the affected shards are retrained. However, as splitting the training graph into disjoint partitions damages the original graph structure, GraphEraser can downgrade the accuracy of the unlearned model significantly, especially when a large number of edges is to be removed. This has been demonstrated in our experiments.
Certified machine unlearning. Certified removal (Guo et al., 2020) defines approximate unlearning with a privacy guarantee (indistinguishability of unlearned models from retrained models), where indistinguishability is defined in a manner similar to differential privacy (Dwork et al., 2006). Certified removal can be realized by adding noise sampled from either a Gaussian or a Laplace distribution to the weights (Golatkar et al., 2020a; Wu et al., 2020; Neel et al., 2021; Golatkar et al., 2021; Sekhari et al., 2021), or by adding a perturbation to the loss function (Guo et al., 2020). (Chien et al., 2022) provides the first certified GNN unlearning solution. It only considers simple graph convolutions (SGC) and their generalized PageRank (GPR) extensions. To achieve a theoretical guarantee of certified removal, it adds noise to the loss function. However, as shown in our empirical evaluation (Section 5), the certified unlearning leads to a significant loss of target model accuracy due to the added noise.
Explanations of deep ML models by influence functions. One of the challenges of deep ML models is their non-transparency, which hinders understanding of the prediction results. Recent works (Koh & Liang, 2017) adapt the concept of the influence function, a classic technique from robust statistics, to formalize the impact of a training point on a prediction. Broadly speaking, the influence function attempts to estimate the change in the model’s predictions if a particular training point is removed. Very recently, the concept of the influence function has been extended to GNNs. For instance, influence functions have been designed for GNNs to measure feature-label influence and label influence (Wang et al., 2019). Node-pair influence, i.e., the change in the prediction for node u if the features of the other node v are reweighted, has also been studied (Wu et al., 2022). Unlike these works, we estimate edge influence, i.e., the effect of removing particular edges on GNN models.
7 CONCLUSION
In this work, we study the problem of edge unlearning that aims to remove a set of target edges from GNNs. We design an approximate unlearning algorithm named EraEdge which enables fast yet effective edge unlearning in GNNs. An extensive set of experiments on four representative GNN models and three benchmark graph datasets demonstrates that EraEdge can achieve significant speedup gains over retraining without sacrificing the model accuracy too much.
There are several research directions for the future work. First, while EraEdge only considers edge unlearning, it can be easily extended to handle node unlearning, as removing a node v from a graph
is equivalent to removing all the edges that connect with v in the graph. We will investigate the feasibility and performance of node unlearning through EraEdge, and compare the performance with the existing node unlearning methods (Chien et al., 2022). Second, an important metric of unlearning performance is unlearning capacity, i.e., the maximum number of edges that can be deleted while still ensuring good model accuracy. We will investigate how EraEdge can be tuned to meet the capacity requirement. Third, we will extend the study to a relevant topic, continual learning (CL), which studies how to learn from an infinite stream of data, so that the acquired knowledge can be used for future learning (Chen & Liu, 2018). An interesting question is how to support both continual learning (Chen & Liu, 2018) and private unlearning (CLPU) (Liu et al., 2022), i.e., the model learns and remembers permanently the data samples at large, and forgets specific samples completely and privately. We will explore how to extend EraEdge to support CLPU.
A APPENDIX
A.1 PROOF OF THEOREM 2
Proof. For simplicity, we first define
R(\theta, V, E) = \sum_{v \in V} L(\theta; v, E). (13)
Then, we formulate a GNN learning process as
\theta_{OR} = \arg\min_{\theta} \frac{1}{|V|} R(\theta, V, E). (14)
Since removing edges can be considered as perturbing the input, we introduce Eqn (9),
\theta_{\epsilon} = \arg\min_{\theta} \frac{1}{|V|} \sum_{v \in V} L(\theta; v, E) + \epsilon \sum_{v \in V_{E_{UL}}} L(\theta; v, E \setminus E_{UL}) - \epsilon \sum_{v \in V_{E_{UL}}} L(\theta; v, E)
= \arg\min_{\theta} \frac{1}{|V|} R(\theta, V, E) + \epsilon R(\theta, V_{E_{UL}}, E \setminus E_{UL}) - \epsilon R(\theta, V_{E_{UL}}, E). (15)
We note that a necessary condition is that the gradient of Eqn (15) at \theta_{\epsilon} is zero. Then, we have
0 = \frac{1}{|V|} \nabla_{\theta} R(\theta_{\epsilon}, V, E) + \epsilon \nabla_{\theta} R(\theta_{\epsilon}, V_{E_{UL}}, E \setminus E_{UL}) - \epsilon \nabla_{\theta} R(\theta_{\epsilon}, V_{E_{UL}}, E). (16)
Next, we apply a Taylor expansion at \theta_{OR} and get
0 \approx \frac{1}{|V|} \nabla_{\theta} R(\theta_{OR}, V, E) + \epsilon \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E \setminus E_{UL}) - \epsilon \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E)
+ \Big[ \frac{1}{|V|} \nabla^2_{\theta} R(\theta_{OR}, V, E) + \epsilon \nabla^2_{\theta} R(\theta_{OR}, V_{E_{UL}}, E \setminus E_{UL}) - \epsilon \nabla^2_{\theta} R(\theta_{OR}, V_{E_{UL}}, E) \Big] (\theta_{\epsilon} - \theta_{OR}), (17)
where we have dropped the o(\theta_{\epsilon} - \theta_{OR}) terms for the approximation. Eqn (17) is then a linear system in \theta_{\epsilon} - \theta_{OR}, from which the influence of E_{UL} is derived. Since \theta_{OR} is the minimizer of Eqn (14), we have \frac{1}{|V|} \nabla_{\theta} R(\theta_{OR}, V, E) = 0. As \epsilon is a small value, we drop the two o(\epsilon) terms and obtain
\frac{1}{|V|} \nabla^2_{\theta} R(\theta_{OR}, V, E) (\theta_{\epsilon} - \theta_{OR}) + \epsilon \Big( \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E \setminus E_{UL}) - \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E) \Big) \approx 0. (18)
Suppose Eqn (14) is convex; then
\theta_{\epsilon} - \theta_{OR} \approx -\epsilon \Big( \frac{1}{|V|} \nabla^2_{\theta} R(\theta_{OR}, V, E) \Big)^{-1} \Big( \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E \setminus E_{UL}) - \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E) \Big). (19)
Denote
I_{E_{UL}} := \frac{d(\theta_{\epsilon} - \theta_{OR})}{d\epsilon} \Big|_{\epsilon = 0} = -H_{\theta_{OR}}^{-1} \Big( \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E \setminus E_{UL}) - \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E) \Big), (20)
where H_{\theta_{OR}} := \nabla^2_{\theta} \frac{1}{|V|} \sum_{v \in V} L(\theta_{OR}; v, E).
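For completeness, the right-hand side of Eqn (20) can be assembled directly with automatic differentiation; a minimal sketch is below. Here `affected_loss_fn` is an illustrative callable (not part of our released code) that evaluates the training loss summed over the affected nodes V_EUL under a given edge set, and the inverse-Hessian product is left to the conjugate-gradient solver of Section 4.2.

```python
import torch

def flat_grad(loss, params):
    """Flatten d(loss)/d(params) into a single vector."""
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def influence_rhs(model, params, affected_loss_fn, E_full, E_without_removed):
    """Gradient difference over the affected nodes V_EUL (the bracket in Eqn (20)).
    I_EUL is then -H^{-1} times this vector, computed with the CG solver of Section 4.2."""
    g_after = flat_grad(affected_loss_fn(model, E_without_removed), params)
    g_before = flat_grad(affected_loss_fn(model, E_full), params)
    return g_after - g_before
```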
A.2 ADDITIONAL DETAILS OF EXPERIMENTAL SETUP
Description of datasets. Table 5 summarizes the statistical information of the three graph datasets (Cora, Citeseer, and CS) we used in the experiments. Cora and Citeseer datasets are citation graphs, while CS dataset is a co-author graph.
Additional details of model setup. To ensure a fair comparison between the retrained and unlearned models, we use the same model size (i.e., the same number of layers and number of neurons) for both. All GNN models are trained with a learning rate of 0.001. We train the models for 1,000 epochs, with early stopping when the validation loss does not decrease for 20 epochs.
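A minimal sketch of the training loop with this early-stopping rule is shown below; `model` and `data` are illustrative handles following the common PyTorch Geometric convention (node features `data.x`, edges `data.edge_index`, boolean masks), not the exact interfaces of our code.

```python
import copy
import torch

def train(model, optimizer, loss_fn, data, max_epochs=1000, patience=20):
    best_val, best_state, bad_epochs = float("inf"), None, 0
    for epoch in range(max_epochs):
        model.train()
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = loss_fn(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = loss_fn(model(data.x, data.edge_index)[data.val_mask],
                               data.y[data.val_mask]).item()
        if val_loss < best_val:
            best_val, best_state, bad_epochs = val_loss, copy.deepcopy(model.state_dict()), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # validation loss has not decreased for 20 epochs
                break
    model.load_state_dict(best_state)
    return model
```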
A.3 ADDITIONAL PERFORMANCE RESULTS
Model efficiency. Figure 4 presents the model efficiency results on the three datasets. We observe that EraEdge is significantly faster than retraining. For example, EraEdge outperforms retraining by 9.95×, 5.41×, 69.36×, and 3.12× on the CS dataset for the four GNN models, respectively (Figure 4 (c), (f), (i), and (l)). Model accuracy. Figure 5 presents the results of model accuracy for all settings. First, the accuracy of the target model by EraEdge is very close to that of the retrained model. In particular, the average differences in model accuracy between retrained and unlearned models are in the ranges of [0.11%, 0.68%], [0.02%, 0.74%], [0.06%, 0.65%], and [0.07%, 1.00%] for GCN, GAT, GraphSAGE, and GIN on Cora, [0.01%, 0.71%], [0.05%, 0.44%], [0.05%, 0.65%], and [0.02%, 1.25%] on CiteSeer, and [0.02%, 0.22%], [0.01%, 0.20%], [0.05%, 0.23%], and [0.01%, 0.22%] on CS, respectively. Furthermore, the model accuracy of the unlearned model stays close to that of the retrained model, regardless of the number of removed edges. This demonstrates that EraEdge can handle the removal of a large number of edges.
A.4 SEQUENTIAL UNLEARNING (NEW)
So far we have only considered deleting one batch of edges. In practice, there can be multiple batch deletion requests that forget edges in a sequential fashion. Next, we focus on the scenario where multiple edge batches are removed sequentially. Specifically, we divide the to-be-removed EUL into k > 1 disjoint batches {B_i}_{i=1}^{k}, with each batch consisting of the same number of edges. For each batch Bi (1 ≤ i ≤ k − 1), we consider the target model obtained from retraining/unlearning of the previous batch Bi−1 as the original model θOR, and update θOR by removing Bi (either by retraining or by unlearning). We evaluate the target model accuracy under sequential unlearning and compare it with that under one-batch unlearning.
We consider k = 4, and report in Table 6 the target model accuracy for deleting EUL in one batch and for deleting EUL in k = 4 sequential batches. We also report the target model accuracy of the retrained and unlearned models at each batch. We observe that, first, the accuracy of the unlearned model remains close to that of the retrained model at each batch during sequential removals. Second, the performance of the unlearned model after removing k batches stays close to that of the model after single-batch unlearning. These results demonstrate that EraEdge can handle sequential deletion of multiple batches of edges.
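The sequential protocol above can be sketched as the following loop, where `unlearn_batch` is an illustrative stand-in for one EraEdge update (or one retraining pass), not a function from our released code.

```python
# Sequential unlearning: split E_UL into k equal batches and unlearn them one by one,
# treating the model produced for batch B_{i-1} as the "original" model for batch B_i.
import random

def sequential_unlearn(model, edges_to_remove, k, unlearn_batch):
    edges = list(edges_to_remove)
    random.shuffle(edges)
    batch_size = len(edges) // k
    batches = [edges[i * batch_size:(i + 1) * batch_size] for i in range(k)]
    for batch in batches:
        model = unlearn_batch(model, batch)  # one-shot EraEdge update (or retraining)
    return model
```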
A.5 UNLEARNING WITH NODE FEATURES (NEW)
In Section 5 we mainly considered node features that are randomly initialized, in order to eliminate the possible dominant impact of node features on model performance. Next, we evaluate the performance of EraEdge when the original node features are used.
(Figure 4: unlearning time in seconds of Retrain vs. EraEdge as a function of the number of unlearned edges, for GCN, GAT, GraphSAGE, and GIN on the Cora, CiteSeer, and CS datasets.)
(Figure 5: model accuracy of Retrain vs. EraEdge as a function of the number of unlearned edges, for GCN, GAT, GraphSAGE, and GIN on the Cora, CiteSeer, and CS datasets.)
Figure 6 reports the target model accuracy of the unlearned model trained with or without the original node features. We have the following main observations. First, the target model accuracy improves significantly when the original node features are used. This shows that the node features have a dominant impact on target model performance in this setting. However, the target model accuracy of the unlearned model still stays close to that of the retrained model. In other words, EraEdge can still make GNNs forget the edges effectively even when node features have a dominant impact, relative to the graph structure, on model performance. | 1. What is the focus and contribution of the paper on machine unlearning in graph neural networks?
2. What are the strengths of the proposed approach, particularly in terms of efficiency and effectiveness?
3. Do you have any concerns or questions regarding the paper's methodology or its claims?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies machine unlearning on graph neural networks (GNNs) by analyzing influence functions. The authors identify that simply applying the influence function on GNNs for edge removal is problematic due to node dependency. As such, the authors propose to estimate the influence function by upweighting the set of all affected nodes. Then the influence is obtained as the inverse of the Hessian matrix multiplied by the gradient vector, and conjugate gradient is applied to reduce the computational cost. Experimental results demonstrate the effectiveness of the proposed EraEdge in terms of the indistinguishability between model parameters and the efficiency of the proposed EraEdge in terms of running time.
Strengths And Weaknesses
Strengths:
Unlearning on GNNs is less studied, and there are no existing works on influence function-based GNN unlearning.
The paper is overall easy to follow.
The experimental results demonstrate that EraEdge can efficiently unlearn a set of edges via the indistinguishability between the retrained model parameters and the unlearned model parameters.
Concerns/Questions:
Many existing influence function-based approximate unlearning techniques (see references below) add Gaussian noises to preserve data privacy. I am wondering why such noise is not needed in EraEdge for data privacy? [1] https://arxiv.org/abs/1911.04933 [2] https://arxiv.org/abs/1911.03030 [3] https://arxiv.org/abs/2106.04378 [4] https://arxiv.org/abs/2006.14755 [5] https://arxiv.org/abs/2007.02923 [6] https://arxiv.org/abs/2012.13431 [7] https://arxiv.org/abs/2110.11891 [8] https://arxiv.org/abs/2103.03279
Existing works often solve equations like Eq. (10) with Hessian-vector products (HVP). What is the key limitation of using HVP to solve Eq. (10)? If HVP can be used as well, what is the benefit of conjugate gradient over HVP?
I doubt that indistinguishability in the GNN output necessarily means the data is removed. A similar concern also appears in (Guo et al. 2020) even for linear and convex classifiers like logistic regression, not to mention the nonconvexity of GNNs. It is better to study the privacy of this data removal mechanism like how it is studied in differential privacy (e.g., through membership inference attacks).
How would EraEdge perform when we sequentially delete multiple batches of edges? In many real-world scenarios, it is uncommon that the model will be unlearned only once.
Clarity, Quality, Novelty And Reproducibility
The paper is overall novel and easy to follow. |
ICLR | Title
Fast Yet Effective Graph Unlearning through Influence Analysis
Abstract
Recent evolving data privacy policies and regulations have led to increasing interest in the problem of removing information from a machine learning model. In this paper, we consider Graph Neural Networks (GNNs) as the target model, and study the problem of edge unlearning in GNNs, i.e., learning a new GNN model as if a specified set of edges never existed in the training graph. Despite its practical importance, the problem remains elusive due to the non-convexity nature of GNNs and the large scale of the input graph. Our main technical contribution is three-fold: 1) we cast the problem of fast edge unlearning as estimating the influence of the edges to be removed and eliminating the estimated influence from the original model in one-shot; 2) we design a computationally and memory efficient algorithm named EraEdge for edge influence estimation and unlearning; 3) under standard regularity conditions, we prove that EraEdge converges to the desired model. A comprehensive set of experiments on four prominent GNN models and three benchmark graph datasets demonstrate that EraEdge achieves significant speedup gains over retraining from scratch without sacrificing the model accuracy too much. The speedup is even more outstanding on large graphs. Furthermore, EraEdge witnesses significantly higher model accuracy than the existing GNN unlearning approaches.
1 INTRODUCTION
Recent legislation such as the General Data Protection Regulation (GDPR) (Regulation, 2018), the California Consumer Privacy Act (CCPA) (Pardau, 2018), and the Personal Information Protection and Electronic Documents Act (PIPEDA) (Parliament, 2000) requires companies to remove private user data upon request. This has prompted the discussion of “right to be forgotten” (Kwak et al., 2017), which entitles users to get more control over their data by deleting it from learned models. In case a company has already used the data collected from users to train their machine learning (ML) models, these models need to be manipulated accordingly to reflect data deletion requests.
In this paper, we consider Graph Neural Networks (GNNs) that receive frequent edge removal requests as our target ML model. For example, consider a social network graph collected from an online social network platform that witnesses frequent insertion/deletion of users (nodes) and/or change of social relations between users (edges). Some of these structural changes can be accompanied with users’ withdrawal requests of their data. In this paper, we only consider the requests of removing social relations (edges). Then the owner of the platform is obligated by the laws to remove the effect of the requested edges, so that the GNN models trained on the graph do not “remember” their corresponding social interactions.
In general, a naive solution to deleting user data from a trained ML model is to retrain the model on the training data that excludes the samples to be removed. However, retraining a model from scratch can be prohibitively expensive, especially for complex ML models and large training data. To address this issue, numerous efforts (Mahadevan & Mathioudakis, 2021; Brophy & Lowd, 2021; Cauwenberghs & Poggio, 2000; Cao & Yang, 2015) have been devoted to designing efficient unlearning methods that can remove the effect of particular data samples without retraining the model. One of the main challenges is how to estimate the effect of a given training sample on the model parameters (Golatkar et al., 2021), which has led to research focusing on simpler convex learning problems such as linear/logistic regression (Mahadevan & Mathioudakis, 2021), random forests (Brophy & Lowd, 2021), support vector machines (Cauwenberghs & Poggio, 2000), and k-means clustering (Ginart et al., 2019), for which a theoretical analysis has been established. Although there have
been some works on unlearning in deep neural networks (Golatkar et al., 2020a;b; 2021; Guo et al., 2020), very few works (Chen et al., 2022; Chien et al., 2022) have investigated efficient unlearning in GNNs. These works fall into two categories: exact and approximate GNN unlearning. GraphEraser (Chen et al., 2022) is an exact unlearning method that retrains the GNN model on the graph that excludes the to-be-removed edges in an efficient way. It follows the basic idea of the Sharded, Isolated, Sliced, and Aggregated (SISA) method (Bourtoule et al., 2021) and splits the training graph into several disjoint shards, training each shard model separately. Upon receiving an unlearning request, the model provider retrains only the affected shard model. Despite its efficiency, partitioning the training data into disjoint shards severely damages the graph structure and thus incurs a significant loss of target model accuracy (as will be shown in our empirical evaluation). On the other hand, approximate GNN unlearning returns a sanitized GNN model which is statistically indistinguishable from the retrained model. Certified graph unlearning (Chien et al., 2022) can provide a theoretical privacy guarantee for approximate GNN unlearning. However, it only considers simplified GNN architectures such as simple graph convolutions (SGC) and their generalized PageRank (GPR) extensions. We aim to design efficient approximate unlearning solutions that are model-agnostic, i.e., without making any assumption about the nature and complexity of the model.
In this paper, we design an efficient edge unlearning algorithm named EraEdge which directly modifies the parameters of the pre-trained model in one shot to remove the influence of the requested edges from the model. By adapting the idea of treating the removal of data points as upweighting these data points (Koh & Liang, 2017), we compute the influence of the requested edges on the model as the change in model parameters due to upweighting these edges. However, due to the aggregation function of GNN models, it is non-trivial to estimate the change in GNN parameters, as removing an edge e(vi, vj) can affect not only the neighbors of vi and vj but also nodes multiple hops away. Thus we design a new influence derivation method that takes the aggregation effect of GNN models into consideration when estimating the change in parameters. We address several theoretical and practical challenges of influence derivation due to the non-convex nature of GNNs.
To demonstrate the efficiency and effectiveness of EraEdge, we systematically explore the empirical trade-off space between unlearning efficiency (i.e., the time performance of unlearning), model accuracy (i.e., the quality of the unlearned model), and unlearning efficacy (i.e., the extent to which the unlearned model has forgotten the removed edges). Our results show that, first, while achieving similar model accuracy and unlearning efficacy as the retrained model, EraEdge is significantly faster than retraining. For example, it speeds up the training time by 5.03× for the GCN model on the Cora dataset. The speedup is even more outstanding on larger graphs; it can be two orders of magnitude on the CS graph which contains around 160K edges. Second, EraEdge outperforms GraphEraser (Chen et al., 2022) considerably in model accuracy. For example, EraEdge witnesses an increase of 50% in model accuracy on the Cora dataset compared to GraphEraser. Furthermore, EraEdge is much faster than GraphEraser, especially on large graphs. For instance, EraEdge is 5.8× faster than GraphEraser on the CS dataset. Additionally, EraEdge outperforms certified graph unlearning (CGU) (Chien et al., 2022) significantly in terms of target model accuracy and unlearning efficacy, while it demonstrates comparable edge forgetting ability to CGU.
In summary, we made the following four main contributions: 1) We cast the problem of edge unlearning as estimating the influence of a set of edges on GNNs while taking the aggregation effects of GNN models into consideration; 2) We design EraEdge, a computationally and memory efficient algorithm that applies a one-shot update to the original model by removing the estimated influence of the removed edges from the model; 3) We address several theoretical and practical challenges of deriving edge influence, and prove that EraEdge converges to the desired model under standard regularity conditions; 4) We perform an extensive set of experiments on four prominent GNN models and three benchmark graph datasets, and demonstrate the efficiency and effectiveness of EraEdge.
2 GRAPH NEURAL NETWORK
Given a graph G(V,E) that consists of a set of nodes V and their edges E, the goal of a Graph Neural Network (GNN) model is to learn a representation vector ~h (embedding) for each node v in G that can be used in downstream tasks (e.g., node classification, link prediction).
A GNN model updates the node embeddings through aggregating its neighbors’ representations. The embedding corresponding to each node vi ∈ V at layer l is updated according to vi’s graph
neighborhood (typically 1-hop neighborhood). This update operation can be expressed as follows:
H^{(l+1)} = \sigma(\mathrm{AGGREGATE}(A, H^{(l)}, \theta^{(l)})), (1)
where σ is an activation function, A is the adjacency matrix of the given graph G, and θ^{(l)} denotes the trainable parameters at layer l. The initial embeddings at l = 0 are set to the input features for all the nodes, i.e., H^{(0)} = X.
Different GNN models use different AGGREGATE functions. In this paper, we consider four representative GNN models, namely Graph Convolutional Networks (GCN) (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2018), graph attention networks (GAT) (Veličković et al., 2018), and Graph Isomorphism Network (GIN) (Xu et al., 2019). These models differ on their AGGREGATE functions. We ignore the details of their AGGREGATE functions as our unlearning methods are model agnostic, and thus are independent from these functions.
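For illustration only (not the exact layers used in our experiments), a simplified GCN-style instantiation of Eqn (1) can be sketched as below; it uses row normalization rather than GCN's symmetric normalization, and operates on a dense adjacency matrix.

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One layer of Eqn (1): H^{(l+1)} = sigma(AGGREGATE(A, H^{(l)}, theta^{(l)})),
    here with aggregation D^{-1}(A + I) H W (row-normalized neighborhood average)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, H, A):
        A_hat = A + torch.eye(A.size(0), device=A.device)  # add self-loops
        deg = A_hat.sum(dim=1, keepdim=True)                # node degrees
        H_agg = (A_hat / deg) @ H                           # average over each neighborhood
        return torch.relu(self.linear(H_agg))

# Usage sketch: H0 = X (input features), A = dense adjacency matrix.
# layer = SimpleGraphConv(X.size(1), 16); H1 = layer(X, A)
```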
After K iterations of message passing, a Readout function pools the node embeddings at the last layer and produces the final prediction results. The Readout function varies by the learning task. In this paper, we consider node classification as the learning task and the Readout function is a softmax function.
\hat{Y} = \mathrm{softmax}(H^{(K)} \theta^{(K)}). (2)
The final output of the target model for node v is a vector of probabilities, each corresponding to the predicted probability (or posterior) that v is assigned to a class. We consider cross entropy loss (Cox, 1958) which is the de-facto choice for classification tasks. In the following sections, we use L(θ; v,E) to denote the loss on node v for simplicity because only edges are directly manipulated.
3 FORMULATION OF EDGE UNLEARNING PROBLEM
Although GNNs are widely applicable to many fields, there are very few studies (Chen et al., 2022; Chien et al., 2022) on graph unlearning so far. In this section, we formulate the edge unlearning problem. Table 1 lists the notations we use in the paper. In this paper, we only consider edge unlearning. We will discuss how to extend edge unlearning to handle node unlearning in Section 7.
Let G be the set of all graphs. In this paper, we only consider undirected graphs. Let Θ be the parameter space of the GNN models. A learning algorithm AL is a function that maps an instance G(V,E) ∈ G to a parameter θ ∈ Θ. Let θOR be the parameters of AL trained on G. Any user can submit an edge unlearning request to remove specific edges from G. In practice, unlearning requests are often submitted sequentially. For efficiency, we assume these requests are processed in a batch. In response to these requests, AL has to erase the impact of these edges and produce an unlearned model. A straightforward approach is to retrain the model on G(V,E\EUL) from scratch and obtain the model parameters θRE. However, due to the high computational cost of retraining, an alternative solution is to apply an unlearning process AUL that takes EUL and θOR as the input, and outputs the unlearned model.
The retrained and unlearned models should be sufficiently close and ideally identical. There are two notions in the literature that quantify the closeness of the retrained and unlearned models: (1) the two models are indistinguishable in the parameter space, i.e., the distributions of the model parameters of the retrained and unlearned models are sufficiently close, where the distance between two distributions can be measured by the ℓ2 distance (Wu et al., 2020) or KL divergence (Golatkar et al., 2020b); (2) the two models are indistinguishable in the output space, i.e., the distributions of the learning outputs of the two models are sufficiently close, where the distance between two output distributions can be measured by either test accuracy (Thudi et al., 2021) or the privacy leakage of a membership inference attack launched on the model outputs (Graves et al., 2021; Baumhauer et al., 2020). We argue that indistinguishability in the parameter space is not suitable for GNNs, due to their non-convex loss functions (Tarun et al., 2021), as small changes in the training data can cause large changes in GNN parameters. Therefore, in this paper, we consider the indistinguishability of the output space between retrained and unlearned models as our unlearning notion. Formally, we define the edge unlearning problem as follows: Definition 1 (Edge Unlearning Problem). Given a graph G(V,E), a set of edges EUL ⊂ E that are requested to be removed from G, a graph learning algorithm AL and its readout function f, an edge unlearning algorithm AUL should satisfy the following:
P(f(\theta_{RE}) \mid G_{UL}) \approx P(f(\theta_{UL}) \mid G_{UL}), (3) where G_{UL} = G(V, E \setminus E_{UL}), and P(f(\theta) \mid G) denotes the distribution of possible outputs of the model (with parameters θ) on G.
The readout function f varies for different learning tasks. In this paper, we consider the softmax function (Eqn. (2)) as the readout function. There are various choices to measure the similarity between the output softmax vectors. We consider Jensen–Shannon divergence (JSD) in our experiments.
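Concretely, given the per-node posterior matrices produced by the retrained and unlearned models on G_UL, the divergence can be computed as below (a small utility sketch; the variable names are illustrative).

```python
import torch

def jensen_shannon_divergence(P, Q, eps=1e-12):
    """Mean JSD between two row-wise probability distributions (per-node posteriors)."""
    P, Q = P.clamp_min(eps), Q.clamp_min(eps)
    M = 0.5 * (P + Q)
    kl_pm = (P * (P / M).log()).sum(dim=1)
    kl_qm = (Q * (Q / M).log()).sum(dim=1)
    return (0.5 * kl_pm + 0.5 * kl_qm).mean()

# usage sketch: jsd = jensen_shannon_divergence(P_retrain, P_unlearn)
```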
4 MAIN ALGORITHM: EFFICIENT EDGE UNLEARNING
Given a graph G(V,E) as input, one often finds a proper model represented by θ that fits the data by minimizing an empirical loss. In this paper, we consider cross-entropy loss (Cox, 1958) for node classification as our loss function. The original model θOR is optimized by the following:
\theta_{OR} = \arg\min_{\theta} \frac{1}{|V|} \sum_{v \in V} L(\theta; v, E). (4)
Assuming a set of edges EUL is deleted from G and the new graph after this deletion is represented by GUL = G(V,E\EUL), retraining the model will give us a new model parameter θRE on GUL:
\theta_{RE} = \arg\min_{\theta} \frac{1}{|V|} \sum_{v \in V} L(\theta; v, E \setminus E_{UL}). (5)
Figure 1 gives an overview of our unlearning solution named EraEdge. A major difficulty, as expected, is that obtaining θRE is prohibitively slow for complex networks and large datasets. To overcome this difficulty, the aim of EraEdge is to identify an update to θOR through an analogous one-shot unlearning update:
\theta_{UL} = \theta_{OR} - I_{E_{UL}}, (6) where I_{E_{UL}} is the influence of EUL on the target model, i.e., the change in the model parameters caused by EUL. In general, I_{E_{UL}} is a K × d matrix, where K is the number of parameters in θOR (and in both θRE and θUL), and d is the dimension of each parameter (i.e., embedding). This update can be interpreted from
the optimization perspective that the model forgets EUL by “reversing” the influence of EUL from the model. The challenge is how to quantify IEUL to achieve the unlearning objective (Eqn. (3)). Next, we discuss the details of how to compute IEUL .
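Once I_EUL has been estimated (Sections 4.1 and 4.2), the one-shot update is a single in-place modification of the trained parameters; a sketch is shown below, assuming the influence has been flattened into one vector aligned with the parameter order. The `scale` argument is illustrative and covers both the form in Eqn (6) and the 1/|V|-scaled linear approximation given later.

```python
import torch

@torch.no_grad()
def apply_one_shot_update(model, influence_vector, scale=1.0):
    """theta_UL = theta_OR - scale * I_EUL, applied parameter tensor by parameter tensor."""
    offset = 0
    for p in model.parameters():
        numel = p.numel()
        p -= scale * influence_vector[offset:offset + numel].view_as(p)
        offset += numel
    return model
```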
Existing influence functions and their inapplicability. Influence functions (Koh & Liang, 2017) enable efficient approximation of the effect of particular training points on a model’s prediction. The general idea of influence functions is the following: let θ and θ̂ be the model parameters before and after removing a data point z; the new parameters \hat{\theta}_{\epsilon,z} after z is removed can be computed as follows:
\hat{\theta}_{\epsilon,z} = \arg\min_{\theta} \frac{1}{m} \sum_{z_i \neq z} L(\theta; z_i) + \epsilon L(\theta; z), (7)
where m is the number of data points in the original dataset, and ε is a small constant. Intuitively, the influence function computes the parameters after the removal of z by upweighting z on the parameters by some small ε.
It may seem that the influence function (Eqn. (7)) can be applied to the edge unlearning setting directly by upweighting those nodes that are included in any edge in EUL. However, this is incorrect, as removing one edge e(vi, vj) from the graph can affect not only the predictions of vi and vj but also those of the neighboring nodes of vi and vj, due to the aggregation function of GNN models.
4.1 THEORETICAL CHARACTERIZATION OF EDGE INFLUENCE ON GNNS
In general, an ℓ-layer GNN aggregates the information of the ℓ-hop neighborhood of each node. Thus removing an edge e(vi, vj) will affect not only vi and vj but also all nodes in the ℓ-hop neighborhoods of vi and vj. To capture this aggregation effect in the derivation of edge influence, we first define the set of nodes (denoted as Ve) that will be affected by removing an edge e(vi, vj) as Ve = N(vi) ∪ N(vj) ∪ {vi, vj}, where N(v) is the set of nodes connected to v within ℓ hops. Then, given a set of edges EUL ⊂ E to be removed, the set of nodes VEUL that will be affected by removing EUL is defined as follows:
V_{E_{UL}} = \bigcup_{e \in E_{UL}} V_e. (8)
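The set V_EUL can be collected with a breadth-first walk of depth ℓ from the endpoints of every removed edge; a sketch over an adjacency-list representation is given below (illustrative, not our exact implementation).

```python
from collections import deque

def affected_nodes(adj, removed_edges, num_hops):
    """V_EUL: all nodes within `num_hops` of either endpoint of a removed edge (Eqn (8)).
    `adj` maps each node to the set of its neighbors in the original graph."""
    affected = set()
    for u, v in removed_edges:
        for source in (u, v):
            affected.add(source)
            frontier, seen = deque([(source, 0)]), {source}
            while frontier:
                node, depth = frontier.popleft()
                if depth == num_hops:
                    continue
                for nbr in adj[node]:
                    if nbr not in seen:
                        seen.add(nbr)
                        affected.add(nbr)
                        frontier.append((nbr, depth + 1))
    return affected
```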
Next, we follow the data perturbation idea of influence functions (Koh & Liang, 2017), and compute the new parameters \theta_{\epsilon, V_{E_{UL}}} after the removal of EUL as follows:
\theta_{\epsilon, V_{E_{UL}}} = \arg\min_{\theta} \frac{1}{|V|} \sum_{v \in V} L(\theta; v, E) + \epsilon \sum_{v \in V_{E_{UL}}} L(\theta; v, E \setminus E_{UL}) - \epsilon \sum_{v \in V_{E_{UL}}} L(\theta; v, E). (9)
Intuitively, Eqn. (9) approximates the effect of moving an ε mass of perturbation onto VEUL, with E\EUL in place of E. Then we obtain the following theorem. Theorem 2. Given the parameters θOR obtained by AL on a graph G, and the loss function L, assume that L is twice-differentiable and convex in θ; then the influence of a set of edges EUL is
I_{E_{UL}} = -H_{\theta_{OR}}^{-1} \Big( \nabla_{\theta} \sum_{v \in V_{E_{UL}}} L(\theta_{OR}; v, E \setminus E_{UL}) - \nabla_{\theta} \sum_{v \in V_{E_{UL}}} L(\theta_{OR}; v, E) \Big), (10)
where H_{\theta_{OR}} := \nabla^2_{\theta} \frac{1}{|V|} \sum_{v \in V} L(\theta_{OR}; v, E) is the Hessian matrix of L with respect to θOR.
The proof of Theorem 2 can be found in Appendix A.1. According to Eqn (9), removing EUL is equivalent to upweighting an ε = 1/|V| mass of perturbation. Therefore, \theta_{UL} = \theta_{\epsilon, V_{E_{UL}}} when ε = 1/|V|. Finally, we have a linear approximation of θUL:
\theta_{UL} \approx \theta_{OR} + \frac{1}{|V|} I_{E_{UL}}.
Dealing with non-convexity of GNNs. Theorem 2 assumes the loss function is convex. Given the non-convexity nature of GNN models, it is hard to reach the global minimum in practice. As a result, the Hessian matrix HθOR may have negative eigenvalues. To address this issue, we adapt the damping term based solution (Koh & Liang, 2017) to prevent HθOR from having negative eigenvalues by adding a damping term to the Hessian matrix, i.e., (HθOR + λI).
4.2 TIME AND MEMORY EFFICIENT INFLUENCE ESTIMATOR
Although by Theorem 2 estimating the edge influence amounts to solving a linear system, there are several practical and theoretical challenges. First, it can well be the case that the Hessian matrix HθOR is non-invertible. This is because our loss function is non-convex with respect to θ. As a consequence, the linear system may not even have a solution. Second, even storing a Hessian matrix in memory (either CPU or GPU) is expensive: in our experiments, we will show that Hessian matrices are huge, e.g., the Hessian matrix on the Physics dataset has size around 10^6 × 10^6, which would cost about 60 GB of memory. Lastly, even under the promise that the linear system is feasible, computing the inverse of such a huge matrix is prohibitive.
Our second technical contribution thus is an algorithm that resolves all the challenges mentioned above. Claim 3. There is a computationally and memory efficient algorithm to solve the linear system of IEUL in Theorem 2.
The starting point of our algorithm is a novel perspective that solving the linear system (Eqn. (10)) can be thought of as finding a stationary point of the following quadratic function:
f(x) = \frac{1}{2} x^T A x - b^T x, (11)
with A = H_{\theta_{OR}} and b = \nabla_{\theta} \sum_{v \in V_{E_{UL}}} L(\theta_{OR}; v, E \setminus E_{UL}) - \nabla_{\theta} \sum_{v \in V_{E_{UL}}} L(\theta_{OR}; v, E). Note that even though the function f(x) may be non-convex, there is a rich literature establishing convergence guarantees to stationary points using gradient-descent-type algorithms; see e.g. (Bertsekas, 1999).
In this paper, we will employ the conjugate gradient (CG) method which exhibits promising computational efficiency for minimizing quadratic functions (Pytlak, 2008). In fact, it was well-known that as long as the step size satisfies the Wolfe conditions (Wolfe, 1969; 1971) and the objective function is Lipschitz and bounded from below, the sequence of iterates produced by CG asymptotically converges to a stationary point of f(x), which corresponds to a solution IEUL that satisfies Eqn. (10). Note that these regularity conditions are satisfied as soon as the training data are bounded. Hence, we have the following convergence guarantee. Lemma 4 (Theorem 2.1 of (Pytlak, 2008)). The CG method generates a sequence of iterates {xt}t≥1 such that limt→+∞ f(xt) = 0. In addition, the per-iteration time complexity is O(|x|) where|x| denotes the dimension of x.
We note, however, that an appealing feature of Eqn. (10) is that we do not have to find a solution with an exactly zero gradient. This enables us to terminate CG early by monitoring the magnitude of the gradients. Our empirical study also shows that CG can obtain a good approximation in a small number of iterations.
In addition, we propose a memory-efficient implementation of CG, which significantly reduces the memory cost. Lemma 5. The CG method can be implemented using O(|θ|) memory.
Proof. To see why the above lemma holds, recall that a key step of the CG update is calculating the gradient of f(x) as
\nabla f(x) = H_{\theta_{OR}} x - \Big( \nabla_{\theta} \sum_{v \in V_{E_{UL}}} L(\theta_{OR}; v, E \setminus E_{UL}) - \nabla_{\theta} \sum_{v \in V_{E_{UL}}} L(\theta_{OR}; v, E) \Big).
As H_{\theta_{OR}} \in \mathbb{R}^{|\theta| \times |\theta|}, we cannot explicitly compute H_{\theta_{OR}}. Instead, we utilize the Hessian-vector product (Pearlmutter, 1994) to approximately calculate H_{\theta_{OR}} x by
H_{\theta_{OR}} x \approx \frac{g(\theta_{OR} + r x) - g(\theta_{OR})}{r}, (12)
for some very small step size r > 0, where g(\theta) := \nabla_{\theta} \sum_{v \in V_{E_{UL}}} L(\theta; v, E \setminus E_{UL}) - \nabla_{\theta} \sum_{v \in V_{E_{UL}}} L(\theta; v, E). Note that the memory cost of evaluating the function value of g(·) is O(|θ|). Hence, Lemma 5 follows.
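Putting Lemmas 4 and 5 together, the solver only ever needs Hessian-vector products. A hedged sketch of the memory-efficient CG loop is given below; it uses PyTorch's double backward for the Hessian-vector product (an alternative to the finite-difference form of Eqn (12)), adds a damping term as in Section 4.1, and terminates early once the residual is small. The function names are illustrative, not our released implementation.

```python
import torch

def hvp(loss, params, vec):
    """Hessian-vector product H @ vec via double backward."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad(flat @ vec, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv])

def conjugate_gradient(apply_A, b, damping=0.01, max_iter=100, tol=1e-5):
    """Solve (A + damping*I) x = b while storing only a few O(|theta|) vectors."""
    x = torch.zeros_like(b)
    r = b.clone()                     # residual b - A x (x = 0 initially)
    p = r.clone()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p) + damping * p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new.sqrt() < tol:       # early termination on the residual norm
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# usage sketch (b should be detached from the autograd graph):
# x = conjugate_gradient(lambda v: hvp(loss, params, v), b.detach()); I_EUL = -x
```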
Remark 6. Observe that a trivial implementation involves storing the Hessian matrix which consumes O(|θ|2) memory. Returning to our previous example on the Physics dataset, a trivial implementation consumes 64 GB memory, while ours only needs 8 GB memory.
Proof of Claim 3. Claim 3 follows from Lemma 4 and Lemma 5.
5 EXPERIMENTS
In this section, we empirically verify the efficiency and effectiveness of our unlearning method.
5.1 EXPERIMENTAL SETUP
All the experiments are executed on a GPU server with NVIDIA A100 (40G). All the algorithms are implemented in Python with PyTorch. We set the damping term λ = 0.01 for all experiments. The link to the code and datasets will be available in the camera-ready version.
Datasets. We use three well-known datasets, namely Cora (Sen et al., 2008), Citeseer (Yang et al., 2016), and CS (Shchur et al., 2018), that are popularly used for performance evaluation of GNNs (Shchur et al., 2018; Zhang et al., 2019). The statistical information of these datasets can be found in Appendix A.2.
GNN models. We consider four representative GNN models, namely GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), and GIN (Xu et al., 2019). We configure the GNNs with one hidden layer and a softmax output layer. All GNN models are trained for 1,000 epochs with an early-stopping condition when the validation loss is not decreasing for 20 epochs. We randomly split each graph into a training set (60%), a validation set (20%), and a test set (20%). As we mainly consider the impact of structure change on GNN models, we randomly initialize the values of node features such that they follow the Gaussian distribution to eliminate the possible dominant impact of node features on model performance. More details of the model setup can be found in Appendix A.2. We also measure the model performance with original node features. The results can be found in Appendix A.5.
Picking edges for removal. We randomly pick k ∈ {100, 200, 400, 600, 800, 1,000} edges from the Cora and CiteSeer datasets, and k ∈ {1,000, 2,000, 4,000, 6,000, 8,000, 10,000} edges from the CS dataset for removal. For each setting, we randomly sample ten batches of edges, with each batch containing k edges. We report the average model performance (model accuracy, unlearning efficacy, etc.) over the ten batches.
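A sketch of the batch sampling used for the unlearning requests is given below (illustrative; edges are (u, v) tuples, and each batch is drawn independently without replacement within the batch).

```python
import random

def sample_edge_batches(edges, batch_size, num_batches, seed=0):
    """Draw `num_batches` random batches of `batch_size` edges each as unlearning requests."""
    rng = random.Random(seed)
    return [rng.sample(edges, batch_size) for _ in range(num_batches)]
```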
Metrics. We evaluate the performance of EraEdge in terms of efficiency, efficacy, and model accuracy: (1) Unlearning efficiency: we measure the running time of EraEdge and retraining time for a given set of edges; (2) Target model accuracy: we measure accuracy of node classification, i.e., the percentage of nodes that are correctly classified by the model, as the accuracy of the target model. Higher accuracy indicates better accuracy retained by the unlearned model; (3) Unlearning efficacy: we measure the distance between the output space of both retrained and unlearned models as the Jensen–Shannon divergence (JSD) between the posterior distributions output by these two models. Smaller JSD indicates a higher similarity between the two models in terms of their outputs.
Baselines. We consider baselines for both exact and approximate GNN unlearning for comparison with EraEdge. For exact GNN unlearning, we consider GraphEraser (Chen et al., 2022) as the baseline. GraphEraser has two partitioning strategies, denoted as balanced LPA (BLPA) and balanced embedding k-means (BEKM). We consider both BLPA and BEKM as baseline methods. We use the same setting for the number of shards as in (Chen et al., 2022) for both BLPA and BEKM. For approximate GNN unlearning, we consider (Chien et al., 2022) as the baseline.
5.2 PERFORMANCE OF ERAEDGE
We evaluate the performance of EraEdge on four representative GNN models and three graph datasets, and compare the performance of the unlearned model with both the retrained model and two baselines in terms of model accuracy, unlearning efficiency, and unlearning efficacy.
Model accuracy. We report the results of GNN model accuracy in Table 2 (Accuracy column) for the GCN+Cora and GraphSAGE+CS settings. The results for other settings can be found in Appendix A.3. We have the following observations. First, the model accuracy obtained by EraEdge stays very close to that of the retrained model, regardless of the number of removed edges. The difference in model accuracy between retrained and unlearned models remains negligible (in the ranges of [0.48%, 0.52%] and [0.01%, 0.2%] for the two settings, respectively). Second, EraEdge witnesses significantly higher model accuracy compared to the two baseline approaches, especially for the GCN+Cora setting. For example, both BEKM and BLPA can only deliver a model accuracy of around 48% when removing 200 edges under the GCN+Cora setting. This shows that unlearning through graph partitioning can bring a significant loss of target model accuracy. Meanwhile, EraEdge demonstrates that the model accuracy can be as high as ∼79% (a 65% improvement). Unlearning efficiency. We report the time performance results of EraEdge and retraining in Table 2 (Running time column) for the GCN+Cora and GraphSAGE+CS settings. The results of other settings can be found in Appendix A.3. We measure the running time of the two baselines as the average training time per shard, as all shards are trained in parallel. The most important observation is that EraEdge is significantly faster than retraining. For example, it speeds up the training time by 5× under the GCN+Cora setting when removing 1,000 edges, and 77× under the GraphSAGE+CS setting when removing 2,000 edges. Furthermore, EraEdge is much faster than the two baselines, especially when training large graphs. For example, EraEdge is 5.8× faster than BLPA and 3.5× faster than BEKM under the GraphSAGE+CS setting when 2,000 edges were removed.
Unlearning efficacy. Figure 2 plots the results of unlearning efficacy, which is measured as the JSD between the posterior probabilities output by the retrained and unlearned models. We observe that the JSD remains insignificant (at most 0.02) in all the settings. Furthermore, the JSD stays relatively stable as the number of removed edges increases. This demonstrates the efficacy of EraEdge: it remains close to the retrained model even when a large number of edges is removed.
Main takeaway. While demonstrating similar accuracy as retraining, EraEdge is significantly faster than retraining, where the speedup gain becomes more outstanding when more edges are
Table 3 (each row: number of removed edges, followed by MIA AUC values against the original, retrained, and unlearned models in two settings):
100  0.5913  0.5446  0.5297 | 0.6179  0.5615  0.5523
200  0.6014  0.5486  0.5471 | 0.5946  0.5659  0.5498
400  0.5978  0.5383  0.5378 | 0.5934  0.5400  0.5368
600  0.5993  0.5360  0.5383 | 0.6055  0.5471  0.5475
removed. Furthermore, EraEdge outperforms the baseline approaches considerably in both model accuracy and time performance.
5.3 TESTING OF EDGE FORGETTING THROUGH MEMBERSHIP INFERENCE ATTACKS
To empirically evaluate the extent to which the unlearned model has forgotten the removed edges, we launch a black-box edge membership inference attack (MIA) (Wu et al., 2022)1 that predicts whether particular edges exist in the training graph. We measure the attack performance as AUC of MIA. Intuitively, an AUC that is close to 50% indicates that MIA’s belief of edge existence is close to random guess.
Table 3 reports the attack performance of MIA’s inference of the removed edges EUL against both the original model and the retrained/unlearned models on the Cora dataset. First, MIA is effective at predicting the existence of EUL in the original graph, as the AUC of MIA against the original model is much higher than 0.5. Second, the ability of MIA to infer EUL from either the retrained or the unlearned model degrades, as the AUC of MIA on both retrained and unlearned models is noticeably reduced. Indeed, the AUCs of MIA for the retrained and unlearned models remain close to each other. This demonstrates that the extent to which EraEdge forgets EUL is similar to that of the retrained model.
5.4 COMPARISON WITH CERTIFIED GRAPH UNLEARNING
In this part of the experiments, we compare the performance of EraEdge with certified graph unlearning (CGU) (Chien et al., 2022). The key idea of the certified unlearning method is to add noise drawn from the Gaussian distribution to the loss function. We use µ = 0 and σ = 1 as the mean and standard deviation of the Gaussian distribution. We compare CGU and EraEdge in terms of: (1) target model accuracy, (2) unlearning efficacy (measured as the JSD between the probability output
1We use the implementation of LinkTeller available at: https://github.com/AI-secure/LinkTeller.
Table 4 (MIA AUC for the removed edges EUL):
#Removed edges   Original   Retrain   CGU      EraEdge
100              0.5913     0.5446    0.5329   0.5297
200              0.6014     0.5486    0.5485   0.5471
400              0.5978     0.5383    0.5343   0.5378
600              0.5993     0.5360    0.5434   0.5383
of retraining and unlearning models), and (3) privacy vulnerability of the removed edges against the membership inference attack.
Figure 3 (a) reports the target model accuracy by CGU and EraEdge. As we can see, while EraEdge enjoys similar target model accuracy as the retrained model, CGU suffers from significant loss of model accuracy due to added noise, where the model accuracy is 50% worse than that of both EraEdge and retraining.
Figure 3 (b) reports the unlearning efficacy of CGU and EraEdge. The results demonstrate that the model output by CGU is much farther from that of the retrained model than EraEdge's. This is consistent with the low accuracy results in Figure 3 (a).
Table 4 shows the ability of both CGU and EraEdge to forget the removed edges EUL, where the edge forgetting ability is measured as the accuracy (AUC) of the membership inference attack that predicts EUL in the training graph. We use the same membership inference attack (Wu et al., 2022) as in Section 5.3. The reported results are calculated as the average AUC over ten MIA trials. We observe that CGU and EraEdge have comparable edge forgetting ability, as MIA performance against both models is close. This demonstrates empirically that EraEdge incurs privacy risks similar to those of CGU. As shown above, the target model accuracy of EraEdge outperforms that of CGU significantly, so we believe that EraEdge better addresses the trade-off between unlearning efficacy, privacy, and model accuracy.
6 RELATED WORK
Machine unlearning. Machine unlearning aims to remove some specific information from a pretrained ML model. Several attempts have been made to make unlearning more efficient than retraining from scratch. An earlier study converts ML algorithms to statistical query (SQ) learning, so that
unlearning only needs to update the summations used by SQ learning (Cao & Yang, 2015). The SISA (sharded, isolated, sliced, and aggregated) approach was proposed recently (Bourtoule et al., 2021), where a set of constituent models, trained on disjoint data shards, are aggregated to form an ensemble model. Given an unlearning request, only the affected constituent model is retrained. Alternative machine unlearning solutions directly modify the model’s parameters to unlearn in a small number of updates (Guo et al., 2020; Neel et al., 2021; Sekhari et al., 2021). Recent studies have focused on various ML models including random forests (Brophy & Lowd, 2021; Schelter et al., 2021), k-means clustering (Ginart et al., 2019), and Bayesian inference models (Fu et al., 2021).
Machine unlearning in deep neural networks. Early work on deep machine unlearning focuses on removing information from the network weights by imposing conditions on SGD-based optimization during training (Golatkar et al., 2020a). Subsequent work (Golatkar et al., 2020b) estimates the network weights of the unlearned model. However, these methods suffer from high computational costs and constraints on the training process (Tarun et al., 2021). The amnesiac unlearning approach (Graves et al., 2021) focuses on convolutional neural networks. It cancels parameter updates from only the batches containing the removed data. However, it assumes that the data to be removed is known before the training of the original model, which does not hold in our setting where edge removal requests are unknown and unpredictable. There has also been recent empirical and theoretical work on deep network unlearning in the application domain of computer vision (Du et al., 2019; Nguyen et al., 2020). GraphEraser (Chen et al., 2022) is one of the few works that consider unlearning in GNNs. It follows the SISA approach (Bourtoule et al., 2021) and splits the graph into disjoint partitions (shards). Upon receiving an unlearning request, only the models on the affected shards are retrained. However, as splitting the training graph into disjoint partitions damages the original graph structure, GraphEraser can downgrade the accuracy of the unlearned model significantly, especially when a large number of edges is to be removed. This has been demonstrated in our experiments.
Certified machine unlearning. Certified removal (Guo et al., 2020) defines approximate unlearning with a privacy guarantee (indistinguishability of unlearned models from retrained models), where indistinguishability is defined in a manner similar to differential privacy (Dwork et al., 2006). Certified removal can be realized by adding noise sampled from either a Gaussian or a Laplace distribution to the weights (Golatkar et al., 2020a; Wu et al., 2020; Neel et al., 2021; Golatkar et al., 2021; Sekhari et al., 2021), or by adding a perturbation to the loss function (Guo et al., 2020). (Chien et al., 2022) provides the first certified GNN unlearning solution. It only considers simple graph convolutions (SGC) and their generalized PageRank (GPR) extensions. To achieve a theoretical guarantee of certified removal, it adds noise to the loss function. However, as shown in our empirical evaluation (Section 5), the certified unlearning leads to a significant loss of target model accuracy due to the added noise.
Explanations of deep ML models by influence functions. One of the challenges of deep ML models is their non-transparency, which hinders understanding of the prediction results. Recent works (Koh & Liang, 2017) adapt the concept of the influence function, a classic technique from robust statistics, to formalize the impact of a training point on a prediction. Broadly speaking, the influence function attempts to estimate the change in the model’s predictions if a particular training point is removed. Very recently, the concept of the influence function has been extended to GNNs. For instance, influence functions have been designed for GNNs to measure feature-label influence and label influence (Wang et al., 2019). Node-pair influence, i.e., the change in the prediction for node u if the features of the other node v are reweighted, has also been studied (Wu et al., 2022). Unlike these works, we estimate edge influence, i.e., the effect of removing particular edges on GNN models.
7 CONCLUSION
In this work, we study the problem of edge unlearning that aims to remove a set of target edges from GNNs. We design an approximate unlearning algorithm named EraEdge which enables fast yet effective edge unlearning in GNNs. An extensive set of experiments on four representative GNN models and three benchmark graph datasets demonstrates that EraEdge can achieve significant speedup gains over retraining without sacrificing the model accuracy too much.
There are several research directions for the future work. First, while EraEdge only considers edge unlearning, it can be easily extended to handle node unlearning, as removing a node v from a graph
is equivalent to removing all the edges that connect with v in the graph. We will investigate the feasibility and performance of node unlearning through EraEdge, and compare the performance with the existing node unlearning methods (Chien et al., 2022). Second, an important metric of unlearning performance is unlearning capacity, i.e., the maximum number of edges that can be deleted while still ensuring good model accuracy. We will investigate how EraEdge can be tuned to meet the capacity requirement. Third, we will extend the study to a relevant topic, continual learning (CL), which studies how to learn from an infinite stream of data, so that the acquired knowledge can be used for future learning (Chen & Liu, 2018). An interesting question is how to support both continual learning (Chen & Liu, 2018) and private unlearning (CLPU) (Liu et al., 2022), i.e., the model learns and remembers permanently the data samples at large, and forgets specific samples completely and privately. We will explore how to extend EraEdge to support CLPU.
A APPENDIX
A.1 PROOF OF THEOREM 2
Proof. For simplicity, we first define
R(\theta, V, E) = \sum_{v \in V} L(\theta; v, E). (13)
Then, we formulate a GNN learning process as
\theta_{OR} = \arg\min_{\theta} \frac{1}{|V|} R(\theta, V, E). (14)
Since removing edges can be considered as perturbing the input, we introduce Eqn (9),
\theta_{\epsilon} = \arg\min_{\theta} \frac{1}{|V|} \sum_{v \in V} L(\theta; v, E) + \epsilon \sum_{v \in V_{E_{UL}}} L(\theta; v, E \setminus E_{UL}) - \epsilon \sum_{v \in V_{E_{UL}}} L(\theta; v, E)
= \arg\min_{\theta} \frac{1}{|V|} R(\theta, V, E) + \epsilon R(\theta, V_{E_{UL}}, E \setminus E_{UL}) - \epsilon R(\theta, V_{E_{UL}}, E). (15)
We note that a necessary condition is that the gradient of Eqn (15) at \theta_{\epsilon} is zero. Then, we have
0 = \frac{1}{|V|} \nabla_{\theta} R(\theta_{\epsilon}, V, E) + \epsilon \nabla_{\theta} R(\theta_{\epsilon}, V_{E_{UL}}, E \setminus E_{UL}) - \epsilon \nabla_{\theta} R(\theta_{\epsilon}, V_{E_{UL}}, E). (16)
Next, we apply a Taylor expansion at \theta_{OR} and get
0 \approx \frac{1}{|V|} \nabla_{\theta} R(\theta_{OR}, V, E) + \epsilon \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E \setminus E_{UL}) - \epsilon \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E)
+ \Big[ \frac{1}{|V|} \nabla^2_{\theta} R(\theta_{OR}, V, E) + \epsilon \nabla^2_{\theta} R(\theta_{OR}, V_{E_{UL}}, E \setminus E_{UL}) - \epsilon \nabla^2_{\theta} R(\theta_{OR}, V_{E_{UL}}, E) \Big] (\theta_{\epsilon} - \theta_{OR}), (17)
where we have dropped the o(\theta_{\epsilon} - \theta_{OR}) terms for the approximation. Eqn (17) is then a linear system in \theta_{\epsilon} - \theta_{OR}, from which the influence of E_{UL} is derived. Since \theta_{OR} is the minimizer of Eqn (14), we have \frac{1}{|V|} \nabla_{\theta} R(\theta_{OR}, V, E) = 0. As \epsilon is a small value, we drop the two o(\epsilon) terms and obtain
\frac{1}{|V|} \nabla^2_{\theta} R(\theta_{OR}, V, E) (\theta_{\epsilon} - \theta_{OR}) + \epsilon \Big( \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E \setminus E_{UL}) - \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E) \Big) \approx 0. (18)
Suppose Eqn (14) is convex; then
\theta_{\epsilon} - \theta_{OR} \approx -\epsilon \Big( \frac{1}{|V|} \nabla^2_{\theta} R(\theta_{OR}, V, E) \Big)^{-1} \Big( \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E \setminus E_{UL}) - \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E) \Big). (19)
Denote
I_{E_{UL}} := \frac{d(\theta_{\epsilon} - \theta_{OR})}{d\epsilon} \Big|_{\epsilon = 0} = -H_{\theta_{OR}}^{-1} \Big( \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E \setminus E_{UL}) - \nabla_{\theta} R(\theta_{OR}, V_{E_{UL}}, E) \Big), (20)
where H_{\theta_{OR}} := \nabla^2_{\theta} \frac{1}{|V|} \sum_{v \in V} L(\theta_{OR}; v, E).
A.2 ADDITIONAL DETAILS OF EXPERIMENTAL SETUP
Description of datasets. Table 5 summarizes the statistical information of the three graph datasets (Cora, Citeseer, and CS) we used in the experiments. Cora and Citeseer datasets are citation graphs, while CS dataset is a co-author graph.
Additional details of model setup. To ensure a fair comparison between the retrained and unlearned models, we use the same model size (i.e., the same number of layers and number of neurons) for both. All GNN models are trained with a learning rate of 0.001. We train the models for 1,000 epochs, with early stopping when the validation loss does not decrease for 20 epochs.
A.3 ADDITIONAL PERFORMANCE RESULTS
Model efficiency. Figure 4 presents the model efficiency results on the three datasets. We observe that EraEdge is significantly faster than retraining. For example, EraEdge is 9.95×, 5.41×, 69.36×, and 3.12× faster than retraining on the CS dataset for the four GNN models, respectively (Figure 4 (c), (f), (i), and (l)).

Model accuracy. Figure 5 presents the results of model accuracy for all settings. First, the accuracy of the target model by EraEdge is very close to that of the retrained model. In particular, the average differences in model accuracy between the retrained and unlearned models are in the ranges of [0.11%, 0.68%], [0.02%, 0.74%], [0.06%, 0.65%], and [0.07%, 1.00%] for GCN, GAT, GraphSAGE, and GIN on Cora, [0.01%, 0.71%], [0.05%, 0.44%], [0.05%, 0.65%], and [0.02%, 1.25%] on CiteSeer, and [0.02%, 0.22%], [0.01%, 0.20%], [0.05%, 0.23%], and [0.01%, 0.22%] on CS, respectively. Furthermore, the model accuracy of the unlearned model stays close to that of the retrained model regardless of the number of removed edges. This demonstrates that EraEdge can handle the removal of a large number of edges.
A.4 SEQUENTIAL UNLEARNING (NEW)
So far we have only considered deleting a single batch of edges. In practice, there can be multiple deletion requests that ask to forget edges in a sequential fashion. Next, we focus on the scenario where multiple edge batches are removed sequentially. Specifically, we divide the to-be-removed set EUL into k > 1 disjoint batches B_1, ..., B_k, with each batch consisting of the same number of edges. For the first batch B_1, the original model θOR is the model trained on the full graph; for each subsequent batch B_i (2 ≤ i ≤ k), we treat the target model obtained from retraining/unlearning the previous batch B_{i−1} as the original model θOR and update it by removing B_i (either by retraining or by unlearning). We evaluate the target model accuracy under sequential unlearning and compare it with that under one-batch unlearning.
We set k = 4 and report in Table 6 the target model accuracy for deleting EUL in one batch and for deleting EUL in k = 4 sequential batches. We also report the target model accuracy of the retrained and unlearned models at each batch. We observe that, first, the accuracy of the unlearned model remains close to that of the retrained model at each batch during sequential removals. Second, the performance of the unlearned model after removing all k batches stays close to that of the model after single-batch unlearning. These results demonstrate that EraEdge can handle sequential deletion of multiple batches of edges.
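As a sketch, the sequential protocol above amounts to repeatedly treating the latest model as the original one; `unlearn_one_shot` below stands for the one-shot EraEdge update and, like the graph interface, is an assumed helper rather than an actual API.

```python
# Sketch of sequential edge unlearning: remove E_UL in k (nearly) equal batches,
# treating the model obtained after each batch as the "original" model for the next.
def sequential_unlearn(theta_or, graph, e_ul, k, unlearn_one_shot):
    batches = [e_ul[i::k] for i in range(k)]     # k disjoint batches of edges
    theta = theta_or
    remaining_edges = set(graph.edges)           # edges still present in the graph
    for batch in batches:
        theta = unlearn_one_shot(theta, remaining_edges, batch)
        remaining_edges -= set(batch)
    return theta
```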
A.5 UNLEARNING WITH NODE FEATURES (NEW)
In Section 5 we mainly considered node features that are randomly initialized, in order to eliminate the possible dominant impact of node features on model performance. Next, we evaluate the performance of EraEdge when the original node features are used.
[Figure 4: Unlearning time (seconds) of Retrain and EraEdge vs. number of unlearned edges, for GCN, GAT, GraphSAGE, and GIN on Cora, CiteSeer, and CS.]
[Figure 5: Model accuracy of Retrain and EraEdge vs. number of unlearned edges, for GCN, GAT, GraphSAGE, and GIN on Cora, CiteSeer, and CS.]
Figure 6 reports the target model accuracy of the unlearned model trained with and without the original node features. We have two main observations. First, the target model accuracy improves significantly when the original node features are used, which shows that the node features have a dominant impact on target model performance in this setting. Nevertheless, the target model accuracy of the unlearned model still stays close to that of the retrained model. In other words, EraEdge can still make GNNs forget the removed edges effectively even when the node features dominate the graph structure in determining model performance. | 1. What is the focus of the paper regarding graph neural networks?
2. What are the strengths and weaknesses of the proposed unlearning algorithm for graph neural networks?
3. Are there any concerns or limitations regarding the method's applicability, computational efficiency, and memory efficiency?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the paper, such as adding comparisons with other baselines or demonstrating the algorithm's efficacy further? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
An unlearning algorithm for graph neural networks is proposed in the paper. The paper tries to find the difference between the upweighted model and the original model by utilizing a Hessian-based approximation, which can be computed by solving the corresponding linear system with conjugate gradient methods. The found difference is then added to the original model to approximate the retrained parameters.
Strengths And Weaknesses
Strength
The proposed method has good applicability and can work with most existing variants of GNNs.
The proposed algorithm is computationally and memory efficient.
The algorithm is evaluated on three real datasets to demonstrate the effectiveness of the proposed approach for graph unlearning.
Weakness
Upweighting using the influence function merely approximates the retrained parameters and provides no theoretical error bound on non-convex loss functions. This work then constructs another approximation of this already fuzzy target to obtain the unlearnt model. One can hardly be convinced that such a method will result in a set of parameters that resemble the retrained model. The proposed scheme also provides no means of generating a verification of data removal, which is utterly vital for a data provider.
Adding a scaled identity matrix to the Hessian to make it positive definite only addresses the non-invertibility problem. It still destroys the basis of Theorem 2, which requires the loss function to be globally convex. This assumption is too strong to make any meaningful sense in real-world scenarios. Also, Theorem 2 seems to be only a notational adaptation of Eq. (3) in [1].
In scenarios where the graph is relatively dense and the GCN is deep (e.g., as described in [2]), the affected set of nodes, as defined in the paper, can easily become the whole graph, which largely defeats the purpose of the paper, namely fast unlearning.
Unlearning efficacy is only evaluated for the proposed model without comparison with other baselines, unlike classification accuracy. Also, experiments such as the behavior difference between the unlearned model and the retrained model on the unlearned part of the dataset should be added to further demonstrate the algorithm's efficacy.
[1] Pang Wei Koh and Percy Liang. “Understanding Black-box Predictions via Influence Functions.” In: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017. Ed. by Doina Precup and Yee Whye Teh. Vol. 70. Proceedings of Machine Learning Research. PMLR, 2017, pp. 1885–1894.
[2] Guohao Li et al. “DeepGCNs: Can GCNs Go As Deep As CNNs?” In: 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019. IEEE, 2019, pp. 9266–9275.
Clarity, Quality, Novelty And Reproducibility
It is understandable to a large extent, but parts of the paper need more work. |
ICLR | Title
Fast Yet Effective Graph Unlearning through Influence Analysis
Abstract
Recent evolving data privacy policies and regulations have led to increasing interest in the problem of removing information from a machine learning model. In this paper, we consider Graph Neural Networks (GNNs) as the target model, and study the problem of edge unlearning in GNNs, i.e., learning a new GNN model as if a specified set of edges never existed in the training graph. Despite its practical importance, the problem remains elusive due to the non-convex nature of GNNs and the large scale of the input graph. Our main technical contribution is three-fold: 1) we cast the problem of fast edge unlearning as estimating the influence of the edges to be removed and eliminating the estimated influence from the original model in one shot; 2) we design a computationally and memory efficient algorithm named EraEdge for edge influence estimation and unlearning; 3) under standard regularity conditions, we prove that EraEdge converges to the desired model. A comprehensive set of experiments on four prominent GNN models and three benchmark graph datasets demonstrates that EraEdge achieves significant speedup gains over retraining from scratch with only a small loss of model accuracy. The speedup is even more pronounced on large graphs. Furthermore, EraEdge witnesses significantly higher model accuracy than the existing GNN unlearning approaches.
1 INTRODUCTION
Recent legislation such as the General Data Protection Regulation (GDPR) (Regulation, 2018), the California Consumer Privacy Act (CCPA) (Pardau, 2018), and the Personal Information Protection and Electronic Documents Act (PIPEDA) (Parliament, 2000) requires companies to remove private user data upon request. This has prompted the discussion of “right to be forgotten” (Kwak et al., 2017), which entitles users to get more control over their data by deleting it from learned models. In case a company has already used the data collected from users to train their machine learning (ML) models, these models need to be manipulated accordingly to reflect data deletion requests.
In this paper, we consider Graph Neural Networks (GNNs) that receive frequent edge removal requests as our target ML model. For example, consider a social network graph collected from an online social network platform that witnesses frequent insertion/deletion of users (nodes) and/or change of social relations between users (edges). Some of these structural changes can be accompanied by users' requests to withdraw their data. In this paper, we only consider the requests of removing social relations (edges). Then the owner of the platform is obligated by law to remove the effect of the requested edges, so that the GNN models trained on the graph do not "remember" the corresponding social interactions.
In general, a naive solution to deleting user data from a trained ML model is to retrain the model on the training data that excludes the samples to be removed. However, retraining a model from scratch can be prohibitively expensive, especially for complex ML models and large training data. To address this issue, numerous efforts (Mahadevan & Mathioudakis, 2021; Brophy & Lowd, 2021; Cauwenberghs & Poggio, 2000; Cao & Yang, 2015) have been devoted to designing efficient unlearning methods that can remove the effect of some particular data samples without model retraining. One of the main challenges is how to estimate the effects of a given training sample on model parameters (Golatkar et al., 2021), which has led to research focusing on simpler convex learning problems such as linear/logistic regression (Mahadevan & Mathioudakis, 2021), random forests (Brophy & Lowd, 2021), support vector machines (Cauwenberghs & Poggio, 2000) and k-means clustering (Ginart et al., 2019), for which a theoretical analysis was established. Although there have
been some works on unlearning in deep neural networks (Golatkar et al., 2020a;b; 2021; Guo et al., 2020), very few works (Chen et al., 2022; Chien et al., 2022) have investigated efficient unlearning in GNNs. These works can be distinguished into two categories: exact and approximate GNN unlearning. GraphEraser (Chen et al., 2022) is an exact unlearning method that retrains the GNN model on the graph that excludes the to-be-removed edges in an efficient way. It follows the basic idea of Sharded, Isolated, Sliced, and Aggregated (SISA) method (Bourtoule et al., 2021) and splits the training graph into several disjoint shards and train each shard model separately. Upon receiving an unlearning request, the model provider retrains only the affected shard model. Despite its efficiency, partitioning training data into disjoint shards severely damages the graph structure and thus incurs significant loss of target model accuracy (will be shown in our empirical evaluation). On the other hand, approximate GNN unlearning returns a sanitized GNN model which is statistically indistinguishable from the retrained model. Certified graph unlearning (Chien et al., 2022) can provide a theoretical privacy guarantee of the approximate GNN unlearning. However, it only considers some simplified GNN architectures such as simple graph convolutions (SGC) and their generalized PageRank (GPR) extensions. We aim to design the efficient approximate unlearning solutions that are model-agnostic, i.e., without making any assumption of the nature and complexity of the model.
In this paper, we design an efficient edge unlearning algorithm named EraEdge, which directly modifies the parameters of the pre-trained model in one shot to remove the influence of the requested edges from the model. By adapting the idea of treating removal of data points as upweighting these data points (Koh & Liang, 2017), we compute the influence of the requested edges on the model as the change in model parameters due to upweighting these edges. However, due to the aggregation function of GNN models, it is non-trivial to estimate the change in GNN parameters, as removing an edge e(vi, vj) could affect not only the immediate neighbors of vi and vj but also nodes multiple hops away. Thus we design a new influence derivation method that takes the aggregation effect of GNN models into consideration when estimating the change in parameters. We address several theoretical and practical challenges of influence derivation due to the non-convex nature of GNNs.
To demonstrate the efficiency and effectiveness of EraEdge, we systematically explore the empirical trade-off space among unlearning efficiency (i.e., the time performance of unlearning), model accuracy (i.e., the quality of the unlearned model), and unlearning efficacy (i.e., the extent to which the unlearned model has forgotten the removed edges). Our results show that, first, while achieving similar model accuracy and unlearning efficacy as the retrained model, EraEdge is significantly faster than retraining. For example, it is 5.03× faster than retraining for the GCN model on the Cora dataset. The speedup is even more outstanding on larger graphs; it can be two orders of magnitude on the CS graph, which contains around 160K edges. Second, EraEdge outperforms GraphEraser (Chen et al., 2022) considerably in model accuracy. For example, EraEdge witnesses an increase of 50% in model accuracy on the Cora dataset compared to GraphEraser. Furthermore, EraEdge is much faster than GraphEraser, especially on large graphs. For instance, EraEdge is 5.8× faster than GraphEraser on the CS dataset. Additionally, EraEdge outperforms certified graph unlearning (CGU) (Chien et al., 2022) significantly in terms of target model accuracy and unlearning efficacy, while demonstrating edge forgetting ability comparable to CGU.
In summary, we made the following four main contributions: 1) We cast the problem of edge unlearning as estimating the influence of a set of edges on GNNs while taking the aggregation effects of GNN models into consideration; 2) We design EraEdge, a computationally and memory efficient algorithm that applies a one-shot update to the original model by removing the estimated influence of the removed edges from the model; 3) We address several theoretical and practical challenges of deriving edge influence, and prove that EraEdge converges to the desired model under standard regularity conditions; 4) We perform an extensive set of experiments on four prominent GNN models and three benchmark graph datasets, and demonstrate the efficiency and effectiveness of EraEdge.
2 GRAPH NEURAL NETWORK
Given a graph G(V,E) that consists of a set of nodes V and their edges E, the goal of a Graph Neural Network (GNN) model is to learn a representation vector h (an embedding) for each node v in G that can be used in downstream tasks (e.g., node classification, link prediction).
A GNN model updates the node embeddings through aggregating its neighbors’ representations. The embedding corresponding to each node vi ∈ V at layer l is updated according to vi’s graph
neighborhood (typically 1-hop neighborhood). This update operation can be expressed as follows:
H(l+1) = σ(AGGREGATE(A, H(l), θ(l))), (1)
where σ is an activation function, A is the adjacency matrix of the given graph G, and θ(l) denotes the trainable parameters at layer l. The initial embeddings at l = 0 are set to the input features of all the nodes, i.e., H(0) = X.
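For concreteness, a minimal sketch of one propagation step with a GCN-style AGGREGATE (symmetrically normalized adjacency with self-loops) is given below; the normalization choice and tensor names are illustrative assumptions, not the exact implementation used in the experiments.

```python
# One propagation step of Eqn (1) with a GCN-style AGGREGATE:
# H^{(l+1)} = relu(D^{-1/2} (A + I) D^{-1/2} H^{(l)} W^{(l)})
import torch

def gcn_layer(A, H, W):
    A_hat = A + torch.eye(A.size(0))                       # add self-loops
    deg_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)              # D^{-1/2}
    A_norm = deg_inv_sqrt[:, None] * A_hat * deg_inv_sqrt[None, :]
    return torch.relu(A_norm @ H @ W)

# Example: 4 nodes, 3-dimensional input features, 2-dimensional hidden layer.
A = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
H0 = torch.randn(4, 3)        # H^{(0)} = X
W0 = torch.randn(3, 2)        # theta^{(0)}
H1 = gcn_layer(A, H0, W0)     # node embeddings after one layer
```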
Different GNN models use different AGGREGATE functions. In this paper, we consider four representative GNN models, namely Graph Convolutional Networks (GCN) (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2018), graph attention networks (GAT) (Veličković et al., 2018), and Graph Isomorphism Network (GIN) (Xu et al., 2019). These models differ in their AGGREGATE functions. We omit the details of these functions, as our unlearning method is model-agnostic and thus independent of them.
After K iterations of message passing, a Readout function pools the node embeddings at the last layer and produces the final prediction results. The Readout function varies with the learning task. In this paper, we consider node classification as the learning task, and the Readout function is a softmax function.
Ŷ = softmax(H(K)θ(K)). (2)
The final output of the target model for node v is a vector of probabilities, each corresponding to the predicted probability (or posterior) that v is assigned to a class. We consider cross entropy loss (Cox, 1958) which is the de-facto choice for classification tasks. In the following sections, we use L(θ; v,E) to denote the loss on node v for simplicity because only edges are directly manipulated.
3 FORMULATION OF EDGE UNLEARNING PROBLEM
Although GNNs are widely applicable to many fields, there are very few studies (Chen et al., 2022; Chien et al., 2022) on graph unlearning so far. In this section, we formulate the definition of the edge unlearning problem. Table 1 lists the notations used in the paper. In this paper, we only consider edge unlearning; we discuss how to extend edge unlearning to handle node unlearning in Section 7.
Let G be the set of all graphs. In this paper, we only consider undirected graphs. Let Θ be the parameter space of the GNN models. A learning algorithm AL is a function that maps an instance G(V,E) ∈ G to a parameter θ ∈ Θ. Let θOR be the parameters of AL trained on G. Any user can submit an edge unlearning request to remove specific edges from G. In practice, unlearning requests are often submitted sequentially. For efficiency, we assume these requests are processed in a batch. In response to these requests, AL has to erase the impact of these edges and produce an unlearned model. A straightforward approach is to retrain the model on G(V,E\EUL) from scratch and obtain the model parameters θRE. However, due to the high computational cost of retraining, an alternative solution is to apply an unlearning process AUL that takes EUL and θOR as the input and outputs the unlearned model.
The retrained and unlearned models should be sufficiently close and ideally identical. There are two notions in the literature that quantify the closeness of the retrained and unlearned models: (1) both models are indistinguishable in the parameter space, i.e., the distributions of model parameters of the retrained and unlearned models are sufficiently close, where the distance between the two distributions can be measured by the ℓ2 distance (Wu et al., 2020) or the KL divergence (Golatkar et al., 2020b); (2) both models are indistinguishable in the output space, i.e., the distributions of the outputs of both models are sufficiently close, where the distance between the two output distributions can be measured by either test accuracy (Thudi et al., 2021) or the privacy leakage of a membership inference attack launched on the model outputs (Graves et al., 2021; Baumhauer et al., 2020). We argue that indistinguishability in the parameter space is not suitable for GNNs, due to their non-convex loss functions (Tarun et al., 2021), as small changes in the training data can cause large changes in GNN parameters. Therefore, in this paper, we consider indistinguishability in the output space between the retrained and unlearned models as our unlearning notion. Formally, we define the edge unlearning problem as follows: Definition 1 (Edge Unlearning Problem). Given a graph G(V,E), a set of edges EUL ⊂ E that are requested to be removed from G, a graph learning algorithm AL and its readout function f, an edge unlearning algorithm AUL should satisfy the following:
P (f(θRE)|GUL) ≈ P (f(θUL)|GUL), (3) where GUL = G(V,E\EUL), and P (f(θ)|G) denotes the distribution of possible outputs of the model (with parameters θ) on G.
The readout function f varies for different learning tasks. In this paper, we consider the softmax function (Eqn. (2)) as the readout function. There are various choices to measure the similarity between the output softmax vectors. We consider Jensen–Shannon divergence (JSD) in our experiments.
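A small sketch of this output-space closeness measure, i.e., the mean per-node JSD between the softmax posteriors of the retrained and unlearned models, is given below; the function and tensor names are illustrative.

```python
# Mean Jensen-Shannon divergence between the per-node posteriors of two models.
import torch
import torch.nn.functional as F

def mean_jsd(logits_re, logits_ul):
    p = F.softmax(logits_re, dim=-1)          # readout of the retrained model
    q = F.softmax(logits_ul, dim=-1)          # readout of the unlearned model
    m = 0.5 * (p + q)
    def kl(a, b):
        return (a * (a.clamp_min(1e-12).log() - b.clamp_min(1e-12).log())).sum(-1)
    return (0.5 * kl(p, m) + 0.5 * kl(q, m)).mean()
```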
4 MAIN ALGORITHM: EFFICIENT EDGE UNLEARNING
Given a graph G(V,E) as input, one often finds a proper model represented by θ that fits the data by minimizing an empirical loss. In this paper, we consider cross-entropy loss (Cox, 1958) for node classification as our loss function. The original model θOR is optimized by the following:
θOR = arg min_θ (1/|V|) ∑_{v∈V} L(θ; v, E). (4)
Assuming a set of edges EUL is deleted from G and the new graph after this deletion is represented by GUL = G(V,E\EUL), retraining the model will give us a new model parameter θRE on GUL:
θRE = arg min_θ (1/|V|) ∑_{v∈V} L(θ; v, E\EUL). (5)
Figure 1 gives an overview of our unlearning solution named EraEdge. A major difficulty, as expected, is that obtaining θRE is prohibitively slow for complex networks and large datasets. To overcome this difficulty, the aim of EraEdge is to identify an update to θOR through an analogous one-shot unlearning update:
θUL = θOR − IEUL, (6)
where IEUL is the influence of EUL on the target model, i.e., the change in the model parameters caused by EUL. In general, IEUL is a K×d matrix, where K is the number of parameters in θOR (and in both θRE and θUL), and d is the dimension of each parameter (i.e., embedding). This update can be interpreted from
the optimization perspective that the model forgets EUL by “reversing” the influence of EUL from the model. The challenge is how to quantify IEUL to achieve the unlearning objective (Eqn. (3)). Next, we discuss the details of how to compute IEUL .
Existing influence functions and their inapplicability. Influence functions (Koh & Liang, 2017) enable efficient approximation of the effect of particular training points on a model's prediction. The general idea of influence functions is the following: let θ and θ̂_{ε,z} be the model parameters before and after removing a data point z; the new parameters θ̂_{ε,z} after z is removed can be computed as
θ̂_{ε,z} = arg min_θ (1/m) ∑_{z_i ≠ z} L(θ; z_i) + ε L(θ; z), (7)
where m is the number of data points in the original dataset and ε is a small constant. Intuitively, the influence function computes the parameters after the removal of z by upweighting z with some small ε.
It may seem that the influence function (Eqn. (7)) can be applied to the edge unlearning setting directly by upweighting those nodes that are included in any edge in EUL. However, this is incorrect, as removing one edge e(vi, vj) from the graph can affect not only the predictions of vi and vj but also those of the neighboring nodes of vi and vj, due to the aggregation function of GNN models.
4.1 THEORETICAL CHARACTERIZATION OF EDGE INFLUENCE ON GNNS
In general, an ℓ-layer GNN aggregates the information of the ℓ-hop neighborhood of each node. Thus removing an edge e(vi, vj) will affect not only vi and vj but also all nodes in the ℓ-hop neighborhoods of vi and vj. To capture this aggregation effect in the derivation of edge influence, we first define the set of nodes (denoted as Ve) that will be affected by removing an edge e(vi, vj) as Ve = N(vi) ∪ N(vj) ∪ {vi, vj}, where N(v) is the set of nodes connected to v within ℓ hops. Then, given a set of edges EUL ⊂ E to be removed, the set of nodes VEUL that will be affected by removing EUL is defined as
V_EUL = ⋃_{e∈EUL} Ve. (8)
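A simple sketch of computing V_EUL for an ℓ-layer GNN is given below; it assumes a networkx graph and is illustrative rather than the paper's implementation.

```python
# Affected node set of Eqn (8): every node within l hops of either endpoint
# of a removed edge (including the endpoints themselves).
import networkx as nx

def affected_nodes(G, removed_edges, num_layers):
    v_eul = set()
    for u, v in removed_edges:
        for w in (u, v):
            # nodes reachable from w within `num_layers` hops (distance 0 includes w)
            v_eul |= set(nx.single_source_shortest_path_length(G, w, cutoff=num_layers))
    return v_eul
```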
Next, we follow the data perturbation idea of influence functions (Koh & Liang, 2017) and compute the new parameters θ_{ε,V_EUL} after the removal of EUL as follows:
θ_{ε,V_EUL} = arg min_θ (1/|V|) ∑_{v∈V} L(θ; v, E) + ε ∑_{v∈V_EUL} L(θ; v, E\EUL) − ε ∑_{v∈V_EUL} L(θ; v, E). (9)
Intuitively, Eqn. (9) approximates the effect of moving ε mass of perturbation onto VEUL, with E\EUL in place of E. Then we obtain the following theorem.
Theorem 2. Given the parameters θOR obtained by AL on a graph G and the loss function L, assume that L is twice-differentiable and convex in θ. Then the influence of a set of edges EUL is
I_EUL = −H_θOR^{-1} ( ∇_θ ∑_{v∈V_EUL} L(θOR; v, E\EUL) − ∇_θ ∑_{v∈V_EUL} L(θOR; v, E) ), (10)
where H_θOR := ∇²_θ (1/|V|) ∑_{v∈V} L(θOR; v, E) is the Hessian matrix of the loss with respect to θOR.
The proof of Theorem 2 can be found in Appendix A.1. According to Eqn (9), removing EUL is equivalent to upweighting ε = 1/|V| mass of perturbation. Therefore, θUL = θ_{ε,V_EUL} when ε = 1/|V|. Finally, we have a linear approximation of θUL:
θUL ≈ θOR + (1/|V|) I_EUL.
Dealing with non-convexity of GNNs. Theorem 2 assumes the loss function is convex. Given the non-convex nature of GNN models, it is hard to reach the global minimum in practice. As a result, the Hessian matrix HθOR may have negative eigenvalues. To address this issue, we adapt the damping-term-based solution (Koh & Liang, 2017), which prevents HθOR from having negative eigenvalues by adding a damping term to the Hessian matrix, i.e., (HθOR + λI).
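Putting Theorem 2, the linear approximation of θUL, and the damping term together, a conceptual sketch of the one-shot update is given below. It uses an explicit solve of the damped linear system for clarity, which Section 4.2 replaces with conjugate gradients; `grad_affected_loss`, `hessian_avg_loss`, and the graph interface are assumed helpers, not actual APIs.

```python
# Conceptual one-shot EraEdge update with the damping term lambda.
import torch

def era_edge_update(theta_or, graph, e_ul, v_eul,
                    grad_affected_loss, hessian_avg_loss, lam=0.01):
    # b = grad of the affected-node loss on E \ E_UL minus the grad on E, at theta_OR
    b = grad_affected_loss(theta_or, v_eul, graph.without_edges(e_ul)) \
        - grad_affected_loss(theta_or, v_eul, graph)
    # Damped Hessian of the averaged training loss (Eqn 20), made positive definite.
    H = hessian_avg_loss(theta_or, graph) + lam * torch.eye(theta_or.numel())
    i_eul = -torch.linalg.solve(H, b)            # influence of E_UL (Eqn 10)
    return theta_or + i_eul / graph.num_nodes    # theta_UL ≈ theta_OR + (1/|V|) I_EUL
```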
4.2 TIME AND MEMORY EFFICIENT INFLUENCE ESTIMATOR
Although by Theorem 2 estimating the edge influence amounts to solving a linear system, there are several practical and theoretical challenges. First, it can well be the case that the Hessian matrix HθOR is non-invertible. This is because our loss function is non-convex with respect to θ. As a consequence, the linear system may not even have a solution. Second, even storing a Hessian matrix in memory (either CPU or GPU) is expensive: in our experiments, we will show that Hessian matrices are huge, e.g., the Hessian matrix on the Physics dataset has size around 10^6 × 10^6, which would cost 60 GB of memory. Lastly, even under the promise that the linear system is feasible, computing the inverse of such a huge matrix is prohibitive.
Our second technical contribution thus is an algorithm that resolves all the challenges mentioned above. Claim 3. There is a computationally and memory efficient algorithm to solve the linear system of IEUL in Theorem 2.
The starting point of our algorithm is a novel perspective: solving the linear system (Eqn. (10)) can be thought of as finding a stationary point of the following quadratic function:
f(x) = (1/2) xᵀ A x − bᵀ x, (11)
with A = H_θOR and b = ∇_θ ∑_{v∈V_EUL} L(θOR; v, E\EUL) − ∇_θ ∑_{v∈V_EUL} L(θOR; v, E). Note that even if the function f(x) is non-convex, there is a rich literature establishing convergence guarantees to stationary points for gradient-descent-type algorithms; see, e.g., (Bertsekas, 1999).
In this paper, we employ the conjugate gradient (CG) method, which exhibits promising computational efficiency for minimizing quadratic functions (Pytlak, 2008). In fact, it is well known that as long as the step size satisfies the Wolfe conditions (Wolfe, 1969; 1971) and the objective function is Lipschitz and bounded from below, the sequence of iterates produced by CG asymptotically converges to a stationary point of f(x), which corresponds to a solution IEUL that satisfies Eqn. (10). Note that these regularity conditions are satisfied as soon as the training data are bounded. Hence, we have the following convergence guarantee. Lemma 4 (Theorem 2.1 of (Pytlak, 2008)). The CG method generates a sequence of iterates {x_t}_{t≥1} such that lim_{t→+∞} ‖∇f(x_t)‖ = 0. In addition, the per-iteration time complexity is O(|x|), where |x| denotes the dimension of x.
We note, however, that an appealing feature of Eqn. (10) is that we do not have to find a solution with an exactly zero gradient. This enables us to terminate CG early by monitoring the magnitude of the gradients. Our empirical study also shows that CG can obtain a good approximation in a small number of iterations.
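A minimal conjugate-gradient sketch for solving the damped system (H_θOR + λI) x = b using only Hessian-vector products and an early-stop tolerance on the residual is given below; it assumes the damped Hessian is positive definite, and the `hvp` callable is supplied by the caller.

```python
# Conjugate gradients for (H + lam*I) x = b, given only a Hessian-vector product.
import torch

def conjugate_gradient(hvp, b, lam=0.01, max_iter=100, tol=1e-5):
    x = torch.zeros_like(b)
    r = b.clone()                   # residual b - (H + lam*I) x, with x = 0
    p = r.clone()
    rs = r.dot(r)
    for _ in range(max_iter):
        Ap = hvp(p) + lam * p       # damped Hessian-vector product
        alpha = rs / p.dot(Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r.dot(r)
        if rs_new.sqrt() < tol:     # terminate early once the residual is small
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```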
In addition, we propose a memory-efficient implementation of CG, which significantly reduces the memory cost. Lemma 5. The CG method can be implemented using O(|θ|) memory.
Proof. To see why the above lemma holds, recall that a key step of the CG update is calculating the gradient of f(x) as
∇f(x) = H_θOR x − ( ∇_θ ∑_{v∈V_EUL} L(θOR; v, E\EUL) − ∇_θ ∑_{v∈V_EUL} L(θOR; v, E) ).
As H_θOR ∈ R^{|θ|×|θ|}, we cannot explicitly compute H_θOR. Instead, we utilize the Hessian-vector product (Pearlmutter, 1994) to approximately calculate H_θOR x by
H_θOR x ≈ ( g(θOR + r x) − g(θOR) ) / r, (12)
for some very small step size r > 0, where g(θ) := ∇_θ ∑_{v∈V_EUL} L(θ; v, E\EUL) − ∇_θ ∑_{v∈V_EUL} L(θ; v, E). Note that the memory cost of evaluating g(·) is O(|θ|). Hence, Lemma 5 follows.
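The finite-difference Hessian-vector product of Eqn (12) can be sketched as follows; `grad_fn` is an assumed helper that returns the relevant gradient at a given (flattened) parameter vector. Such a function can be plugged in as the `hvp` callable of the conjugate-gradient sketch above.

```python
# O(|theta|)-memory Hessian-vector product: a finite difference of gradients,
# so the Hessian matrix itself is never materialised.
def hvp_finite_difference(grad_fn, theta, x, r=1e-3):
    """Approximate H(theta) @ x as (grad(theta + r*x) - grad(theta)) / r."""
    return (grad_fn(theta + r * x) - grad_fn(theta)) / r
```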
Remark 6. Observe that a trivial implementation involves storing the Hessian matrix, which consumes O(|θ|²) memory. Returning to our previous example on the Physics dataset, a trivial implementation consumes 64 GB of memory, while ours only needs 8 GB.
Proof of Claim 3. Claim 3 follows from Lemma 4 and Lemma 5.
5 EXPERIMENTS
In this section, we empirically verify the efficiency and effectiveness of our unlearning method.
5.1 EXPERIMENTAL SETUP
All the experiments are executed on a GPU server with NVIDIA A100 (40G). All the algorithms are implemented in Python with PyTorch. We set the damping term λ = 0.01 for all experiments. The link to the code and datasets will be available in the camera-ready version.
Datasets. We use three well-known datasets, namely Cora (Sen et al., 2008), Citeseer (Yang et al., 2016), and CS (Shchur et al., 2018), that are popularly used for performance evaluation of GNNs (Shchur et al., 2018; Zhang et al., 2019). The statistical information of these datasets can be found in Appendix A.2.
GNN models. We consider four representative GNN models, namely GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), and GIN (Xu et al., 2019). We configure the GNNs with one hidden layer and a softmax output layer. All GNN models are trained for 1,000 epochs with an early-stopping condition when the validation loss is not decreasing for 20 epochs. We randomly split each graph into a training set (60%), a validation set (20%), and a test set (20%). As we mainly consider the impact of structure change on GNN models, we randomly initialize the values of node features such that they follow the Gaussian distribution to eliminate the possible dominant impact of node features on model performance. More details of the model setup can be found in Appendix A.2. We also measure the model performance with original node features. The results can be found in Appendix A.5.
Picking edges for removal. We randomly pick k ∈ {100, 200, 400, 600, 800, 1,000} edges from the Cora and CiteSeer datasets, and k ∈ {1,000, 2,000, 4,000, 6,000, 8,000, 10,000} edges from the CS dataset for removal. For each setting, we randomly sample ten batches of edges, with each batch containing k edges. We report the average model performance (model accuracy, unlearning efficacy, etc.) over the ten batches.
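A tiny sketch of this sampling procedure (ten random batches of k edges each) is given below; it is illustrative, not the paper's experiment script.

```python
# Sample `num_batches` random batches of k edges for removal.
import random

def sample_edge_batches(edges, k, num_batches=10, seed=0):
    rng = random.Random(seed)
    edge_list = list(edges)
    return [rng.sample(edge_list, k) for _ in range(num_batches)]
```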
Metrics. We evaluate the performance of EraEdge in terms of efficiency, efficacy, and model accuracy: (1) Unlearning efficiency: we measure the running time of EraEdge and retraining time for a given set of edges; (2) Target model accuracy: we measure accuracy of node classification, i.e., the percentage of nodes that are correctly classified by the model, as the accuracy of the target model. Higher accuracy indicates better accuracy retained by the unlearned model; (3) Unlearning efficacy: we measure the distance between the output space of both retrained and unlearned models as the Jensen–Shannon divergence (JSD) between the posterior distributions output by these two models. Smaller JSD indicates a higher similarity between the two models in terms of their outputs.
Baselines. We consider baselines for both exact and approximate GNN unlearning for comparison with EraEdge. For exact GNN unlearning, we consider GraphEraser (Chen et al., 2022) as the baseline. GraphEraser has two partitioning strategies, denoted balanced LPA (BLPA) and balanced embedding k-means (BEKM); we consider both BLPA and BEKM as baseline methods. We use the same setting for the number of shards as in (Chen et al., 2022) for both BLPA and BEKM. For approximate GNN unlearning, we consider (Chien et al., 2022) as the baseline.
5.2 PERFORMANCE OF ERAEDGE
We evaluate the performance of EraEdge on four representative GNN models and three graph datasets, and compare the performance of the unlearned model with both the retrained model and the two baselines in terms of model accuracy, unlearning efficiency, and unlearning efficacy.
Model accuracy. We report the results of GNN model accuracy in Table 2 (Accuracy column) for the GCN+Cora and GraphSAGE+CS settings. The results for other settings can be found in Appendix A.3. We have the following observations. First, the model accuracy obtained by EraEdge stays very close to that of the retrained model, regardless of the number of removed edges. The difference in model accuracy between the retrained and unlearned models remains negligible (in the range of [0.48%, 0.52%] and [0.01%, 0.2%] for the two settings, respectively). Second, EraEdge witnesses significantly higher model accuracy compared to the two baseline approaches, especially for the GCN+Cora setting. For example, both BEKM and BLPA can only deliver a model accuracy of around 48% when removing 200 edges under the GCN+Cora setting. This shows that unlearning through graph partitioning can bring a significant loss of target model accuracy. Meanwhile, EraEdge demonstrates that the model accuracy can be as high as ∼79% (a 65% improvement).

Unlearning efficiency. We report the time performance results of EraEdge and retraining in Table 2 (Running time column) for the GCN+Cora and GraphSAGE+CS settings. The results of other settings can be found in Appendix A.3. We measure the running time of the two baselines as the average training time per shard, as all shards are trained in parallel. The most important observation is that EraEdge is significantly faster than retraining. For example, it speeds up unlearning by 5× under the GCN+Cora setting when removing 1,000 edges, and by 77× under the GraphSAGE+CS setting when removing 2,000 edges. Furthermore, EraEdge is much faster than the two baselines, especially on large graphs. For example, EraEdge is 5.8× faster than BLPA and 3.5× faster than BEKM under the GraphSAGE+CS setting when 2,000 edges were removed.
Unlearning efficacy. Figure 2 plots the results of unlearning efficacy, which is measured as the JSD between the posterior probabilities output by the retrained and unlearned models. We observe that the JSD remains insignificant (at most 0.02) in all the settings. Furthermore, the JSD stays relatively stable as the number of removed edges increases. This demonstrates the efficacy of EraEdge: it remains close to the retrained model even when a large number of edges is removed.
Main takeaway. While demonstrating similar accuracy as retraining, EraEdge is significantly faster than retraining, and the speedup gain becomes more pronounced when more edges are removed. Furthermore, EraEdge outperforms the baseline approaches considerably in both model accuracy and time performance.
[Table 3: AUC of the edge membership inference attack on the removed edges EUL against the original model and the retrained/unlearned models on Cora, for 100, 200, 400, and 600 removed edges.]
5.3 TESTING OF EDGE FORGETTING THROUGH MEMBERSHIP INFERENCE ATTACKS
To empirically evaluate the extent to which the unlearned model has forgotten the removed edges, we launch a black-box edge membership inference attack (MIA) (Wu et al., 2022)1 that predicts whether particular edges exist in the training graph. We measure the attack performance as AUC of MIA. Intuitively, an AUC that is close to 50% indicates that MIA’s belief of edge existence is close to random guess.
Table 3 reports the attack performance of MIA's inference of the removed edges EUL against both the original model and the retrained/unlearned models on the Cora dataset. First, MIA is effective at predicting the existence of EUL in the original graph, as the AUC of MIA against the original model is much higher than 0.5. Second, the ability of MIA to infer EUL from either the retrained or the unlearned model degrades, as the AUC of MIA on both the retrained and unlearned models is noticeably reduced. Indeed, the AUCs of MIA for the retrained and unlearned models remain close to each other. This demonstrates that the extent to which EraEdge forgets EUL is similar to that of the retrained model.
5.4 COMPARISON WITH CERTIFIED GRAPH UNLEARNING
In this part of the experiments, we compare the performance of EraEdge with certified graph unlearning (CGU) (Chien et al., 2022). The key idea of the certified unlearning method is to add noise drawn from a Gaussian distribution to the loss function. We use µ = 0 and σ = 1 as the mean and standard deviation of the Gaussian distribution. We compare CGU and EraEdge in terms of: (1) target model accuracy, (2) unlearning efficacy (measured as the JSD between the probability outputs of the retrained and unlearned models), and (3) privacy vulnerability of the removed edges against the membership inference attack.
1We use the implementation of LinkTeller available at: https://github.com/AI-secure/LinkTeller.
[Table 4: AUC of the membership inference attack on the removed edges for the original model, the retrained model, CGU, and EraEdge on Cora, for 100, 200, 400, and 600 removed edges.]
Figure 3 (a) reports the target model accuracy by CGU and EraEdge. As we can see, while EraEdge enjoys similar target model accuracy as the retrained model, CGU suffers from significant loss of model accuracy due to added noise, where the model accuracy is 50% worse than that of both EraEdge and retraining.
Figure 3 (b) reports the unlearning efficacy of CGU and EraEdge. The results demonstrate that the model output by CGU is much farther away from that of the retrained model than that by EraEdge. This is consistent with the low accuracy results in Figure 3 (a).
Table 4 shows the ability to forget the removed edges EUL for both CGU and EraEdge, where the edge forgetting ability is measured as the accuracy (AUC) of the membership inference attack that predicts EUL in the training graph. We use the same membership inference attack (Wu et al., 2022) as in Section 5.3. The reported results are calculated as the average AUC of ten MIA trials. We observe that CGU and EraEdge have comparable edge forgetting ability, as the MIA performance against both models is close. This demonstrates empirically that EraEdge incurs privacy risks similar to those of CGU. As it has been shown above that the target model accuracy of EraEdge outperforms that of CGU significantly, we believe that EraEdge better addresses the trade-off between unlearning efficacy, privacy, and model accuracy.
6 RELATED WORK
Machine unlearning. Machine unlearning aims to remove some specific information from a pretrained ML model. Several attempts have been made to make unlearning more efficient than retraining from scratch. An earlier study converts ML algorithms to statistical query (SQ) learning, so that
unlearning only needs to recompute the summations used in SQ learning (Cao & Yang, 2015). The SISA (sharded, isolated, sliced, and aggregated) approach was proposed recently (Bourtoule et al., 2021), where a set of constituent models, trained on disjoint data shards, are aggregated to form an ensemble model. Given an unlearning request, only the affected constituent model is retrained. Alternative machine unlearning solutions directly modify the model's parameters to unlearn in a small number of updates (Guo et al., 2020; Neel et al., 2021; Sekhari et al., 2021). Recent studies have focused on various convex ML models, including random forests (Brophy & Lowd, 2021; Schelter et al., 2021), k-means clustering (Ginart et al., 2019), and Bayesian inference models (Fu et al., 2021).
Machine unlearning in deep neural networks. Early work on deep machine unlearning focuses on removing information from the network weights by imposing a condition on SGD-based optimization during training (Golatkar et al., 2020a). The subsequent work (Golatkar et al., 2020b) estimates the network weights of the unlearned model. However, all these methods suffer from high computational costs and constraints on the training process (Tarun et al., 2021). The amnesiac unlearning approach (Graves et al., 2021) focuses on convolutional neural networks. It cancels parameter updates from only the batches containing the removed data. However, it assumes that the data to be removed is known before the training of the original model, which does not hold in our setting, where edge removal requests are unknown and unpredictable. There has also been recent empirical and theoretical work on deep network unlearning in the application domain of computer vision (Du et al., 2019; Nguyen et al., 2020). GraphEraser (Chen et al., 2022) is one of the few works that consider unlearning in GNNs. It follows the SISA approach (Bourtoule et al., 2021) and splits the graph into disjoint partitions (shards). Upon receiving an unlearning request, only the model on the affected shard is retrained. However, as splitting the training graph into disjoint partitions damages the original graph structure, GraphEraser can degrade the accuracy of the unlearned model significantly, especially when a large number of edges is to be removed. This has been demonstrated in our experiments.
Certified machine unlearning. Certified removal (Guo et al., 2020) defines approximate unlearning with a privacy guarantee (indistinguishability of unlearned models from retrained models), where indistinguishability is defined in a similar manner to differential privacy (Dwork et al., 2006). Certified removal can be realized by adding noise sampled from either a Gaussian or a Laplace distribution to the weights (Golatkar et al., 2020a; Wu et al., 2020; Neel et al., 2021; Golatkar et al., 2021; Sekhari et al., 2021), or by adding a perturbation to the loss function (Guo et al., 2020). (Chien et al., 2022) provides the first certified GNN unlearning solution. It only considers simple graph convolutions (SGC) and their generalized PageRank (GPR) extensions. To achieve a theoretical guarantee for certified removal, it adds noise to the loss function. However, as shown in our empirical evaluation (Section 5), certified unlearning leads to a significant loss of target model accuracy due to the added noise.
Explanations of deep ML models by influence functions. One of the challenges of deep ML models is their non-transparency, which hinders understanding of the prediction results. Recent works (Koh & Liang, 2017) adapt the concept of the influence function, a classic technique from robust statistics, to formalize the impact of a training point on a prediction. Broadly speaking, the influence function attempts to estimate the change in the model's predictions if a particular training point is removed. Very recently, the concept of the influence function has been extended to GNNs. For instance, influence functions are designed for GNNs to measure feature-label influence and label influence (Wang et al., 2019). Node-pair influence, i.e., the change in the prediction for node u if the features of the other node v are reweighted, is also studied (Wu et al., 2022). Unlike these works, we estimate the edge influence, i.e., the effect of removing particular edges on GNN models.
7 CONCLUSION
In this work, we study the problem of edge unlearning, which aims to remove a set of target edges from a trained GNN. We design an approximate unlearning algorithm named EraEdge which enables fast yet effective edge unlearning in GNNs. An extensive set of experiments on four representative GNN models and three benchmark graph datasets demonstrates that EraEdge achieves significant speedups over retraining with only a small loss of model accuracy.
There are several research directions for future work. First, while EraEdge only considers edge unlearning, it can be extended to handle node unlearning, since removing a node v from a graph
is equivalent to removing all the edges incident to v. We will investigate the feasibility and performance of node unlearning through EraEdge and compare it with existing node unlearning methods (Chien et al., 2022). Second, an important metric of unlearning performance is unlearning capacity, i.e., the maximum number of edges that can be deleted while still ensuring good model accuracy. We will investigate how EraEdge can be tuned to meet a given capacity requirement. Third, we will extend the study to the related topic of continual learning (CL), which studies how to learn from an infinite stream of data so that the acquired knowledge can be used for future learning (Chen & Liu, 2018). An interesting question is how to support both continual learning and private unlearning (CLPU) (Liu et al., 2022), i.e., a model that learns and permanently remembers the data samples at large while forgetting specific samples completely and privately. We will explore how to extend EraEdge to support CLPU.
A APPENDIX
A.1 PROOF OF THEOREM 2
Proof. For simplicity, we first define
R(θ, V, E) = ∑_{v∈V} L(θ; v, E). (13)
Then, we formulate the GNN learning process as
θOR = arg min_θ (1/|V|) R(θ, V, E). (14)
Since removing edges can be considered as perturbing the input, we introduce Eqn (9):
θ_ε = arg min_θ (1/|V|) ∑_{v∈V} L(θ; v, E) + ε ∑_{v∈V_EUL} L(θ; v, E\EUL) − ε ∑_{v∈V_EUL} L(θ; v, E)
    = arg min_θ (1/|V|) R(θ, V, E) + ε R(θ, V_EUL, E\EUL) − ε R(θ, V_EUL, E). (15)
We note that a necessary condition is that the gradient of Eqn (15) at θ_ε is zero. Then we have
0 = (1/|V|) ∇_θ R(θ_ε, V, E) + ε ∇_θ R(θ_ε, V_EUL, E\EUL) − ε ∇_θ R(θ_ε, V_EUL, E). (16)
Next, we apply a Taylor expansion at θOR and get
0 ≈ (1/|V|) ∇_θ R(θOR, V, E) + ε ∇_θ R(θOR, V_EUL, E\EUL) − ε ∇_θ R(θOR, V_EUL, E)
  + [ (1/|V|) ∇²_θ R(θOR, V, E) + ε ∇²_θ R(θOR, V_EUL, E\EUL) − ε ∇²_θ R(θOR, V_EUL, E) ] (θ_ε − θOR), (17)
where we have dropped the o(‖θ_ε − θOR‖) term for the approximation. Eqn (17) is then a linear system from which I_EUL, the influence of EUL, is obtained. Since θOR minimizes Eqn (14), we have (1/|V|) ∇_θ R(θOR, V, E) = 0. As ε is a small value, we drop the two O(ε) second-order terms and obtain
(1/|V|) ∇²_θ R(θOR, V, E) (θ_ε − θOR) + ε ( ∇_θ R(θOR, V_EUL, E\EUL) − ∇_θ R(θOR, V_EUL, E) ) ≈ 0. (18)
Suppose Eqn (14) is convex; then
θ_ε − θOR ≈ −ε [ (1/|V|) ∇²_θ R(θOR, V, E) ]^{-1} ( ∇_θ R(θOR, V_EUL, E\EUL) − ∇_θ R(θOR, V_EUL, E) ). (19)
Denote
I_EUL := d(θ_ε − θOR)/dε |_{ε=0} = −H_θOR^{-1} ( ∇_θ R(θOR, V_EUL, E\EUL) − ∇_θ R(θOR, V_EUL, E) ), (20)
where H_θOR := ∇²_θ (1/|V|) ∑_{v∈V} L(θOR; v, E).
A.2 ADDITIONAL DETAILS OF EXPERIMENTAL SETUP
Description of datasets. Table 5 summarizes the statistical information of the three graph datasets (Cora, Citeseer, and CS) we used in the experiments. Cora and Citeseer datasets are citation graphs, while CS dataset is a co-author graph.
Additional details of model setup. To ensure a fair comparison between the retrained and unlearned models, we use the same model size (i.e., the same number of layers and neurons) for both. All GNN models are trained with a learning rate of 0.001. We train the models for 1,000 epochs, with early stopping when the validation loss does not decrease for 20 epochs.
A.3 ADDITIONAL PERFORMANCE RESULTS
Model efficiency. Figure 4 presents the model efficiency results on the three datasets. We observe that EraEdge is significantly faster than retraining. For example, EraEdge is 9.95×, 5.41×, 69.36×, and 3.12× faster than retraining on the CS dataset for the four GNN models, respectively (Figure 4 (c), (f), (i), and (l)).

Model accuracy. Figure 5 presents the results of model accuracy for all settings. First, the accuracy of the target model by EraEdge is very close to that of the retrained model. In particular, the average differences in model accuracy between the retrained and unlearned models are in the ranges of [0.11%, 0.68%], [0.02%, 0.74%], [0.06%, 0.65%], and [0.07%, 1.00%] for GCN, GAT, GraphSAGE, and GIN on Cora, [0.01%, 0.71%], [0.05%, 0.44%], [0.05%, 0.65%], and [0.02%, 1.25%] on CiteSeer, and [0.02%, 0.22%], [0.01%, 0.20%], [0.05%, 0.23%], and [0.01%, 0.22%] on CS, respectively. Furthermore, the model accuracy of the unlearned model stays close to that of the retrained model regardless of the number of removed edges. This demonstrates that EraEdge can handle the removal of a large number of edges.
A.4 SEQUENTIAL UNLEARNING (NEW)
So far we have only considered deleting a single batch of edges. In practice, there can be multiple deletion requests that ask to forget edges in a sequential fashion. Next, we focus on the scenario where multiple edge batches are removed sequentially. Specifically, we divide the to-be-removed set EUL into k > 1 disjoint batches B_1, ..., B_k, with each batch consisting of the same number of edges. For the first batch B_1, the original model θOR is the model trained on the full graph; for each subsequent batch B_i (2 ≤ i ≤ k), we treat the target model obtained from retraining/unlearning the previous batch B_{i−1} as the original model θOR and update it by removing B_i (either by retraining or by unlearning). We evaluate the target model accuracy under sequential unlearning and compare it with that under one-batch unlearning.
We set k = 4 and report in Table 6 the target model accuracy for deleting EUL in one batch and for deleting EUL in k = 4 sequential batches. We also report the target model accuracy of the retrained and unlearned models at each batch. We observe that, first, the accuracy of the unlearned model remains close to that of the retrained model at each batch during sequential removals. Second, the performance of the unlearned model after removing all k batches stays close to that of the model after single-batch unlearning. These results demonstrate that EraEdge can handle sequential deletion of multiple batches of edges.
A.5 UNLEARNING WITH NODE FEATURES (NEW)
In Section 5 we mainly considered node features that are randomly initialized, in order to eliminate the possible dominant impact of node features on model performance. Next, we evaluate the performance of EraEdge when the original node features are used.
[Figure 4: Unlearning time (seconds) of Retrain and EraEdge vs. number of unlearned edges, for GCN, GAT, GraphSAGE, and GIN on Cora, CiteSeer, and CS.]
[Figure 5: Model accuracy of Retrain and EraEdge vs. number of unlearned edges, for GCN, GAT, GraphSAGE, and GIN on Cora, CiteSeer, and CS.]
Figure 6 reports the target model accuracy of the unlearned model trained with and without the original node features. We have two main observations. First, the target model accuracy improves significantly when the original node features are used, which shows that the node features have a dominant impact on target model performance in this setting. Nevertheless, the target model accuracy of the unlearned model still stays close to that of the retrained model. In other words, EraEdge can still make GNNs forget the removed edges effectively even when the node features dominate the graph structure in determining model performance. | 1. What is the main contribution of the paper regarding graph neural networks?
2. What are the strengths and weaknesses of the proposed approach compared to prior works, particularly in terms of its technical idea and problem studied?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the paper, such as its claim of being the only work that considers unlearning in GNN, its narrow setting compared to other works, or its experimental methods? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work studies a notion of unlearning for graph neural networks. Basically, given a set of edges removed from the graph, it tries to address how to quickly adjust the model parameters so that the model behaves like the model retrained on the graph after edge removal, without actually retraining. The technical idea is to analyze the influence of the edge removal on the model parameters under the assumption that the objective is convex and differentiable. Experiments show some superiority of the proposed method.
Strengths And Weaknesses
Strengths:
Graph unlearning is a relatively novel concept. The problem studied here is interesting. Also, the analysis and argument sound reasonable and solid.
The paper is written very well. I appreciate the logic flow. The motivation and the exposition of the approach are clear.
Weaknesses:
Here is my biggest concern. Although I overall think the technique in this paper is reasonable and the studied problem is interesting, I have recently read a relevant paper on graph unlearning published four months ago [1], which I think studies a far more extensive setting on graph unlearning than this work. Although that work is just an arXiv paper, I cannot view it as concurrent work because the content studied in [1], in my opinion, is broader and provides more insights than the setting studied in this work. I know it is tough for the authors, but I cannot ignore this. Therefore, this work's statement that it is the only work considering unlearning in GNNs is an over-claim.
Moreover, [1] studies both node and edge unlearning, while this work only studies edge unlearning. In my opinion, node unlearning is more crucial because a user (typically corresponding to a node), if not wanting to disclose her data, will ask to remove that node from the graph. Moreover, I think [1] also provides more insight into the data through its analytic bounds, such as the dependence of unlearning performance on node degrees.
I can see that some of the detailed techniques are different, such as [1] using SGC (a linear model) while this paper uses convexity assumptions. My feeling is that if this work discussed both edge and node unlearning and also provided further insights on how graph structure affects the unlearning performance, I would appreciate the technical difference in this paper and might support an acceptance. However, the current setting of this paper is still somewhat narrow compared to [1].
This work is written well and is easy to follow. However, I feel the introduction is a little misleading. For example, after reading the introduction, I thought the paper would address non-convex settings in theory (the fourth paragraph of the intro). However, the later analysis is based on a convexity assumption. Moreover, in the intro, the authors say "empirical studies on tradeoff between unlearning efficiency, accuracy, unlearning efficacy". However, in the experiments, I can only see a list of these results without a tradeoff. My understanding of a tradeoff would be, e.g., that high accuracy/efficacy requires lower efficiency, and that the proposed method has a knob to balance these aspects. Unfortunately, I do not think the proposed approach has such flexibility.
Moreover, I am not clear how the averaged JSD is computed. I can think of multiple ways to define an averaged JSD. Do you mean averaging over testing samples, or averaging over classes? I think a math equation is needed to show this.
I think the first row of Fig. 4 has wrong subtitles.
Regarding experiments, how do you remove edges (randomly, and how many times)? Also, I do not see how the model is tuned or how the comparison between model retraining and the proposed method is made fair, e.g., same model size? How about the learning rate?
Since efficiency is one topic of interest in this work, the graphs used are in general too small.
[1] Certified Graph Unlearning, Chien et al., 2022.
Clarity, Quality, Novelty And Reproducibility
I think I have made these points clear in the above response.
Clarity: Generally easy to follow, although some explanations of the experiment settings are missing.
Quality: Okay, but not extensive enough given previous works.
Novelty: The topic is interesting, but a few contributions are over-claimed.
Reproducibility: Good, although I did not see how to tune hyperparameters, and some detailed experiment settings are missing.
ICLR | Title
Fast Yet Effective Graph Unlearning through Influence Analysis
Abstract
Recent evolving data privacy policies and regulations have led to increasing interest in the problem of removing information from a machine learning model. In this paper, we consider Graph Neural Networks (GNNs) as the target model, and study the problem of edge unlearning in GNNs, i.e., learning a new GNN model as if a specified set of edges never existed in the training graph. Despite its practical importance, the problem remains elusive due to the non-convexity nature of GNNs and the large scale of the input graph. Our main technical contribution is three-fold: 1) we cast the problem of fast edge unlearning as estimating the influence of the edges to be removed and eliminating the estimated influence from the original model in one-shot; 2) we design a computationally and memory efficient algorithm named EraEdge for edge influence estimation and unlearning; 3) under standard regularity conditions, we prove that EraEdge converges to the desired model. A comprehensive set of experiments on four prominent GNN models and three benchmark graph datasets demonstrate that EraEdge achieves significant speedup gains over retraining from scratch without sacrificing the model accuracy too much. The speedup is even more outstanding on large graphs. Furthermore, EraEdge witnesses significantly higher model accuracy than the existing GNN unlearning approaches.
1 INTRODUCTION
Recent legislation such as the General Data Protection Regulation (GDPR) (Regulation, 2018), the California Consumer Privacy Act (CCPA) (Pardau, 2018), and the Personal Information Protection and Electronic Documents Act (PIPEDA) (Parliament, 2000) requires companies to remove private user data upon request. This has prompted the discussion of “right to be forgotten” (Kwak et al., 2017), which entitles users to get more control over their data by deleting it from learned models. In case a company has already used the data collected from users to train their machine learning (ML) models, these models need to be manipulated accordingly to reflect data deletion requests.
In this paper, we consider Graph Neural Networks (GNNs) that receive frequent edge removal requests as our target ML model. For example, consider a social network graph collected from an online social network platform that witnesses frequent insertion/deletion of users (nodes) and/or change of social relations between users (edges). Some of these structural changes can be accompanied with users’ withdrawal requests of their data. In this paper, we only consider the requests of removing social relations (edges). Then the owner of the platform is obligated by the laws to remove the effect of the requested edges, so that the GNN models trained on the graph do not “remember” their corresponding social interactions.
In general, a naive solution to deleting user data from a trained ML model is to retrain the model on the training data which excludes the samples to be removed. However, retraining a model from scratch can be prohibitively expensive, especially for complex ML models and large training data. To address this issue, numerous efforts (Mahadevan & Mathioudakis, 2021; Brophy & Lowd, 2021; Cauwenberghs & Poggio, 2000; Cao & Yang, 2015) have been spent on designing efficient unlearning methods that can remove the effect of some particular data samples without model retraining. One of the main challenges is how to estimate the effects of a given training sample on model parameters (Golatkar et al., 2021), which has led to research focusing on simpler convex learning problems such as linear/logistic regression (Mahadevan & Mathioudakis, 2021), random forests (Brophy & Lowd, 2021), support vector machines (Cauwenberghs & Poggio, 2000) and k-means clustering (Ginart et al., 2019), for which a theoretical analysis was established. Although there have
been some works on unlearning in deep neural networks (Golatkar et al., 2020a;b; 2021; Guo et al., 2020), very few works (Chen et al., 2022; Chien et al., 2022) have investigated efficient unlearning in GNNs. These works can be distinguished into two categories: exact and approximate GNN unlearning. GraphEraser (Chen et al., 2022) is an exact unlearning method that retrains the GNN model on the graph that excludes the to-be-removed edges in an efficient way. It follows the basic idea of Sharded, Isolated, Sliced, and Aggregated (SISA) method (Bourtoule et al., 2021) and splits the training graph into several disjoint shards and train each shard model separately. Upon receiving an unlearning request, the model provider retrains only the affected shard model. Despite its efficiency, partitioning training data into disjoint shards severely damages the graph structure and thus incurs significant loss of target model accuracy (will be shown in our empirical evaluation). On the other hand, approximate GNN unlearning returns a sanitized GNN model which is statistically indistinguishable from the retrained model. Certified graph unlearning (Chien et al., 2022) can provide a theoretical privacy guarantee of the approximate GNN unlearning. However, it only considers some simplified GNN architectures such as simple graph convolutions (SGC) and their generalized PageRank (GPR) extensions. We aim to design the efficient approximate unlearning solutions that are model-agnostic, i.e., without making any assumption of the nature and complexity of the model.
In this paper, we design an efficient edge unlearning algorithm named EraEdge which directly modifies the parameters of the pre-trained model in one shot to remove the influence of the requested edges from the model. By adapting the idea of treating removal of data points as upweighting these data points (Koh & Liang, 2017), we compute the influence of the requested edges on the model as the change in model parameters due to upweighting these edges. However, due to the aggregation function of GNN models, it is non-trivial to estimate the change in GNN parameters, as removing an edge e(vi, vj) could affect not only the neighbors of vi and vj but also nodes multiple hops away. Thus we design a new influence derivation method that takes the aggregation effect of GNN models into consideration when estimating the change in parameters. We address several theoretical and practical challenges of influence derivation due to the non-convex nature of GNNs.
To demonstrate the efficiency and effectiveness of EraEdge, we systematically represent the empirical trade-off space between unlearning efficiency (i.e., the time performance of unlearning), model accuracy (i.e., the quality of the unlearned model), and unlearning efficacy (i.e., the extent to which the unlearned model has forgotten the removed edges). Our results show that, first, while achieving similar model accuracy and unlearning efficacy as the retrained model, EraEdge is significantly faster than retraining. For example, it speeds up the training time by 5.03× for GCN model on Cora dataset. The speedup is even more outstanding on larger graphs; it can be two orders of magnitude on CS graph which contains around 160K edges. Second, EraEdge outperforms GraphEraser (Chen et al., 2022) considerably in model accuracy. For example, EraEdge witnesses an increase of 50% in model accuracy on Cora dataset compared to GraphEraser. Furthermore, EraEdge is much faster than GraphEraser especially on large graphs. For instance, EraEdge is 5.8× faster than GraphEraser on CS dataset. Additionally, EraEdge outperforms certified graph unlearning (CGU) (Chien et al., 2022) significantly in terms of target model accuracy and unlearning efficacy, while it demonstrates comparable edge forgetting ability as CGU.
In summary, we made the following four main contributions: 1) We cast the problem of edge unlearning as estimating the influence of a set of edges on GNNs while taking the aggregation effects of GNN models into consideration; 2) We design EraEdge, a computationally and memory efficient algorithm that applies a one-shot update to the original model by removing the estimated influence of the removed edges from the model; 3) We address several theoretical and practical challenges of deriving edge influence, and prove that EraEdge converges to the desired model under standard regularity conditions; 4) We perform an extensive set of experiments on four prominent GNN models and three benchmark graph datasets, and demonstrate the efficiency and effectiveness of EraEdge.
2 GRAPH NEURAL NETWORK
Given a graph G(V,E) that consists of a set of nodes V and their edges E, the goal of a Graph Neural Network (GNN) model is to learn a representation vector h (embedding) for each node v in G that can be used in downstream tasks (e.g., node classification, link prediction).
A GNN model updates the node embeddings through aggregating its neighbors’ representations. The embedding corresponding to each node vi ∈ V at layer l is updated according to vi’s graph
neighborhood (typically 1-hop neighborhood). This update operation can be expressed as follows:
H(l+1) = σ(AGGREGATE(A,H(l), θ(l))), (1)
where σ is an activation function, A is the adjacency matrix of the given graph G, and θ(l) denotes the trainable parameters at layer l. The initial embeddings at l = 0 are set to the input features for all the nodes, i.e., H(0) = X.
Different GNN models use different AGGREGATE functions. In this paper, we consider four representative GNN models, namely Graph Convolutional Networks (GCN) (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2018), graph attention networks (GAT) (Veličković et al., 2018), and Graph Isomorphism Network (GIN) (Xu et al., 2019). These models differ on their AGGREGATE functions. We ignore the details of their AGGREGATE functions as our unlearning methods are model agnostic, and thus are independent from these functions.
After K iterations of message passing, a Readout function pools the node embeddings at the last layer and produces the final prediction results. The Readout function varies by learning task. In this paper, we consider node classification as the learning task, and the Readout function is a softmax function.
Ŷ = softmax(H(K)θ(K)). (2)
The final output of the target model for node v is a vector of probabilities, each corresponding to the predicted probability (or posterior) that v is assigned to a class. We consider cross entropy loss (Cox, 1958) which is the de-facto choice for classification tasks. In the following sections, we use L(θ; v,E) to denote the loss on node v for simplicity because only edges are directly manipulated.
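To make Eqns. (1)–(2) concrete, the following is a minimal PyTorch sketch of a one-hidden-layer GCN-style model with a softmax readout trained with cross-entropy. The symmetric normalization of A and all names here are my own assumptions; this is an illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    """One hidden layer GCN-style model: H1 = relu(A_hat X W0), Y_hat = softmax(A_hat H1 W1)."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.w0 = torch.nn.Linear(in_dim, hid_dim, bias=False)
        self.w1 = torch.nn.Linear(hid_dim, num_classes, bias=False)

    def forward(self, a_hat, x):
        h1 = torch.relu(a_hat @ self.w0(x))   # Eqn. (1): aggregate neighbours, then transform
        logits = a_hat @ self.w1(h1)
        return logits                         # softmax is applied inside the loss below

def normalized_adjacency(a):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2} (an assumed choice)."""
    a = a + torch.eye(a.shape[0])
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]

def node_classification_loss(model, a_hat, x, y, train_mask):
    """Cross-entropy over training nodes, i.e. a sum of L(theta; v, E) terms."""
    logits = model(a_hat, x)
    return F.cross_entropy(logits[train_mask], y[train_mask])
```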
3 FORMULATION OF EDGE UNLEARNING PROBLEM
Although GNNs are widely applicable to many fields, there are very few studies (Chen et al., 2022; Chien et al., 2022) on graph unlearning so far. In this section, we will formulate the definition of the edge unlearning problem. Table 1 lists the notations we use in the paper. In this paper, we only consider edge unlearning. We will discuss how to extend edge unlearning to handle node unlearning in Section 7.
Let G be the set of all graphs. In this paper, we only consider undirected graphs. Let Θ be the parameter space of the GNN models. A learning algorithm AL is a function that maps an instance G(V,E) ∈ G to a parameter θ ∈ Θ. Let θOR be the parameters of AL trained on G. Any user can submit an edge unlearning request to remove specific edges from G. In practice, unlearning requests are often submitted sequentially. For efficiency, we assume these requests are processed in a batch. As the response to these requests, AL has to erase the impacts of these edges and produce an unlearned model. A straightforward approach is to retrain the model on G(V,E\EUL) from scratch and obtain the model parameters θRE. However, due to the high computational cost of retraining, an alternative solution is to apply an unlearning process AUL that takes EUL and θOR as the input, and outputs the unlearned model.
The retrained and unlearned models should be sufficiently close and ideally identical. There are two types of notions in the literature that quantify the closeness of the retrained and unlearned models: (1) both retrained and unlearned models are indistinguishable in the parameter space, i.e., distributions of model parameters of both retrained and unlearned models are sufficiently close, where the distance between two distributions can be measured by ℓ2 distance (Wu et al., 2020) and KL divergence (Golatkar et al., 2020b); (2) both models are indistinguishable in the output space, i.e., distributions of the learning outputs by both models are sufficiently close, where the distance between two output distributions can be measured by either test accuracy (Thudi et al., 2021) or the privacy leakage of a membership inference attack launched on model outputs (Graves et al., 2021; Baumhauer et al., 2020). We argue that indistinguishability of the parameter space is not suitable for GNNs, due to their non-convex loss functions (Tarun et al., 2021), as small changes of the training data can cause large changes in GNN parameters. Therefore, in this paper, we consider the indistinguishability of the output space between retrained and unlearned models as our unlearning notion. Formally, we define the edge unlearning problem as follows: Definition 1 (Edge Unlearning Problem). Given a graph G(V,E), a set of edges EUL ⊂ E that are requested to be removed from G, a graph learning algorithm AL and its readout function f, then an edge unlearning algorithm AUL should satisfy the following:
P (f(θRE)|GUL) ≈ P (f(θUL)|GUL), (3) where GUL = G(V,E\EUL), and P (f(θ)|G) denotes the distribution of possible outputs of the model (with parameters θ) on G.
The readout function f varies for different learning tasks. In this paper, we consider the softmax function (Eqn. (2)) as the readout function. There are various choices to measure the similarity between the output softmax vectors. We consider Jensen–Shannon divergence (JSD) in our experiments.
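For concreteness, here is one plausible way (an assumption on my part, since the averaging is not spelled out here) to score the output-space closeness in Eqn. (3): the per-node Jensen–Shannon divergence between the two models' posteriors, averaged over evaluation nodes.

```python
import numpy as np

def jensen_shannon_divergence(p, q, eps=1e-12):
    """JSD between two discrete distributions p and q (1-D arrays that sum to 1)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    m = 0.5 * (p + q)
    kl_pm = np.sum(p * np.log2(p / m))
    kl_qm = np.sum(q * np.log2(q / m))
    return 0.5 * kl_pm + 0.5 * kl_qm

def average_output_jsd(posteriors_retrained, posteriors_unlearned):
    """Average per-node JSD between posteriors of the retrained and unlearned models.

    Both inputs are (num_nodes, num_classes) arrays of softmax outputs on G_UL.
    """
    per_node = [
        jensen_shannon_divergence(p, q)
        for p, q in zip(posteriors_retrained, posteriors_unlearned)
    ]
    return float(np.mean(per_node))
```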
4 MAIN ALGORITHM: EFFICIENT EDGE UNLEARNING
Given a graph G(V,E) as input, one often finds a proper model represented by θ that fits the data by minimizing an empirical loss. In this paper, we consider cross-entropy loss (Cox, 1958) for node classification as our loss function. The original model θOR is optimized by the following:
θOR = arg min_θ (1/|V|) ∑_{v∈V} L(θ; v, E). (4)
Assuming a set of edges EUL is deleted from G and the new graph after this deletion is represented by GUL = G(V,E\EUL), retraining the model will give us a new model parameter θRE on GUL:
θRE = arg min_θ (1/|V|) ∑_{v∈V} L(θ; v, E\EUL). (5)
Figure 1 gives an overview of our unlearning solution named EraEdge. A major difficulty, as expected, is that obtaining θRE is prohibitively slow for complex networks and large datasets. To overcome this difficulty, the aim of EraEdge is to identify an update to θOR through an analogous one-shot unlearning update:
θUL = θOR − IEUL, (6) where IEUL is the influence of EUL on the target model, i.e., the change on the model parameters by EUL. In general, IEUL is a K×d matrix, where K is the number of parameters in θOR (and both θRE and θUL), and d is the dimension of each parameter (i.e. embedding). This update can be interpreted from
the optimization perspective that the model forgets EUL by “reversing” the influence of EUL from the model. The challenge is how to quantify IEUL to achieve the unlearning objective (Eqn. (3)). Next, we discuss the details of how to compute IEUL .
Existing influence functions and their inapplicability. Influence functions (Koh & Liang, 2017) enable efficient approximation of the effect of some particular training points on a model’s prediction. The general idea of influence functions is the following: let θ and θ̂ be the model parameters before and after removing a data point z, the new parameters θ̂_{ε,z} after z is removed can be computed as follows:
θ̂_{ε,z} = arg min_θ (1/m) ∑_{zi≠z} L(θ; zi) + ε L(θ; z), (7)
where m is the number of data points in the original dataset, and ε is a small constant. Intuitively, the influence function computes the parameters after removal of z by upweighting z on the parameters with some small ε.
It might seem that the influence function (Eqn. (7)) can be applied to the edge unlearning setting directly by upweighting those nodes that are included in any edge in EUL. However, this is incorrect as removing one edge e(vi, vj) from the graph can affect not only the prediction of vi and vj but also those of neighboring nodes of vi and vj , due to the aggregation function of GNN models.
4.1 THEORETICAL CHARACTERIZATION OF EDGE INFLUENCE ON GNNS
In general, an ℓ-layer GNN aggregates the information of the ℓ-hop neighborhood of each node. Thus removing an edge e(vi, vj) will affect not only vi and vj but also all nodes in the ℓ-hop neighborhood of vi and vj . To capture such aggregation effect in derivation of edge influence, first, we define the set of nodes (denoted as Ve) that will be affected by removing an edge e(vi, vj) as: Ve = N (vi) ∪N (vj) ∪ {vi, vj}, where N (v) is the set of nodes connected to v within ℓ hops. Then given a set of edges EUL ⊂ E to be removed, the set of nodes VEUL that will be affected by removing EUL is defined as follows:
VEUL = ⋃_{e∈EUL} Ve. (8)
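A small illustrative sketch (the adjacency-list representation and function names are my assumptions) of how the affected node set of Eqn. (8) could be gathered by an ℓ-hop breadth-first expansion around the endpoints of each removed edge:

```python
from collections import deque

def l_hop_neighborhood(adj, start, num_hops):
    """Nodes reachable from `start` within `num_hops` hops (excluding `start` itself)."""
    seen = {start}
    frontier = deque([(start, 0)])
    reached = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == num_hops:
            continue
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                reached.add(nbr)
                frontier.append((nbr, depth + 1))
    return reached

def affected_nodes(adj, removed_edges, num_hops):
    """V_EUL in Eqn. (8): union over removed edges of the l-hop neighborhoods of both endpoints."""
    affected = set()
    for u, v in removed_edges:
        affected |= l_hop_neighborhood(adj, u, num_hops) | {u, v}
        affected |= l_hop_neighborhood(adj, v, num_hops)
    return affected
```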
Next, we follow the data perturbation idea of influence functions (Koh & Liang, 2017), and compute the new parameters θ_{ε,EUL} after the removal of EUL as follows:
θ_{ε,VEUL} = arg min_θ (1/|V|) ∑_{v∈V} L(θ; v, E) + ε ∑_{v∈VEUL} L(θ; v, E\EUL) − ε ∑_{v∈VEUL} L(θ; v, E). (9)
Intuitively, Eqn. (9) approximates the effect of moving ε mass of perturbation onto VEUL with E\EUL in place of E. Then we obtain the following theorem. Theorem 2. Given the parameters θOR obtained by AUL on a graph G, and the loss function L, assume that L is twice-differentiable and convex in θ, then the influence of a set of edges EUL is:
IEUL = −H_{θOR}^{-1} ( ∇_θ ∑_{v∈VEUL} L(θOR; v, E\EUL) − ∇_θ ∑_{v∈VEUL} L(θOR; v, E) ), (10)
where H_{θOR} := ∇²_θ (1/|V|) ∑_{v∈V} L(θOR; v, E) is the Hessian matrix of L with respect to θOR.
The proof of Theorem 2 can be found in Appendix A.1. According to Eqn (9), removing EUL is equivalent to upweighting ε = 1/|V| mass of perturbation. Therefore, θUL = θ_{ε,VEUL} when ε = 1/|V|. Finally, we have a linear approximation of θUL:
θUL ≈ θOR + (1/|V|) IEUL.
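Assuming the influence term IEUL has already been estimated (for instance with the procedure in Section 4.2) and flattened into a single vector, the linear approximation above reduces to a single in-place parameter step. The following PyTorch sketch is an illustration under my own naming, not the authors' code.

```python
import torch

@torch.no_grad()
def one_shot_unlearn(model, influence, num_nodes):
    """Apply theta_UL ≈ theta_OR + (1/|V|) * I_EUL in place.

    `influence` is a single flat tensor holding the estimated I_EUL for all
    parameters, in the same order as model.parameters().
    """
    scale = 1.0 / num_nodes
    offset = 0
    for param in model.parameters():
        numel = param.numel()
        update = influence[offset:offset + numel].view_as(param)
        param.add_(scale * update)
        offset += numel
    return model
```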
Dealing with non-convexity of GNNs. Theorem 2 assumes the loss function is convex. Given the non-convex nature of GNN models, it is hard to reach the global minimum in practice. As a result, the Hessian matrix HθOR may have negative eigenvalues. To address this issue, we adapt the damping term based solution (Koh & Liang, 2017) to prevent HθOR from having negative eigenvalues by adding a damping term to the Hessian matrix, i.e., (HθOR + λI).
4.2 TIME AND MEMORY EFFICIENT INFLUENCE ESTIMATOR
Although by Theorem 2 estimating the edge influence amounts to solving a linear system, there are several practical and theoretical challenges. First, it can well be the case that the Hessian matrix HθOR is non-invertible. This is because our loss function is non-convex with respect to θ. As a consequence, the linear system may even not have a solution. Second, even storing a Hessian matrix in memory (either CPU or GPU) is expensive: in our experiments, we will show that Hessian matrices are huge, e.g. the Hessian matrix on the Physics dataset has size around 10^6 × 10^6 which would cost 60 GB memory. Lastly, even under the promise that the linear system is feasible, computing the inverse of such a huge size matrix is prohibitive.
Our second technical contribution thus is an algorithm that resolves all the challenges mentioned above. Claim 3. There is a computationally and memory efficient algorithm to solve the linear system of IEUL in Theorem 2.
The starting point of our algorithm is a novel perspective that solving the linear system (Eqn. (10)) can be thought of as finding a stationary point of the following quadratic function:
f(x) = (1/2) xᵀAx − bᵀx, (11)
with A = HθOR and b = ∇_θ ∑_{v∈VEUL} L(θOR; v, E\EUL) − ∇_θ ∑_{v∈VEUL} L(θOR; v, E). Note that even though the function f(x) is non-convex, there is rich literature establishing convergence guarantees to stationary points using gradient-descent-type algorithms; see e.g. (Bertsekas, 1999).
In this paper, we will employ the conjugate gradient (CG) method which exhibits promising computational efficiency for minimizing quadratic functions (Pytlak, 2008). In fact, it is well known that as long as the step size satisfies the Wolfe conditions (Wolfe, 1969; 1971) and the objective function is Lipschitz and bounded from below, the sequence of iterates produced by CG asymptotically converges to a stationary point of f(x), which corresponds to a solution IEUL that satisfies Eqn. (10). Note that these regularity conditions are satisfied as soon as the training data are bounded. Hence, we have the following convergence guarantee. Lemma 4 (Theorem 2.1 of (Pytlak, 2008)). The CG method generates a sequence of iterates {xt}t≥1 such that lim_{t→+∞} ∇f(xt) = 0. In addition, the per-iteration time complexity is O(|x|) where |x| denotes the dimension of x.
We note, however, that an appealing feature of Eqn. (10) is that we do not have to find a solution with exact zero gradient. This enables us to terminate CG early by monitoring the magnitude of the gradients. Our empirical study also shows that CG can get good approximation in a small number of iterations.
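For reference, a minimal sketch of such a conjugate-gradient solve written against an abstract Hessian-vector-product callback, so the Hessian never has to be materialized; the damping term of Section 4.1 and an early-stopping tolerance are included as arguments. This is an illustration under assumed names, not the released EraEdge implementation.

```python
import torch

def conjugate_gradient(hvp, b, damping=0.01, max_iters=100, tol=1e-5):
    """Approximately solve (H + damping * I) x = b using only Hessian-vector products.

    hvp: callable mapping a flat vector v to H v (same shape as b).
    Terminates early once the residual norm falls below `tol`.
    """
    x = torch.zeros_like(b)
    r = b.clone()                 # residual b - A x, with x = 0 initially
    p = r.clone()
    rs_old = torch.dot(r, r)
    for _ in range(max_iters):
        ap = hvp(p) + damping * p
        alpha = rs_old / torch.dot(p, ap)
        x += alpha * p
        r -= alpha * ap
        rs_new = torch.dot(r, r)
        if rs_new.sqrt() < tol:   # early termination by monitoring the residual
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```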
In addition, we propose a memory-efficient implementation of CG, which significantly reduces the memory cost. Lemma 5. The CG method can be implemented using O(|θ|) memory.
Proof. To see why the above lemma holds, recall that a key step of CG update is calculating the gradient of f(x) as
∇f(x) = HθOR x − ( ∇_θ ∑_{v∈VEUL} L(θOR; v, E\EUL) − ∇_θ ∑_{v∈VEUL} L(θOR; v, E) ).
As HθOR ∈ R|θ|×|θ|, we can not explicitly compute HθOR . Instead, we utilize Hessian-vector product (Pearlmutter, 1994) to approximately calculate HθORx by
HθOR x ≈ ( g(θOR + rx) − g(θOR) ) / r, (12)
for some very small step size r > 0, where g(θ) := ∇_θ ∑_{v∈VEUL} L(θ; v, E\EUL) − ∇_θ ∑_{v∈VEUL} L(θ; v, E). Note that the memory cost of evaluating the function value of g(·) is O(|θ|). Hence, Lemma 5 follows.
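Both the finite-difference form of Eqn. (12) and an exact double-backward Hessian-vector product can be written in a few lines of PyTorch; the sketch below assumes a flat parameter vector and a loss closure, which are my own simplifications rather than the paper's code.

```python
import torch

def finite_difference_hvp(grad_fn, theta, v, r=1e-3):
    """Eqn. (12): H v ≈ (g(theta + r v) - g(theta)) / r, where grad_fn returns a flat gradient."""
    return (grad_fn(theta + r * v) - grad_fn(theta)) / r

def exact_hvp(loss_fn, params, v):
    """Exact Hessian-vector product via double backward.

    params: list of tensors with requires_grad=True; v: flat vector of matching total size.
    """
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_v = torch.dot(flat_grad, v)
    hvps = torch.autograd.grad(grad_v, params)
    return torch.cat([h.reshape(-1) for h in hvps])
```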
Remark 6. Observe that a trivial implementation involves storing the Hessian matrix which consumes O(|θ|^2) memory. Returning to our previous example on the Physics dataset, a trivial implementation consumes 64 GB memory, while ours only needs 8 GB memory.
Proof of Claim 3. Claim 3 follows from Lemma 4 and Lemma 5.
5 EXPERIMENTS
In this section, we empirically verify the efficiency and effectiveness of our unlearning method.
5.1 EXPERIMENTAL SETUP
All the experiments are executed on a GPU server with NVIDIA A100 (40G). All the algorithms are implemented in Python with PyTorch. We set the damping term λ = 0.01 for all experiments. The link to the code and datasets will be available in the camera-ready version.
Datasets. We use three well-known datasets, namely Cora (Sen et al., 2008), Citeseer (Yang et al., 2016), and CS (Shchur et al., 2018), that are popularly used for performance evaluation of GNNs (Shchur et al., 2018; Zhang et al., 2019). The statistical information of these datasets can be found in Appendix A.2.
GNN models. We consider four representative GNN models, namely GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), and GIN (Xu et al., 2019). We configure the GNNs with one hidden layer and a softmax output layer. All GNN models are trained for 1,000 epochs with an early-stopping condition when the validation loss is not decreasing for 20 epochs. We randomly split each graph into a training set (60%), a validation set (20%), and a test set (20%). As we mainly consider the impact of structure change on GNN models, we randomly initialize the values of node features such that they follow the Gaussian distribution to eliminate the possible dominant impact of node features on model performance. More details of the model setup can be found in Appendix A.2. We also measure the model performance with original node features. The results can be found in Appendix A.5.
Picking edges for removal. We randomly pick k = {100, 200, 400, 600, 800, 1,000} edges from Cora and CiteSeer datasets, and k = {1,000, 2,000, 4,000, 6,000, 8,000, 10,000} edges from CS dataset for removal. For each setting, we randomly sample ten batches of edges, with each batch containing k edges. We report the average of model performance (model accuracy, unlearning efficacy, etc.) of the ten batches.
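A tiny sketch (assumed, not taken from the paper) of this removal protocol: sampling several independent random batches of k edges from the edge list, with a fixed seed for reproducibility.

```python
import random

def sample_removal_batches(edges, k, num_batches=10, seed=0):
    """Return `num_batches` independent random batches of k edges each.

    edges: list of (u, v) tuples; each batch is sampled without replacement
    within the batch, and different batches may overlap.
    """
    rng = random.Random(seed)
    return [rng.sample(edges, k) for _ in range(num_batches)]
```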
Metrics. We evaluate the performance of EraEdge in terms of efficiency, efficacy, and model accuracy: (1) Unlearning efficiency: we measure the running time of EraEdge and retraining time for a given set of edges; (2) Target model accuracy: we measure accuracy of node classification, i.e., the percentage of nodes that are correctly classified by the model, as the accuracy of the target model. Higher accuracy indicates better accuracy retained by the unlearned model; (3) Unlearning efficacy: we measure the distance between the output space of both retrained and unlearned models as the Jensen–Shannon divergence (JSD) between the posterior distributions output by these two models. Smaller JSD indicates a higher similarity between the two models in terms of their outputs.
Baselines. We consider both baselines of exact and approximate GNN unlearning for comparison with EraEdge. For exact GNN unlearning, we consider GraphEraser (Chen et al., 2022) as the baseline. GraphEraser has two partitioning strategies denoted as balanced LPA (BLPA) and balanced embedding k-means (BEKM). We consider both BLPA and BEKM as the baseline methods. We use the same setting of number of shards as in (Chen et al., 2022) for both BLPA and BEKM. For approximate GNN unlearning, we consider (Chien et al., 2022) as the baseline.
5.2 PERFORMANCE OF ERAEDGE
We evaluate the performance of EraEdge on four representative GNN models and three graph datasets, and compare the performance of the unlearned model with both the retrained model and two baselines in terms of model accuracy, unlearning efficiency, and unlearning efficacy.
Model accuracy. We report the results of GNN model accuracy in Table 2 (Accuracy column) for GCN+Cora and GraphSAGE+CS settings. The results for other settings can be found in Appendix A.3. We have the following observations. First, the model accuracy obtained by EraEdge stays very close to that of the retrained model, regardless of the number of removed edges. The difference in model accuracy between retrained and unlearned models remains negligible (in the range of [0.48%, 0.52%] and [0.01%, 0.2%] for the two settings respectively). Second, EraEdge witnesses significantly higher model accuracy compared to the two baseline approaches, especially for the GCN+Cora setting. For example, both BEKM and BLPA can only deliver a model accuracy of around 48% when removing 200 edges under the GCN+Cora setting. This shows that unlearning through graph partitioning can bring significant loss of target model accuracy. Meanwhile, EraEdge demonstrates that the model accuracy can be as high as ∼79% (65% improvement).
Unlearning efficiency. We report the time performance results of EraEdge and retraining in Table 2 (Running time column) for GCN+Cora and GraphSAGE+CS settings. The results of other settings can be found in Appendix A.3. We measure the running time of the two baselines as the average training time per shard, as all shards are trained in parallel. The most important observation is that EraEdge is significantly faster than retraining. For example, it speeds up the training time by 5× under the GCN+Cora setting when removing 1,000 edges, and 77× under the GraphSAGE+CS setting when removing 2,000 edges. Furthermore, EraEdge is much faster than the two baselines especially when training large graphs. For example, EraEdge is 5.8× faster than BLPA and 3.5× faster than BEKM under the GraphSAGE+CS setting when 2,000 edges were removed.
Unlearning efficacy. Figure 2 plots the results of unlearning efficacy, which is measured as the JSD between the posterior probability output by both the retrained and unlearned models. We observe that JSD remains insignificant (at most 0.02) in all the settings. Furthermore, JSD stays relatively stable when the number of removed edges increases. This demonstrates the efficacy of EraEdge - it remains close to the retrained model even when a large number of edges is removed.
Main takeaway. While demonstrating similar accuracy as retraining, EraEdge is significantly faster than retraining, and the speedup gain becomes more outstanding when more edges are removed. Furthermore, EraEdge outperforms the baseline approaches considerably in both model accuracy and time performance.
[Table (likely Table 3, referenced in Section 5.3): AUC of the membership inference attack on the removed edges (Cora); rows are the number of removed edges; column headers were lost in extraction.]
100: 0.5913 0.5446 0.5297 0.6179 0.5615 0.5523
200: 0.6014 0.5486 0.5471 0.5946 0.5659 0.5498
400: 0.5978 0.5383 0.5378 0.5934 0.5400 0.5368
600: 0.5993 0.5360 0.5383 0.6055 0.5471 0.5475
5.3 TESTING OF EDGE FORGETTING THROUGH MEMBERSHIP INFERENCE ATTACKS
To empirically evaluate the extent to which the unlearned model has forgotten the removed edges, we launch a black-box edge membership inference attack (MIA) (Wu et al., 2022)1 that predicts whether particular edges exist in the training graph. We measure the attack performance as AUC of MIA. Intuitively, an AUC that is close to 50% indicates that MIA’s belief of edge existence is close to random guess.
Table 3 reports the attack performance of MIA’s inference of the removed edges EUL against both the original model and the retrained/unlearned models on Cora dataset. First, MIA is effective to predict the existence of EUL in the original graph, as the AUC of MIA against the original model is much higher than 0.5. Second, the ability of MIA inferring EUL from either the retrained or the unlearned model degrades, as the AUC of MIA on both retrained and unlearned models is noticeably reduced. Indeed, the AUC of MIA for both retrained and unlearned models remain close to each other. This demonstrates that the extent to which EraEdge forgets EUL is similar to that of the retrained model.
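For reference, a hedged sketch of how such an AUC could be computed once the attack has produced a membership score per candidate edge; the labeling convention (removed edges as positives, true non-edges as negatives) is an assumption on my part, and the paper itself relies on the LinkTeller implementation for the attack.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def edge_mia_auc(member_scores, nonmember_scores):
    """AUC of an edge membership inference attack.

    member_scores: attack scores for edges the attack should flag as present;
    nonmember_scores: scores for node pairs that are not edges. An AUC near 0.5
    means the attack cannot do better than random guessing.
    """
    member_scores = np.asarray(member_scores, dtype=float)
    nonmember_scores = np.asarray(nonmember_scores, dtype=float)
    scores = np.concatenate([member_scores, nonmember_scores])
    labels = np.concatenate([np.ones_like(member_scores), np.zeros_like(nonmember_scores)])
    return roc_auc_score(labels, scores)
```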
5.4 COMPARISON WITH CERTIFIED GRAPH UNLEARNING
In this part of the experiments, we compare the performance of EraEdge with certified graph unlearning (CGU) (Chien et al., 2022). The key idea of the certified unlearning method is to add noise drawn from the Gaussian distribution to the loss function. We use µ = 0 and σ = 1 as the mean and standard deviation of the Gaussian distribution. We compare CGU and EraEdge in terms of: (1) target model accuracy, (2) unlearning efficacy (measured as the JSD between the probability outputs of the retrained and unlearned models), and (3) privacy vulnerability of the removed edges against the membership inference attack.
1 We use the implementation of LinkTeller available at: https://github.com/AI-secure/LinkTeller.
[Table (likely Table 4, referenced below): AUC of the membership inference attack on the removed edges; rows are the number of removed edges; column headers were lost in extraction.]
100: 0.5913 0.5446 0.5329 0.5297
200: 0.6014 0.5486 0.5485 0.5471
400: 0.5978 0.5383 0.5343 0.5378
600: 0.5993 0.5360 0.5434 0.5383
Figure 3 (a) reports the target model accuracy by CGU and EraEdge. As we can see, while EraEdge enjoys similar target model accuracy as the retrained model, CGU suffers from significant loss of model accuracy due to added noise, where the model accuracy is 50% worse than that of both EraEdge and retraining.
Figure 3 (b) reports unlearning efficacy by CGU and EraEdge. The results demonstrate that the model output by CGU is much farther away from that of the retrained model than EraEdge. This is consistent with the low accuracy results in Figure 3 (a).
Table 4 shows the ability of forgetting the removed edges EUL by both CGU and EraEdge, where the edge forgetting ability is measured as the accuracy (AUC) of the membership inference attack that predicts EUL in the training graph. We use the same membership inference attack (Wu et al., 2022) as in Section 5.3. The reported results are calculated as the average AUC of ten MIA trials. We observe that CGU and EraEdge have comparable edge forgetting ability, where MIA performance against both models is close. This demonstrates empirically that EraEdge provides similar privacy risks as CGU. As it has been shown above that the target model accuracy by EraEdge outperforms that of CGU significantly, we believe that EraEdge better addresses the trade-off between unlearning efficacy, privacy, and model accuracy.
6 RELATED WORK
Machine unlearning. Machine unlearning aims to remove some specific information from a pretrained ML model. Several attempts have been made to make unlearning more efficient than retraining from scratch. An earlier study converts ML algorithms to statistical query (SQ) learning, so that
unlearning processes only need to retrain the summation of SQ learning (Cao & Yang, 2015). The concept of SISA (sharded, isolated, sliced, and aggregated) approach is proposed recently (Bourtoule et al., 2021) where a set of constituent models, trained on disjoint data shards, are aggregated to form an ensemble model. Given an unlearning request, only the affected constituent model is retrained. Alternative machine unlearning solutions directly modify the model’s parameters to unlearn in a small number of updates (Guo et al., 2020; Neel et al., 2021; Sekhari et al., 2021). Recent studies have focused on various convex ML models including random forest (Brophy & Lowd, 2021; Schelter et al., 2021), k-means clustering (Ginart et al., 2019), and Bayesian inference models (Fu et al., 2021).
Machine unlearning in deep neural networks. Early work on deep machine unlearning focuses on removing the information from the network weights by imposing a condition of SGD based optimization during training (Golatkar et al., 2020a). The subsequent work (Golatkar et al., 2020b) estimates the network weights for the unlearned mode. However, all these methods suffer from high computational costs and constraints on the training process (Tarun et al., 2021). The amnesiac unlearning approach (Graves et al., 2021) focuses on Convolutional Neural Networks. It cancels parameter updates from only the batches containing the removed data. However, it assumes that the data to be removed is known before the training of the original model, which does not hold in our setting where edge removal requests are unknown and unpredictable. There also has been recent empirical and theoretical work in developing deep network unlearning in the application domain of computer vision (Du et al., 2019; Nguyen et al., 2020). GraphEraser (Chen et al., 2022) is one of the few works that consider unlearning in GNNs. It follows the SISA approach (Bourtoule et al., 2021) and splits graph into disjoint partitions (shards). Upon receiving an unlearning request, only the model on the affected shards is retrained. However, as splitting the training graph into disjoint partitions will damage the original graph structure, GraphEraser could downgrade the accuracy of the unlearned model significantly, especially when a large number of edges is to be removed. This has been demonstrated in our experiments.
Certified machine unlearning. Certified removal (Guo et al., 2020) defines approximate unlearning with a privacy guarantee (indistinguishability of unlearned models with retrained models), where indistinguishability is defined in a similar manner as differential privacy (Dwork et al., 2006). Certified removal can be realized by adding noise sampled from either Gaussian distribution or Laplace distribution on the weights (Golatkar et al., 2020a; Wu et al., 2020; Neel et al., 2021; Golatkar et al., 2021; Sekhari et al., 2021), or adding perturbation on the loss function (Guo et al., 2020). (Chien et al., 2022) provides the first certified GNN unlearning solution. It only considers simple graph convolutions (SGC) and their generalized PageRank (GPR) extensions. To achieve a theoretical guarantee for certified removal, it adds noise to the loss function. However, as shown in our empirical evaluation (Section 5), the certified unlearning leads to significant loss of target model due to the added noise.
Explanations of deep ML models by influence functions. One of the challenges of deep ML models is its non-transparency that hinders understanding of the prediction results. Recent works (Koh & Liang, 2017) adapt the concept of influence function - a classic technique from robust statistics — to formalize the impact of a training point on a prediction. Broadly speaking, the influence function attempts to estimate the change in the model’s predictions if a particular training point is removed. Very recently, the concept of influence function has been extended to GNNs. For instance, influence functions are designed for GNNs to measure feature-label influence and label influence (Wang et al., 2019). Node-pair influence, i.e., the change in the prediction for node u if the features of the other node v are reweighted, is also studied (Wu et al., 2022). Unlike these works, we estimate the edge influence, i.e., the effect of removing particular edges on GNN models.
7 CONCLUSION
In this work, we study the problem of edge unlearning that aims to remove a set of target edges from GNNs. We design an approximate unlearning algorithm named EraEdge which enables fast yet effective edge unlearning in GNNs. An extensive set of experiments on four representative GNN models and three benchmark graph datasets demonstrates that EraEdge can achieve significant speedup gains over retraining without sacrificing the model accuracy too much.
There are several research directions for the future work. First, while EraEdge only considers edge unlearning, it can be easily extended to handle node unlearning, as removing a node v from a graph
is equivalent to removing all the edges that connect with v in the graph. We will investigate the feasibility and performance of node unlearning through EraEdge, and compare the performance with the existing node unlearning methods (Chien et al., 2022). Second, an important metric of unlearning performance is unlearning capacity, i.e., the maximum number of edges that can be deleted while still ensuring good model accuracy. We will investigate how EraEdge can be tuned to meet the capacity requirement. Third, we will extend the study to a relevant topic, continual learning (CL), which studies how to learn from an infinite stream of data, so that the acquired knowledge can be used for future learning (Chen & Liu, 2018). An interesting question is how to support both continual learning (Chen & Liu, 2018) and private unlearning (CLPU) (Liu et al., 2022), i.e., the model learns and remembers permanently the data samples at large, and forgets specific samples completely and privately. We will explore how to extend EraEdge to support CLPU.
A APPENDIX
A.1 PROOF OF THEOREM 2
Proof. For simplicity, we first define
R(θ, V, E) = ∑_{v∈V} L(θ; v, E). (13)
Then, we formulate a GNN learning process as
θOR = arg min_θ (1/|V|) R(θ, V, E). (14)
Since removing edges can be considered as perturbing the input, we introduce Eqn 9,
θ_ε = arg min_θ (1/|V|) ∑_{v∈V} L(θ; v, E) + ε ∑_{v∈VEUL} L(θ; v, E\EUL) − ε ∑_{v∈VEUL} L(θ; v, E)
= arg min_θ (1/|V|) R(θ, V, E) + ε R(θ, VEUL, E\EUL) − ε R(θ, VEUL, E). (15)
We note a necessary condition is that the gradient of Eqn 15 at θ_ε is zero. Then, we have
0 = (1/|V|) ∇_θ R(θ_ε, V, E) + ε ∇_θ R(θ_ε, VEUL, E\EUL) − ε ∇_θ R(θ_ε, VEUL, E). (16)
Next, we apply Taylor series at θOR and we get
0 ≈ (1/|V|) ∇_θ R(θOR, V, E) + ε ∇_θ R(θOR, VEUL, E\EUL) − ε ∇_θ R(θOR, VEUL, E)
+ [ (1/|V|) ∇²_θ R(θOR, V, E) + ε ∇²_θ R(θOR, VEUL, E\EUL) − ε ∇²_θ R(θOR, VEUL, E) ] (θ_ε − θOR), (17)
where we have dropped o(θ_ε − θOR) for approximation. Then Eqn (17) is a linear system in IEUL, the influence of EUL. Since θOR is the minimum of Eqn (14), we have (1/|V|) ∇_θ R(θOR, V, E) = 0. As ε is a small value, we drop the two o(ε) terms and have the following:
(1/|V|) ∇²_θ R(θOR, V, E)(θ_ε − θOR) + ε ( ∇_θ R(θOR, VEUL, E\EUL) − ∇_θ R(θOR, VEUL, E) ) ≈ 0. (18)
Suppose Eqn (14) is convex, then
θ_ε − θOR ≈ −ε ( (1/|V|) ∇²_θ R(θOR, V, E) )^{-1} ( ∇_θ R(θOR, VEUL, E\EUL) − ∇_θ R(θOR, VEUL, E) ). (19)
Denote
IEUL := d(θ_ε − θOR)/dε |_{ε=0} = −H_{θOR}^{-1} ( ∇_θ R(θOR, VEUL, E\EUL) − ∇_θ R(θOR, VEUL, E) ), (20)
where H_{θOR} := ∇²_θ (1/|V|) ∑_{v∈V} L(θOR; v, E).
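To make the first-order argument above concrete, the following self-contained sketch checks the same style of influence approximation on a tiny convex problem (ridge regression with a removed training point rather than removed edges), where the exact retrained solution is available in closed form. It illustrates the proof idea only, not EraEdge itself; all numbers and names are my own.

```python
import numpy as np

def ridge_fit(x, y, lam):
    """Minimizer of 0.5*||X w - y||^2 + 0.5*lam*||w||^2 (closed form)."""
    return np.linalg.solve(x.T @ x + lam * np.eye(x.shape[1]), x.T @ y)

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1.0
x = rng.normal(size=(n, d))
y = x @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

w_full = ridge_fit(x, y, lam)

# Exact "retraining" after removing sample j.
j = 0
w_retrain = ridge_fit(np.delete(x, j, axis=0), np.delete(y, j), lam)

# One-shot influence update analogous in spirit to Eqns. (17)-(20): reverse the
# removed sample's gradient through the full-data Hessian.
hessian = x.T @ x + lam * np.eye(d)
grad_j = (x[j] @ w_full - y[j]) * x[j]
w_unlearn = w_full + np.linalg.solve(hessian, grad_j)

print(np.linalg.norm(w_retrain - w_full))     # how much retraining actually moves the solution
print(np.linalg.norm(w_retrain - w_unlearn))  # noticeably smaller: first-order approximation error
```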
A.2 ADDITIONAL DETAILS OF EXPERIMENTAL SETUP
Description of datasets. Table 5 summarizes the statistical information of the three graph datasets (Cora, Citeseer, and CS) we used in the experiments. Cora and Citeseer datasets are citation graphs, while CS dataset is a co-author graph.
Additional details of model setup. To ensure a fair comparison between the retrained and unlearned models, we use the same model size (i.e., same number of layers and number of neurons) for both the retrained and unlearned models. All GNN models are trained with a learning rate of 0.001. We train the models for 1,000 epochs, with early stopping when the validation loss does not decrease for 20 epochs.
A.3 ADDITIONAL PERFORMANCE RESULTS
Model efficiency. Figure 4 presents the model efficiency results on the three datasets. We observe that EraEdge is significantly faster than retraining. For example, EraEdge is 9.95×, 5.41×, 69.36×, and 3.12× faster than retraining on the CS dataset, respectively (Figure 4 (c), (f), (i), and (l)).
Model accuracy. Figure 5 presents the results of model accuracy for all settings. First, the accuracy of the target model by EraEdge is very close to that by the retrained model. In particular, the average difference in model accuracy between retrained and unlearned models is in the range of [0.11%, 0.68%], [0.02%, 0.74%], [0.06%, 0.65%] and [0.07%, 1.00%] for GCN, GAT, GraphSAGE, and GIN on Cora, [0.01%, 0.71%], [0.05%, 0.44%], [0.05%, 0.65%], and [0.02%, 1.25%] on CiteSeer, and [0.02%, 0.22%], [0.01%, 0.20%], [0.05%, 0.23%], and [0.01%, 0.22%] on CS, respectively. Furthermore, the model accuracy of the unlearned model stays close to that of the retrained model, regardless of the number of removed edges. This demonstrates that EraEdge can handle the removal of a large number of edges.
A.4 SEQUENTIAL UNLEARNING (NEW)
So far we only considered deleting one batch of edges. In practice, there can be multiple batch deletion requests to forget the edges in a sequential fashion. Next, we focus on the scenario where multiple edge batches are removed sequentially. Specifically, we divide the to-be-removed EUL into k > 1 disjoint batches {Bi}_{i=1}^k, with each batch consisting of the same number of edges. For each batch Bi (1 ≤ i ≤ k − 1), we consider the target model obtained from retraining/unlearning of the previous batch Bi−1 as the original model θOR, and update θOR by removing Bi (either by retraining or unlearning). We evaluate the target model accuracy under sequential unlearning and compare it with that under one-batch unlearning; a schematic sketch of the protocol is given after this paragraph.
We consider k = 4, and report the target model accuracy for deleting EUL in one batch and deleting EUL in k = 4 batches in Table 6. We also report the target model accuracy of the retrained and unlearned models at each batch. We observe that, first, the accuracy of the unlearned model remains close to that of the retrained model at each batch during sequential removals. Second, the performance of the unlearned model after removing k batches stays close to that of the model after single-batch unlearning. These results demonstrate that EraEdge can handle sequential deletion of multiple batches of edges.
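A schematic sketch of this sequential protocol (all function names are placeholders rather than the paper's API): the removed edges are split into k disjoint batches, and each batch is unlearned against the model produced by the previous step.

```python
def split_into_batches(edges, k):
    """Split the to-be-removed edges into k disjoint, (nearly) equal-sized batches."""
    size = len(edges) // k
    return [edges[i * size:(i + 1) * size] for i in range(k)]

def sequential_unlearn(model, graph, removed_edges, k, unlearn_step):
    """Apply `unlearn_step(model, graph, batch)` once per batch, in order.

    `unlearn_step` stands in for one EraEdge-style update; it returns the updated
    model and is handed the graph as it stood before the current batch was removed.
    """
    for batch in split_into_batches(removed_edges, k):
        model = unlearn_step(model, graph, batch)
        graph = graph.remove_edges(batch)  # placeholder for dropping the batch from the graph
    return model, graph
```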
A.5 UNLEARNING WITH NODE FEATURES (NEW)
In Section 5 we mainly considered node features that are randomly initialized to eliminate the possible dominant impact of node features on model performance. Next, we evaluate the performance of EraEdge that uses the original node features.
[Figure: unlearning time (seconds) vs. number of unlearned edges, Retrain vs. EraEdge, for GCN, GAT, GraphSAGE, and GIN on (a) Cora, (b) CiteSeer, and (c) CS; axis tick values omitted.]
[Figure: model accuracy vs. number of unlearned edges, Retrain vs. EraEdge, for GCN, GAT, GraphSAGE, and GIN on (a) Cora, (b) CiteSeer, and (c) CS; axis tick values omitted.]
Figure 6 reports the target model accuracy of the unlearned model that is trained with or without the node features. We have the following main observations. First, the target model accuracy improves significantly when the original node features are used. This shows that the node features have dominant importance for the target model performance in this setting. However, the target model accuracy of the unlearned model still stays close to that of the retrained model. In other words, EraEdge can still make GNNs forget the edges effectively even when node features have dominant importance over the graph structure for model performance. | 1. What is the focus and contribution of the paper on machine unlearning?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its definition of unlearning and privacy concerns?
3. How does the paper compare with other works in terms of related work discussion and experimental evaluation?
4. What are the limitations of the proposed method, especially regarding its heuristic-based nature and lack of theoretical guarantee?
5. How can the authors improve the privacy guarantees of the proposed method, for example, by incorporating differential privacy or conducting extensive experiments against membership inference attacks? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors study the machine unlearning problem on graphs, where they focus on the edge unlearning problem. The proposed method, EraEdge, is based on subtracting the influence of the unlearned edges in a heuristic manner. The experiments demonstrate the efficiency and accuracy of the proposed method. They also show that the output of the unlearned model is close to that of the model retrained from scratch.
Strengths And Weaknesses
Strengths
EraEdge can be applied to non-linear models such as GNNs.
The methods seem to be efficient in terms of time complexity.
The problem of machine unlearning on graphs is an important problem.
Weaknesses
The proposed unlearning definition is not rigorous and heuristic-based. Also, having approximately close model output does not guarantee privacy.
The proposed method is an approximate unlearning method without theoretical guarantee (i.e. heuristic based).
The related works about machine unlearning on graphs are not extensive enough. Also, differential privacy based GNNs should also be discussed.
In the experiment, the authors use random Gaussian vectors as node features instead of the default features. What do the results look like if the default features are used?
(Minor) The paper focuses on the edge unlearning problem, while the really important problem is the node unlearning problem.
Detail comments
While the problem of machine unlearning on graphs is very important, one should be extremely careful when claiming that a method can achieve unlearning. Machine unlearning can be roughly divided into two categories, exact and approximate unlearning. For exact unlearning methods, we require the unlearned models to be identical (in distribution) to the one retrained from scratch; examples include the sharding-based method mentioned by the authors. Approximate unlearning requires the unlearned model to be indistinguishable from the one retrained from scratch. Note that one should be extremely careful when defining the indistinguishability, as an inappropriate definition could lead to zero privacy in some cases (see [1] for a simple example). Apparently, the proposed definition of unlearning belongs to approximate unlearning, and the authors should clearly specify this. Otherwise, the paper can be misleading to the readers.
One rigorous definition of approximate unlearning is via a differential-privacy-type definition in the parameter space, which was proposed in [1]. The proposed approximate unlearning method therein comes with a differential-privacy-type theoretical guarantee. Another line of work, such as [2], proposed a heuristic-based measure, which is more similar to the one proposed in this paper. However, the proposed method in [2] involves privacy noise to blur out the potentially leaked information. In contrast, EraEdge does not add any privacy noise to protect privacy. Most importantly, the authors of [2] conduct extensive experiments with multiple metrics to examine the unlearning effectiveness of their method. On the other hand, the authors of this paper merely examine the closeness of the model output, which can be problematic following the same rationale as the counterexample provided in [1] (i.e., the paragraph with the title "Insufficiency of parametric indistinguishability"). Hence, I doubt how private EraEdge is with respect to the unlearned data. I would suggest the authors conduct extensive experiments similar to [2], or examine the proposed method against some membership inference attack methods.
Regarding the related works, the authors miss several recent papers about machine unlearning on graphs. Please check the survey paper [3] for a collection of papers about machine unlearning on graphs, where the node unlearning problem is also studied. It is also worth mentioning the recent development of differential private GNNs, as differential private models automatically achieve (approximate) unlearning without any update [1].
In summary, my main concern about the paper is the privacy of the proposed method. Since EraEdge is not an exact unlearning approach, one has to be extremely careful about the choice of indistinguishability. Also, most of the existing approximate machine unlearning methods require adding privacy noise to blur out the information from the approximation error. However, EraEdge does not leverage any privacy noise, and thus any approximation error can potentially lead to severe privacy issues (i.e., adversarial attacks). In the empirical evaluation, the authors do not conduct enough experiments to demonstrate the privacy of EraEdge. This should be done for heuristic-based methods, as in [2]. Hence, I feel the paper needs a major revision before publishing.
References
[1] Certified data removal from machine learning models, Guo et al., ICML 2020.
[2] Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks, Golatkar et al., CVPR 2020.
[3] A Survey of Machine Unlearning, Nguyen et al., 2022.
Clarity, Quality, Novelty And Reproducibility
Clarity: The authors should specify the difference between exact and approximate unlearning. Note that their main baseline methods such as Retrain and GraphEraser are exact unlearning methods while the proposed method is an approximate unlearning method without theoretical guarantees. Hence, merely comparing the test accuracy and time complexity is not sufficient.
Quality: I have concerns about the privacy of EraEdge method.
Novelty: Most of the analysis and techniques are from the existing literature. The main novelty comes from applying them to graphs, yet the novelty of this extension as used in the paper seems limited.
Reproducibility: The authors do not provide their experimental code. Details about hyperparameters are also missing. |
ICLR | Title
Quantized Reinforcement Learning (QuaRL)
Abstract
Recent work has shown that quantization can help reduce the memory, compute, and energy demands of deep neural networks without significantly harming their quality. However, whether these prior techniques, applied traditionally to image-based models, work with the same efficacy on the sequential decision making process in reinforcement learning remains an unanswered question. To address this void, we conduct the first comprehensive empirical study that quantifies the effects of quantization on various deep reinforcement learning policies with the intent to reduce their computational resource demands. We apply techniques such as post-training quantization and quantization aware training to a spectrum of reinforcement learning tasks (such as Pong, Breakout, BeamRider and more) and training algorithms (such as PPO, A2C, DDPG, and DQN). Across this spectrum of tasks and learning algorithms, we show that policies can be quantized to 6-8 bits of precision without loss of accuracy. We also show that certain tasks and reinforcement learning algorithms yield policies that are more difficult to quantize due to their effect of widening the models’ distribution of weights, and that quantization aware training consistently improves results over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that quantization aware training, like traditional regularizers, regularizes models by increasing exploration during the training process. Finally, we demonstrate the usefulness of quantization for reinforcement learning. We use half-precision training to train a Pong model 50% faster, and we deploy a quantized reinforcement learning based navigation policy to an embedded system, achieving an 18× speedup and a 4× reduction in memory usage over an unquantized policy.
1 INTRODUCTION
Deep reinforcement learning has promise in many applications, ranging from game playing (Silver et al., 2016; 2017; Kempka et al., 2016) to robotics (Lillicrap et al., 2015; Zhang et al., 2015) to locomotion and transportation (Arulkumaran et al., 2017; Kendall et al., 2018). However, the training and deployment of reinforcement learning models remain challenging. Training is expensive because of the computational demands of repeatedly performing forward and backward propagation through the neural network. Deploying deep reinforcement learning (DRL) models is prohibitively expensive, if not impossible, due to the resource constraints on embedded computing systems typically used for applications such as robotics and drone navigation.
Quantization can be helpful in substantially reducing the memory, compute, and energy usage of deep learning models without significantly harming their quality (Han et al., 2015; Zhou et al., 2016; Han et al., 2016). However, it is unknown whether the same techniques carry over to reinforcement learning. Unlike models in supervised learning, the quality of a reinforcement learning policy depends on how effective it is in sequential decision making. Specifically, an agent’s current input and decision heavily affect its future state and future actions; it is unclear how quantization affects the long-term decision making capability of reinforcement learning policies. Also, there are many different algorithms to train a reinforcement learning policy. Algorithms like actor-critic methods (A2C), deep-q networks (DQN), proximal policy optimization (PPO) and deep deterministic policy gradients (DDPG) are significantly different in their optimization goals and implementation details, and it is unclear whether quantization would be similarly effective across these algorithms. Finally, reinforcement learning policies are trained and applied to a wide range of environments, and it is unclear how quantization affects performance in tasks of differing complexity.
Here, we aim to understand quantization effects on deep reinforcement learning policies. We comprehensively benchmark the effects of quantization on policies trained by various reinforcement learning algorithms on different tasks, conducting in excess of 350 experiments to present representative and conclusive analysis. We perform experiments over 3 major axes: (1) environments (Atari Arcade, PyBullet, OpenAI Gym), (2) reinforcement learning training algorithms (Deep-Q Networks, Advantage Actor-Critic, Deep Deterministic Policy Gradients, Proximal Policy Optimization) and (3) quantization methods (post-training quantization, quantization aware training).
We show that quantization induces a regularization effect by increasing exploration during training. This motivates the use of quantization aware training, which we show demonstrates improved performance over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that deep reinforcement learning models can be quantized to 6-8 bits of precision without loss in quality. Furthermore, we analyze how each axis affects the final performance of the quantized model to develop insights into how to achieve better model quantization. Our results show that some tasks and training algorithms yield models to which post-training quantization is more difficult to apply, as they widen the spread of the models’ weight distribution and yield higher quantization error. To demonstrate the usefulness of quantization for deep reinforcement learning, we 1) use half precision ops to train a Pong model 50% faster than full precision training and 2) deploy a quantized reinforcement learning based navigation policy onto an embedded system and achieve an 18× speedup and a 4× reduction in memory usage over an unquantized policy.
2 RELATED WORK
Reducing neural network resource requirements is an active research topic. Techniques include quantization (Han et al., 2015; 2016; Zhu et al., 2016; Jacob et al., 2018; Lin et al., 2019; Polino et al., 2018; Sakr & Shanbhag, 2018), deep compression (Han et al., 2016), knowledge distillation (Hinton et al., 2015; Chen et al., 2017), sparsification (Han et al., 2016; Alford et al., 2018; Park et al., 2016; Louizos et al., 2018b; Bellec et al., 2017) and pruning (Alford et al., 2018; Molchanov et al., 2016; Li et al., 2016). These methods are employed because they compress models to reduce storage and memory requirements, and because they enable fast and efficient inference and training with specialized operations. Below we provide background on these motivations, describe the specific techniques that fall under these categories, and motivate why quantization for reinforcement learning needs study.
Compression for Memory and Storage: Techniques such as quantization, pruning, sparsification, and distillation reduce the amount of storage and memory required by deep neural networks. These techniques are motivated by the need to train and deploy neural networks in memory-constrained environments (e.g., IoT or mobile). Broadly, quantization reduces the precision of network weights (Han et al., 2015; 2016; Zhu et al., 2016), pruning removes various layers and filters of a network (Alford et al., 2018; Molchanov et al., 2016), sparsification zeros out selective network values (Molchanov et al., 2016; Alford et al., 2018) and distillation compresses an ensemble of networks into one (Hinton et al., 2015; Chen et al., 2017). Various algorithms combining these core techniques have been proposed. For example, Deep Compression (Han et al., 2015) demonstrated that a combination of weight-sharing, pruning, and quantization might reduce storage requirements by 35-49x. Importantly, these methods achieve high compression rates at small losses in accuracy by exploiting the redundancy that is inherent within the neural networks.
Fast and Efficient Inference/Training: Methods like quantization, pruning, and sparsification may also be employed to improve the runtime of network inference and training as well as their energy consumption. Quantization reduces the precision of network weights and allows more efficient quantized operations to be used during training and deployment, for example, a "binary" GEMM (general matrix multiply) operation (Rastegari et al., 2016; Courbariaux et al., 2016). Pruning speeds up neural networks by removing layers or filters to reduce the overall amount of computation necessary to make predictions (Molchanov et al., 2016). Finally, sparsification zeros out network weights and enables faster computation via specialized primitives like block-sparse matrix multiply (Ren et al., 2018). These techniques not only speed up neural networks but also decrease energy consumption by requiring fewer floating-point operations.
Quantization for Reinforcement Learning: Prior work in quantization focuses mostly on quantizing image / supervised models. However, there are several key differences between these models and reinforcement learning policies: an agent’s current input and decision affect its future state and actions, there are many complex algorithms (e.g., DQN, PPO, A2C, DDPG) for training, and there are many diverse tasks. To the best of our knowledge, this is the first work to apply and analyze the performance of quantization across a broad range of reinforcement learning tasks and training algorithms.
3 QUANTIZED REINFORCEMENT LEARNING (QUARL)
We develop QuaRL, an open-source software framework that allows us to systematically apply traditional quantization methods to a broad spectrum of deep reinforcement learning models. We use the QuaRL framework to 1) evaluate how effective quantization is at compressing reinforcement learning policies, 2) analyze how quantization affects/is affected by the various environments and training algorithms in reinforcement learning and 3) establish a standard on the performance of quantization techniques across various training algorithms and environments.
Environments: We evaluate quantized models on three different types of environments: OpenAI Gym (Brockman et al., 2016), the Atari Arcade Learning Environment (Bellemare et al., 2012), and PyBullet (an open-source implementation of MuJoCo). These environments consist of a variety of tasks, including CartPole, MountainCar, LunarLander, Atari games, Humanoid, etc. The complete list of environments used in the QuaRL framework is listed in Table 1. Evaluations across this spectrum of different tasks provide a robust benchmark on the performance of quantization applied to different reinforcement learning tasks.
Training Algorithms: We study quantization on four popular reinforcement learning algorithms, namely Advantage Actor-Critic (A2C) (Mnih et al., 2016), Deep Q-Network (DQN) (Mnih et al., 2013), Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). Evaluating these standard reinforcement learning algorithms that are well established in the community allows us to explore whether quantization is similarly effective across different reinforcement learning algorithms.
Quantization Methods: We apply standard quantization techniques to deep reinforcement learning models. Our main approaches are post-training quantization and quantization aware training. We apply these methods to models trained in different environments by different reinforcement learning algorithms to broadly understand their performance. We describe how these methods are applied in the context of reinforcement learning below.
3.1 POST-TRAINING QUANTIZATION
Post-training quantization takes a trained full precision model (32-bit floating point) and quantizes its weights to lower precision values. We quantize weights down to fp16 (16-bit floating point) and int8 (8-bit integer) values. fp16 quantization is based on IEEE-754 floating point rounding and int8 quantization uses uniform affine quantization.
Fp16 Quantization: Fp16 quantization involves taking full precision (32-bit) values and mapping them to the nearest representable 16-bit float. The IEEE-754 standard specifies 16-bit floats with the format shown below. Bits are grouped to specify the value of the sign (S), fraction (F ) and exponent (E) which are then combined with the following formula to yield the effective value of the float:
[Sign (1 bit) | Exponent (5 bits) | Fraction (10 bits)]

$V_{fp16} = (-1)^S \times \left(1 + \frac{F}{2^{10}}\right) \times 2^{E-15}$
In subsequent sections, we refer to float16 quantization using the following notation:
$Q_{fp16}(W) = \mathrm{round}_{fp16}(W)$
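As a concrete illustration of this rounding (ours, not from the paper), the following NumPy snippet shows the precision loss incurred when fp32 values are mapped to their nearest fp16 representations:

```python
import numpy as np

# Round a few fp32 values to the nearest representable fp16 values.
w_fp32 = np.array([0.1, 1.0 / 3.0, 2048.4], dtype=np.float32)
w_fp16 = w_fp32.astype(np.float16)                 # IEEE-754 round-to-nearest
print(w_fp16)                                      # approx. [0.1, 0.3333, 2048.0]
print(w_fp32 - w_fp16.astype(np.float32))          # per-element rounding error
```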
Uniform Affine Quantization: Uniform affine quantization (TensorFlow, 2018b) is applied to a full precision weight matrix and is performed by 1) calculating the minimum and maximum values of the matrix and 2) dividing this range equally into $2^n$ representable values (where $n$ is the number of bits being quantized to). As each representable value is equally spaced across this range, the quantized value can be represented by an integer. More specifically, quantization from full precision to $n$-bit integers is given by:
$$Q_n(W) = \left\lfloor \frac{W}{\delta} \right\rfloor + z, \quad \text{where} \quad \delta = \frac{|\min(W, 0)| + |\max(W, 0)|}{2^n}, \quad z = \left\lfloor \frac{-\min(W, 0)}{\delta} \right\rfloor$$

Note that $\delta$ is the gap between representable numbers and $z$ is an offset so that 0 is exactly representable. Further note that we use $\min(W, 0)$ and $\max(W, 0)$ to ensure that 0 is always represented. To dequantize we perform:

$$D(W_q, \delta, z) = \delta(W_q - z)$$
In the context of QuaRL, int8 and fp16 quantization are applied after training a full precision model on an environment, as per Algorithm 1. In post-training quantization, uniform quantization is applied to each fully connected layer of the model (per-tensor quantization) and to each channel of the convolution weights (per-axis quantization); activations are not quantized. We use post-training quantization to quantize to fp16 and int8 values.
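To make the uniform affine quantizer concrete, here is a minimal NumPy sketch (ours, for illustration; not the authors' implementation) that quantizes a weight tensor to n-bit integer levels and dequantizes it back:

```python
import numpy as np

def affine_quantize(w, n_bits=8):
    """Uniform affine quantization of a weight tensor to n-bit integer levels."""
    w_min, w_max = min(w.min(), 0.0), max(w.max(), 0.0)   # force 0 into the range
    delta = (abs(w_min) + abs(w_max)) / (2 ** n_bits)     # gap between levels
    z = int(np.floor(-w_min / delta))                     # zero-point offset
    w_q = np.floor(w / delta).astype(np.int64) + z
    return w_q, delta, z

def affine_dequantize(w_q, delta, z):
    """Map quantized integer levels back to (approximate) real values."""
    return delta * (w_q - z)

w = np.random.randn(256, 64).astype(np.float32)           # a per-tensor weight matrix
w_q, delta, z = affine_quantize(w, n_bits=8)
round_trip_error = np.abs(w - affine_dequantize(w_q, delta, z)).max()  # bounded by delta
```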
Algorithm 1: Post-Training Quantization for Reinforcement Learning
  Input: T : task or environment
  Input: L : reinforcement learning algorithm
  Input: A : model architecture
  Input: n : quantize bits (8 or 16)
  Output: Reward
  1: M = Train(T, L, A)
  2: Q = Q_int8 if n = 8, Q_fp16 if n = 16
  3: return Eval(Q(M))

Algorithm 2: Quantization Aware Training for Reinforcement Learning
  Input: T : task or environment
  Input: L : reinforcement learning algorithm
  Input: n : quantize bits
  Input: A : model architecture
  Input: Q_d : quantization delay
  Output: Reward
  1: A_q = InsertAfterWeightsAndActivations(Q_n^train)
  2: M, TensorMinMaxes = TrainNoQuantMonitorWeightsActivationsRanges(T, L, A_q, Q_d)
  3: M = TrainWithQuantization(T, L, M, TensorMinMaxes, Q_n^train)
  4: return Eval(M, Q_n^train, TensorMinMaxes)
3.2 QUANTIZATION AWARE TRAINING
Quantization aware training involves retraining the reinforcement learning policies with weights and activations uniformly quantized to n-bit values. Importantly, weights are maintained in full fp32 precision except that they are passed through the uniform quantization function before being used in the forward pass. Because of this, the technique is also known as “fake quantization” (TensorFlow, 2018b). To improve training, there is an additional parameter, quantization delay (TensorFlow, 2018a), which specifies the number of full precision training steps before enabling quantization. When the number of steps is less than the quantization delay parameter, the minimum and maximum values of weights and activations are actively monitored. Afterwards, the previously
captured minimum and maximum values are used to quantize the tensors (these values remain static from then on). Specifically:
$$Q_n^{train}(W, V_{min}, V_{max}) = \left\lfloor \frac{W}{\delta} \right\rfloor + z, \quad \text{where} \quad \delta = \frac{|V_{min}| + |V_{max}|}{2^n}, \quad z = \left\lfloor \frac{-V_{min}}{\delta} \right\rfloor$$

where $V_{min}$ and $V_{max}$ are the monitored minimum and maximum values of the tensor (expanding $V_{min}$ and $V_{max}$ to include 0 if necessary). Intuitively, the expectation is that the training process eventually learns to account for the quantization error, yielding a higher performing quantized model. Note that uniform quantization is applied to fully connected weights in the model (per-tensor quantization) and to each channel for convolution weights (per-axis quantization). $n$-bit quantization is applied to each layer’s weights and activations:
$$x_{k+1} = A\big(Q_n^{train}(W_k, V_{min}, V_{max})\, a_k + b\big), \qquad a_{k+1} = Q_n^{train}(x_{k+1}, V_{min}, V_{max})$$

where $A$ is the activation function.
During backward propagation, the gradient is passed through the quantization function unchanged (also known as the straight-through estimator (Hinton, 2012)), and the full precision weight matrix W is optimized as follows:
$$\nabla_W Q_n^{train}(W, V_{min}, V_{max}) = I$$
In the context of the QuaRL framework, the policy neural network is retrained from scratch after inserting the quantization functions between weights and activations (all else being equal). At evaluation, the full precision weights are passed through the uniform affine quantizer to simulate quantization error during inference. Algorithm 2 describes how quantization aware training is applied in QuaRL.
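As a rough illustration (ours; the actual experiments rely on TensorFlow's built-in fake-quantization ops rather than this code), the fake-quantization forward pass and the straight-through estimator used in the backward pass can be sketched as follows:

```python
import numpy as np

def fake_quantize(w, v_min, v_max, n_bits=8):
    """Forward pass: quantize and immediately dequantize an fp32 tensor."""
    v_min, v_max = min(v_min, 0.0), max(v_max, 0.0)   # keep 0 representable
    delta = (abs(v_min) + abs(v_max)) / (2 ** n_bits)
    z = np.floor(-v_min / delta)
    w_q = np.floor(w / delta) + z                     # integer quantization levels
    return delta * (w_q - z)                          # back to real values

def fake_quantize_backward(grad_output):
    """Backward pass: the straight-through estimator passes the gradient unchanged."""
    return grad_output

w = np.random.randn(128, 64).astype(np.float32)       # fp32 master copy of a layer
w_sim = fake_quantize(w, w.min(), w.max(), n_bits=8)  # used in the forward pass
# Gradients computed w.r.t. w_sim are applied directly to the fp32 master copy w.
```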
4 RESULTS
In this section, we first show that quantization has a regularization effect on reinforcement learning algorithms and can boost exploration. Second, we show that reinforcement learning algorithms can be quantized safely without significantly affecting the rewards. To that end, we perform evaluations across the three principal axes of QuaRL: environments, training algorithms, and quantization methods. For post-training quantization, we evaluate each policy for 100 episodes and average the rewards. For quantization aware training (QAT), we train at least three policies and report the mean rewards over one hundred evaluations. Table 1 lists the space of the evaluations explored.
Quantization as Regularization: To further establish the effects of quantization during training, we compare quantization-aware training with traditional regularization techniques (specifically layer-norm (Ba et al., 2016; Kukacka et al., 2017)) and measure the amount of exploration these techniques induce. It has been shown in previous literature (Farebrother et al., 2018; Cobbe et al., 2018) that regularization actively helps reinforcement learning training generalize better; here we further reinforce this notion and additionally establish a relationship between quantization, generalization and exploration. We use the variance of the action distribution produced by the model (i.e., the variance across the probabilities the policy assigns to its actions) as a proxy for exploration: intuitively, since the policy samples from this distribution when performing an action, a policy that produces an action distribution with high variance concentrates its probability mass on a few actions and is less likely to explore different states. Conversely, a low variance action distribution indicates high exploration, as the policy is more likely to take a different action than the highest scoring one.
We measure the variance of the action distribution produced by differently trained models (QAT-2, QAT-4, QAT-6, QAT-8, with layer norm, and full precision) at different stages of the training process. We collect model rewards and the action distribution variance over several rollouts with deterministic action selection (the model performs the highest scoring action). Importantly, we use deterministic action selection to ensure that the states reached are similar to the distribution seen by the model during training. To separate signal from noise, we smooth both the rewards and the action variances with a smoothing factor of 0.95.
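A minimal sketch of this exploration proxy (ours, with hypothetical variable names): compute the variance of the policy's action probabilities at each step of a rollout and exponentially smooth the resulting series:

```python
import numpy as np

def action_distribution_variance(action_probs):
    """Variance across the probabilities the policy assigns to its actions.
    A peaked (near-deterministic) distribution gives high variance; a flatter,
    more exploratory distribution gives low variance."""
    return float(np.var(action_probs))

def smooth(series, factor=0.95):
    """Exponential smoothing, applied to both rewards and action variances."""
    out, running = [], series[0]
    for x in series:
        running = factor * running + (1.0 - factor) * x
        out.append(running)
    return out

# Hypothetical action probabilities collected over a deterministic-action rollout.
rollout_probs = [np.array([0.7, 0.1, 0.1, 0.1]), np.array([0.3, 0.3, 0.2, 0.2])]
variances = smooth([action_distribution_variance(p) for p in rollout_probs])
```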
Figure 4 shows the variance of the action distribution produced by the models at different stages of training. Training with higher quantization levels (e.g., 2-bit vs. 4-bit training), like layer norm regularization, induces lower action distribution variance and hence indicates more exploration. Furthermore, the reward plot in Figure 4 shows that despite lower action variance, models trained with quantization achieve a reward similar to the full precision baseline, which indicates that the higher exploration is facilitated by quantization and not by a lack of training. Note that quantization is turned on at 5,000,000 steps, and we see its effects on the action distribution variance shortly after this point. In summary, the data shows that training with quantization, like traditional regularization, in part regularizes reinforcement learning training by facilitating exploration during the training process.
Effectiveness of Quantization: To evaluate the overall effectiveness of quantization for deep reinforcement learning, we apply post-training quantization and quantization aware training to a spectrum of tasks and record their performance. We present the reward results for post-training quantization in Table 2. We also compute the percentage error of the performance of each quantized policy relative to that of its corresponding full precision baseline (Efp16 and Eint8). Additionally, we report the mean of the errors across tasks for each of the training algorithms.
The absolute mean of 8-bit and 16-bit relative errors ranges between 2% and 5% (with the exception of DQN), which indicates that models may be quantized to 8/16 bit precision without much loss in quality. Interestingly, the overall performance difference between the 8-bit and 16-bit post-training quantization is minimal (with the exception of the DQN algorithm, for reasons we explain in Section 4). We believe this is because the policies weight distribution is narrow enough that 8 bits is able to capture the distribution of weights without much error. In a few cases, post-training quantization yields better scores than the full precision policy. We believe that quantization injected an amount of noise that was small enough to maintain a good policy and large enough to regularize model behavior; this supports some of the results seen by Louizos et al. (2018a); Bishop (1995); Hirose et al. (2018); see appendix for plots showing that there is a sweet spot for post-training quantization.
For quantization aware training, we train the policy with fake-quantization operations while maintaining the same model and hyperparameters (see Appendix). Figure 2 shows the results of quantization aware training on multiple environments and training algorithms to compress the policies down from 8-bits to 2-bits. Generally, the performance relative to the full precision baseline is maintained until 5/6-bit quantization, after which there is a drop in performance. Broadly, at 8-bits, we see no degradation in performance. From the data, we see that quantization aware training achieves higher rewards than post-training quantization and also sometimes outperforms the full precision baseline.
Effect of Environment on Quantization Quality: To analyze the task’s effect on quantization quality we plot the distribution of weights of full precision models trained in three environments (Breakout, Beamrider and Pong) and their error after applying 8-bit post-training quantization on them. Each model uses the same network architecture, is trained using the same algorithm (DQN) with the same hyperparameters (see Appendix).
Figure 3 shows that the task with the highest error (Breakout) has the widest weight distribution, the task with the second-highest error (BeamRider) has a narrower weight distribution, and the task with the lowest error (Pong) has the narrowest distribution. With an affine quantizer, quantizing a narrower distribution yields less error because the distribution can be captured at a fine granularity; conversely, a wider distribution requires larger gaps between representable numbers and thus increases quantization error. The trends indicate that the environment affects the spread of the models’ weight distribution, which in turn affects quantization performance: specifically, environments that yield a wider distribution of model weights are more difficult to apply quantization to. This observation suggests that regularizing the training process may yield better performance.
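To illustrate why a wider weight distribution hurts, the following self-contained sketch (ours, with synthetic weights) compares the worst-case 8-bit affine quantization error for a narrow and a wide weight tensor:

```python
import numpy as np

def max_affine_error(w, n_bits=8):
    """Worst-case rounding error of n-bit uniform affine quantization: one step size."""
    w_min, w_max = min(w.min(), 0.0), max(w.max(), 0.0)
    return (abs(w_min) + abs(w_max)) / (2 ** n_bits)   # delta = gap between levels

rng = np.random.default_rng(0)
narrow = rng.normal(0.0, 0.1, size=10_000)   # tight weight distribution (Pong-like)
wide = rng.normal(0.0, 0.5, size=10_000)     # spread-out weights (Breakout-like)
print(max_affine_error(narrow), max_affine_error(wide))  # the wide tensor has a ~5x larger step
```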
Table 3: Rewards for DQN, PPO, and A2C.

Algorithm   Environment   fp32 Reward   E_int8    E_fp16
DQN         Breakout      214           63.55%    -1.40%
PPO         Breakout      400           8.00%     0.00%
A2C         Breakout      379           7.65%     2.11%
[Figure 4: Weight distributions (log-scale frequency vs. weight value) for the policies trained using DQN, PPO and A2C; the panel weight ranges are -2.21 to 1.31, -1.02 to 0.58, and -0.79 to 0.72 for DQN, PPO and A2C respectively. DQN policy weights are more spread out and more difficult to cover effectively by 8-bit quantization (yellow lines). This explains the higher quantization error for DQN in Table 3. A negative error indicates that the quantized model outperformed the full precision baseline.]
Effect of Training Algorithm on Quantization Quality: To determine the effects of the reinforcement learning training algorithm on the performance of quantized models, we compare the performance of post-training quantized models trained by various algorithms. Table 3 shows the error of different reinforcement learning algorithms and their corresponding 8-bit post-training quantization error for the Atari Breakout game. Results indicate that the A2C training algorithm is most conducive to int8 post-training quantization, followed by PPO2 and DQN. Interestingly, we see a sharp performance drop compared to the corresponding full precision baseline when applying 8-bit post-training quantization to models trained by DQN. At 8 bits, models trained by PPO2 and A2C have relative errors of 8% and 7.65%, whereas the model trained by DQN has an error of ∼64%. To understand this phenomenon, we plot the distribution of model weights trained by each algorithm, shown in Figure 4. The plot shows that the weight distribution of the model trained by DQN is significantly wider than those trained by PPO2 and A2C. A wider distribution of weights indicates a higher quantization error, which explains the large error of the 8-bit quantized DQN model. This also explains why using more bits (fp16) is more effective for the model trained by DQN (which reduces error relative to the full precision baseline from ∼64% down to ∼-1.4%). These results signify that different RL algorithms (on-policy vs. off-policy) have different objective functions and hence can result in completely different weight distributions; a wider distribution has a more pronounced impact on the quantization error.
5 CASE STUDIES
To show the usefulness of our results, we use quantization to optimize the training and deployment of reinforcement learning policies. We 1) train a Pong model 1.5× faster by using mixed precision optimization and 2) deploy a quantized robot navigation model onto a resource-constrained embedded system (RasPi-3b), demonstrating a 4× reduction in memory and an 18× speedup in inference. Faster training means running more experiments in the same amount of time, and achieving speedups on resource-constrained devices enables deployment of the policies on real robots.
Mixed/Half-Precision Training: Motivated by the observation that reinforcement learning training is robust to quantization error, we train three policies of increasing model complexity (Policy A, Policy B, and Policy C) using mixed precision training and compare its performance to that of full precision training (see Appendix for details). In mixed precision training, the policy weights, activations, and gradients are represented in fp16. A master copy of the weights is stored in full precision (fp32) and updates are made to it during the backward pass (Micikevicius et al., 2017). We measure the runtime and convergence rate of both full precision and mixed precision training (see Appendix).
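A minimal, framework-agnostic sketch (ours; loss scaling and the full policy update are omitted for brevity) of the mixed-precision update: the forward and backward passes use fp16 copies of the weights while the optimizer updates the fp32 master copy:

```python
import numpy as np

master_w = np.random.randn(64, 4).astype(np.float32)   # fp32 master copy (one layer)
lr = 1e-3

for step in range(100):
    w_fp16 = master_w.astype(np.float16)                # cast master weights down
    x = np.random.randn(32, 64).astype(np.float16)      # fp16 activations
    out = x @ w_fp16                                     # fp16 forward pass
    grad_out = np.ones_like(out)                         # placeholder loss gradient
    grad_w = x.T @ grad_out                              # fp16 backward pass
    master_w -= lr * grad_w.astype(np.float32)           # fp32 update to master copy
```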
Table 4: Mixed precision training for reinforcement learning.

Algorithm   Network    fp32 Runtime (min)   MP Runtime (min)   Speedup
DQN-Pong    Policy A   127                  156                0.87×
DQN-Pong    Policy B   179                  172                1.04×
DQN-Pong    Policy C   391                  242                1.61×
[Figure 5: Mixed precision vs. fp32 training rewards over 1M steps, one panel each for Policy A, Policy B, and Policy C.]
Figure 5 shows that all three policies converge under full precision and mixed precision training. Interestingly, for Policy B, training with mixed precision yields faster convergence; we believe that
some amount of quantization error speeds up the training process. Table 4 shows the computational speedup to the training loop from using mixed precision training. While using mixed precision training on smaller networks (Policy A) may slow down training iterations (as the overhead of fp32-to-fp16 conversions outweighs the speedup of low precision ops), larger networks (Policy C) show up to a 60% speedup. Generally, our results show that mixed precision may speed up the training process by up to 1.6× without harming convergence.

Quantized Policy for Deployment: To show the benefits of quantization for deploying reinforcement learning policies, we train multiple point-to-point navigation models (Policy I, II, and III) for aerial robots using Air Learning (Krishnan et al., 2019) and deploy them onto a RasPi-3b, a cost-effective, general-purpose embedded processor. The RasPi-3b is used as a proxy for the compute platform of the aerial robot; other platforms on aerial robots have similar characteristics. For each of these policies, we report the accuracies and inference speedups attained by the int8 and fp32 policies.
Table 5 shows the accuracies and inference speedups attained for each corresponding quantized policy. We see that quantizing smaller policies (Policy I) yields moderate inference speedups (1.18× for Policy I), while quantizing larger models (Policies II, III) can speed up inference by up to 18×. This speedup in Policy III's execution time raises the rate at which hardware actuation commands are generated from 5 Hz (fp32) to 90 Hz (int8). Note that in this experiment we quantize both weights and activations to 8-bit integers; quantized models exhibit a larger loss in accuracy as activations are more difficult to quantize without some form of calibration to determine the range to quantize activation values to (Choi et al., 2018).
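For reference, an int8 weight-and-activation conversion of this kind can be expressed with the TensorFlow Lite converter; the sketch below is our own illustration of the general workflow (the saved-model path, observation shape, and calibration generator are hypothetical placeholders, and this is not necessarily the exact toolchain or version the authors used):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Hypothetical calibration inputs matching the policy's observation shape.
    for _ in range(100):
        yield [np.random.rand(1, 84, 84, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_policy/")  # assumed path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset  # calibrates activation ranges
tflite_policy = converter.convert()
with open("policy_int8.tflite", "wb") as f:
    f.write(tflite_policy)
```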
A deeper investigation shows that Policies II and III take more memory than the total RAM capacity of the RasPi-3b, causing numerous accesses to swap memory (refer to Appendix) during inference, which is extremely slow. Quantizing these policies allows them to fit into the RasPi's RAM, eliminating accesses to swap and boosting performance by an order of magnitude. Figure 5 shows the memory usage while executing the quantized and unquantized versions of Policy III, and shows how, without quantization, memory usage skyrockets above the total RAM capacity of the board.
In the context of real-world deployment of an aerial (or any other type of) robot, a speedup in policy execution potentially translates to faster actuation commands to the robot, which in turn implies faster and better responsiveness in a highly dynamic environment (Falanga et al., 2019). Our case study demonstrates how quantization can facilitate the deployment of accurate policies trained using reinforcement learning onto a resource-constrained platform.
6 CONCLUSION
We perform the first study of quantization effects on deep reinforcement learning using QuaRL, a software framework to benchmark and analyze the effects of quantization on various reinforcement learning tasks and algorithms. We analyze the performance in terms of rewards for post-training quantization and quantization aware training as applied to multiple reinforcement learning tasks and algorithms, with the high level goal of reducing policies' resource requirements for efficient training and deployment. We broadly demonstrate that reinforcement learning models may be quantized down to 8/16 bits without loss of performance. Also, we link quantization performance to the distribution of models' weights, demonstrating that some reinforcement learning algorithms and tasks are more difficult to quantize due to their effect of widening the models' weight distribution. Additionally, we show that quantization during training acts as a regularizer which improves exploration. Finally, we apply our results to optimize the training and inference of reinforcement learning models, demonstrating a 50% training speedup for Pong using mixed precision optimization and up to an 18× inference speedup on a RasPi by quantizing a navigation policy. In summary, our findings indicate that there is much potential for the future of quantization of deep reinforcement learning policies.
A POST TRAINING QUANTIZATION RESULTS
Here we tabulate the post-training quantization results listed in Table 2 in four separate tables for clarity. Each table corresponds to the post-training quantization results for a specific algorithm. Table 5 tabulates the post-training quantization results for the A2C algorithm. Likewise, Table 6 tabulates the post-training quantization results for DQN. Table 7 and Table 8 list the post-training quantization results for the PPO and DDPG algorithms, respectively.
B DQN HYPERPARAMETERS FOR ATARI
For all Atari games in the results section we use a standard 3-layer Conv (128) + 128 FC network. Hyperparameters are listed in Table 9. We use stable-baselines (Hill et al., 2018) for all the reinforcement learning experiments and TensorFlow version 1.14 as the machine learning backend.
C MIXED PRECISION HYPERPARAMETERS
In mixed precision training, we used three policies, namely Policy A, Policy B and Policy C. The architectures of these policies are tabulated in Table 10. For measuring the runtimes of fp32 and fp16 training, we use the Linux time command for each run and add the usr and sys times, for both mixed-precision training and fp32 training. The hyperparameters used for training the DQN-Pong agent are listed in Table 9.
D QUANTIZED POLICY DEPLOYMENT
Here we describe the methodology used to train a point-to-point navigation policy in Air Learning and deploy it on an embedded compute platform such as the Ras-Pi 3b+. Air Learning is an AI research platform that provides infrastructure components and tools to train fully functional reinforcement learning policies for aerial robots. In simple environments like OpenAI Gym and Atari, training and inference happen in the same environment without any randomization. In contrast to these environments, Air Learning allows us to randomize various environmental parameters such as arena size, number of obstacles, and goal position.
In this study, we fix the arena size to 25 m × 25 m × 20 m. The maximum number of obstacles at any time is between one and five and is chosen randomly on an episode-to-episode basis. The positions of these obstacles and the end point (goal) are also changed every episode. We train the aerial robot to reach the end point using the DQN algorithm. The input to the policy is data from a sensor mounted on the drone along with IMU measurements. The output of the policy is one of 25 actions with different velocity and yaw rates. The reward function we use in this study is defined by the following equation:
$$r = 1000\,\alpha - 100\,\beta - D_g - D_c\,\delta - 1 \qquad (1)$$
Here, α is a binary variable whose value is ‘1’ if the agent reaches the goal and ‘0’ otherwise. β is a binary variable which is set to ‘1’ if the aerial robot collides with any obstacle or runs out of the maximum allocated steps for an episode.1 Otherwise, β is ‘0’, effectively penalizing the agent for hitting an obstacle or not reaching the end point in time. Dg is the distance to the end point from the agent’s current location, motivating the agent to move closer to the goal. Dc is the distance correction, which is applied to penalize the agent if it chooses actions that speed it away from the goal. The distance correction term is defined as follows:
$$D_c = (V_{max} - V_{now}) \cdot t_{max} \qquad (2)$$

where $V_{max}$ is the maximum velocity possible for the agent, which for DQN is fixed at 2.5 m/s, $V_{now}$ is the current velocity of the agent, and $t_{max}$ is the duration of the actuation.
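A small sketch of this shaped reward (ours, with hypothetical argument names; the weight δ on the correction term and the actuation duration t_max are not specified in the text, so they are treated as placeholders here):

```python
def navigation_reward(reached_goal, crashed_or_timed_out, dist_to_goal,
                      v_now, v_max=2.5, t_max=1.0, delta=1.0):
    """Shaped reward following Equations (1) and (2); t_max and delta are placeholders."""
    alpha = 1.0 if reached_goal else 0.0
    beta = 1.0 if crashed_or_timed_out else 0.0
    distance_correction = (v_max - v_now) * t_max
    return 1000.0 * alpha - 100.0 * beta - dist_to_goal - distance_correction * delta - 1.0

r = navigation_reward(reached_goal=False, crashed_or_timed_out=False,
                      dist_to_goal=7.5, v_now=1.2)
```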
We train three policies, namely Policy I, Policy II, and Policy III. Each policy is learned through curriculum learning, where we move the end goal farther away as the training progresses. We terminate the training once the agent has finished 1 million steps. We evaluate all three policies with fp32 and quantized int8 data types over 100 evaluations in Air Learning and report the success rate.
1 We set the maximum allowed steps in an episode to 750. This is to make sure the agent finds the end point (goal) within a finite number of steps.
We also take these policies and characterize the system performance on a Ras-Pi 3b platform. The Ras-Pi 3b is a proxy for the compute platform available on the aerial robot. The hardware specification for the Ras-Pi 3b is shown in Table 11.
We allocate a region of storage space as swap memory. Swap is a region of memory allocated on disk that is used when system memory is fully utilized by a process. On the Ras-Pi 3b, the swap memory is allocated in flash storage.
E POST-TRAINING QUANTIZATION SWEET SPOT
Figure 7 shows that there is a sweet spot for post-training quantization. Sometimes, quantizing to fewer bits outperforms higher precision quantization. Each plot was generated by applying post-training quantization to the full precision baselines and evaluating over 10 runs. | 1. What are the main contributions of the paper on deep reinforcement learning?
2. How does the proposed approach impact memory cost and training and inference times?
3. Can you explain why reducing precision in neural networks in DRL algorithms from 32 bits to 16 or 8 bits doesn't have much effect on the quality of the learned policy?
4. Do you agree that the results presented in the paper are not novel or surprising?
5. What are some important details missing in the paper that make it hard to evaluate the validity of the results presented?
6. How can the presentation of the paper be improved? | Review | Review
This paper investigates the impact of using a reduced precision (i.e., quantization) in different deep reinforcement learning (DRL) algorithms. It shows that overall, reducing the precision of the neural network in DRL algorithms from 32 bits to 16 or 8 bits doesn't have much effect on the quality of the learned policy. It also shows how this quantization leads to a reduced memory cost and faster training and inference times.
I don't think this paper contributes many novel results to the field, with most results being known or expected. The one result that is interesting, in my opinion, is not properly explored. The paper is well-written but it is a bit repetitive. It seems to me that the first 3 pages could be compressed into 1, as the same information is introduced over and over again.
With respect to the results being known, quantization is known to succeed in supervised learning tasks. When you apply post-training quantization in a deep reinforcement learning algorithm, mainly when that algorithm uses a value function (e.g., A2C or DQN), the problem is reduced to a regression problem. It is no different than a supervised learning problem. One has the original network’s prediction and they need to match that prediction. The complexities introduced in the reinforcement learning problem (bootstrapping, exploration, stability) don’t exist anymore as they arise during training. Thus, it doesn’t seem to me that these results are novel or surprising. In a sense it is neat to see that eventual errors do not compound, but that’s it. If I were to write this paper I would make this set of experiments much shorter just as a sanity check. One thing that I feel is missing is a notion of the impact of the quantization not on the rewards accumulated but on the policy/value function. How often does the quantized agent take a different action than the original agent, for example? Does it happen often but only when it doesn’t matter, or is it rare?
The quantization during training is potentially interesting. It was not properly explored though. I wonder if the quantization during training has a regularization effect, which is known to improve agent’s performance in reinforcement learning (e.g., Cobbe et al., 2018, Farebrother et al., 2018). Does the agent generalize better when using a network with fewer bits of precision? How does this change impact training? These are all questions that could potentially make the results in this paper novel (i.e., quantization as a form of regularization), but as it is now, the results are not that surprising.
Importantly, there are important details missing in the paper that make it hard for me to evaluate the validity of the results presented. Are the results reported over multiple runs? What is the version of the Atari games used, is it the one with stochasticity? How much variance do we have if we replicate this process over different networks that perform well? These are questions I would like to see answered because they also inform us about the impact of the proposed idea. For example, if by repeating this experiment multiple times one observe a high variance, it might mean that different models might be impacted in different ways.
The results in the “real-world” (Pong is not real-world) are not that surprising either. Basically they show that if one uses a network with lower precision, training and inference are faster, which, again, is not surprising.
There’s also an important distinction in the results that is not discussed in the paper: DQN estimates a value function while methods such as PPO directly estimate a policy. The reason DQN might have a wider distribution is exactly because it is estimating a different objective. These are important details that should be acknowledged and discussed in the paper. In my opinion, for this paper to be relevant, it should have a very thorough evaluation of these different dimensions of reinforcement learning algorithms, with explicit discussions about it. Variance, the impact of quantization during learning, the distinction between parametrizing policies versus value functions, etc.
Finally, there are some aspects of the presentation of this paper that could also be improved. Aside from typos, below are some other comments on the presentation.
- There’s no such thing as Atari environment, it is either Arcade Learning Environment (Bellemare et al., 2013) or Atari games.
- I’d introduce/explain quantization in the beginning of the second paragraph of the Introduction for those not familiar with the term.
- No references are provided for the environments used. You should refer to Bellemare et al.’s (2013) work as well as Brockman et al.’s (2016).
- Is it really necessary to explain Fp16 quantization as it is done now, with even a picture of two bytes? I’d expect most readers are familiar with how numbers are represented in a computer.
- The equation for Uniform Affine Quantization is pretty much the same as the one in the Section Quantization Aware Training. All these “repetitions”, or discussions that are common-knowledge give the impression that the paper is trying to fill all the pages without necessarily having enough content.
- The references are not standardized (e.g., sometimes names are shortened, sometimes they are not) and the paper “Efficient inference engine on compressed deep neural network” is cited twice.
References:
Marc G. Bellemare, Yavar Naddaf, Joel Veness, Michael Bowling: The Arcade Learning Environment: An Evaluation Platform for General Agents. J. Artif. Intell. Res. 47: 253-279 (2013)
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba: OpenAI Gym. CoRR abs/1606.01540 (2016)
Karl Cobbe, Oleg Klimov, Christopher Hesse, Taehoon Kim, John Schulman: Quantifying Generalization in Reinforcement Learning. CoRR abs/1812.02341 (2018)
Jesse Farebrother, Marlos C. Machado, Michael Bowling: Generalization and Regularization in DQN. CoRR abs/1810.00123 (2018)
------
>>> Update after rebuttal: I stand by my score after the rebuttal.
The rebuttal did acknowledge some points I made, and to me the paper took a gradient update in the right direction. I don't think the paper is quite there yet though. It is repetitive, spending too much time on basic concepts, and it still ignores small details that matter (e.g., calling it Atari Arcade Learning). I strongly recommend the authors follow my recommendations closely and then submit the paper again to a future conference. The discussion about generalization is potentially interesting, going beyond the regularization-for-exploration aspect. A better discussion about quantization during learning is also essential. The first three pages could probably be compressed by half. |
ICLR | Title
Quantized Reinforcement Learning (QuaRL)
Abstract
Recent work has shown that quantization can help reduce the memory, compute, and energy demands of deep neural networks without significantly harming their quality. However, whether these prior techniques, applied traditionally to image-based models, work with the same efficacy on the sequential decision making process in reinforcement learning remains an unanswered question. To address this void, we conduct the first comprehensive empirical study that quantifies the effects of quantization on various deep reinforcement learning policies with the intent to reduce their computational resource demands. We apply techniques such as post-training quantization and quantization aware training to a spectrum of reinforcement learning tasks (such as Pong, Breakout, BeamRider and more) and training algorithms (such as PPO, A2C, DDPG, and DQN). Across this spectrum of tasks and learning algorithms, we show that policies can be quantized to 6-8 bits of precision without loss of accuracy. We also show that certain tasks and reinforcement learning algorithms yield policies that are more difficult to quantize due to their effect of widening the models’ distribution of weights, and that quantization aware training consistently improves results over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that quantization aware training, like traditional regularizers, regularizes models by increasing exploration during the training process. Finally, we demonstrate the usefulness of quantization for reinforcement learning. We use half-precision training to train a Pong model 50% faster, and we deploy a quantized reinforcement learning based navigation policy to an embedded system, achieving an 18× speedup and a 4× reduction in memory usage over an unquantized policy.
1 INTRODUCTION
Deep reinforcement learning has promise in many applications, ranging from game playing (Silver et al., 2016; 2017; Kempka et al., 2016) to robotics (Lillicrap et al., 2015; Zhang et al., 2015) to locomotion and transportation (Arulkumaran et al., 2017; Kendall et al., 2018). However, the training and deployment of reinforcement learning models remain challenging. Training is expensive because of the computational demands of repeatedly performing forward and backward propagation through the neural network. Deploying deep reinforcement learning (DRL) models is prohibitively expensive, if not impossible, due to the resource constraints on embedded computing systems typically used for applications such as robotics and drone navigation.
Quantization can be helpful in substantially reducing the memory, compute, and energy usage of deep learning models without significantly harming their quality (Han et al., 2015; Zhou et al., 2016; Han et al., 2016). However, it is unknown whether the same techniques carry over to reinforcement learning. Unlike models in supervised learning, the quality of a reinforcement learning policy depends on how effective it is in sequential decision making. Specifically, an agent’s current input and decision heavily affect its future state and future actions; it is unclear how quantization affects the long-term decision making capability of reinforcement learning policies. Also, there are many different algorithms to train a reinforcement learning policy. Algorithms like actor-critic methods (A2C), deep-q networks (DQN), proximal policy optimization (PPO) and deep deterministic policy gradients (DDPG) are significantly different in their optimization goals and implementation details, and it is unclear whether quantization would be similarly effective across these algorithms. Finally, reinforcement learning policies are trained and applied to a wide range of environments, and it is unclear how quantization affects performance in tasks of differing complexity.
Here, we aim to understand quantization effects on deep reinforcement learning policies. We comprehensively benchmark the effects of quantization on policies trained by various reinforcement learning algorithms on different tasks, conducting in excess of 350 experiments to present representative and conclusive analysis. We perform experiments over 3 major axes: (1) environments (Atari Arcade, PyBullet, OpenAI Gym), (2) reinforcement learning training algorithms (Deep-Q Networks, Advantage Actor-Critic, Deep Deterministic Policy Gradients, Proximal Policy Optimization) and (3) quantization methods (post-training quantization, quantization aware training).
We show that quantization induces a regularization effect by increasing exploration during training. This motivates the use of quantization aware training, which we show demonstrates improved performance over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that deep reinforcement learning models can be quantized to 6-8 bits of precision without loss in quality. Furthermore, we analyze how each axis affects the final performance of the quantized model to develop insights into how to achieve better model quantization. Our results show that some tasks and training algorithms yield models to which post-training quantization is more difficult to apply, as they widen the spread of the models’ weight distribution and yield higher quantization error. To demonstrate the usefulness of quantization for deep reinforcement learning, we 1) use half precision ops to train a Pong model 50% faster than full precision training and 2) deploy a quantized reinforcement learning based navigation policy onto an embedded system and achieve an 18× speedup and a 4× reduction in memory usage over an unquantized policy.
2 RELATED WORK
Reducing neural network resource requirements is an active research topic. Techniques include quantization (Han et al., 2015; 2016; Zhu et al., 2016; Jacob et al., 2018; Lin et al., 2019; Polino et al., 2018; Sakr & Shanbhag, 2018), deep compression (Han et al., 2016), knowledge distillation (Hinton et al., 2015; Chen et al., 2017), sparsification (Han et al., 2016; Alford et al., 2018; Park et al., 2016; Louizos et al., 2018b; Bellec et al., 2017) and pruning (Alford et al., 2018; Molchanov et al., 2016; Li et al., 2016). These methods are employed because they compress models to reduce storage and memory requirements, and because they enable fast and efficient inference and training with specialized operations. Below we provide background on these motivations, describe the specific techniques that fall under these categories, and motivate why quantization for reinforcement learning needs study.
Compression for Memory and Storage: Techniques such as quantization, pruning, sparsification, and distillation reduce the amount of storage and memory required by deep neural networks. These techniques are motivated by the need to train and deploy neural networks in memory-constrained environments (e.g., IoT or mobile). Broadly, quantization reduces the precision of network weights (Han et al., 2015; 2016; Zhu et al., 2016), pruning removes various layers and filters of a network (Alford et al., 2018; Molchanov et al., 2016), sparsification zeros out selective network values (Molchanov et al., 2016; Alford et al., 2018) and distillation compresses an ensemble of networks into one (Hinton et al., 2015; Chen et al., 2017). Various algorithms combining these core techniques have been proposed. For example, Deep Compression (Han et al., 2015) demonstrated that a combination of weight-sharing, pruning, and quantization might reduce storage requirements by 35-49x. Importantly, these methods achieve high compression rates at small losses in accuracy by exploiting the redundancy that is inherent within the neural networks.
Fast and Efficient Inference/Training: Methods like quantization, pruning, and sparsification may also be employed to improve the runtime of network inference and training as well as their energy consumption. Quantization reduces the precision of network weights and allows more efficient quantized operations to be used during training and deployment, for example, a "binary" GEMM (general matrix multiply) operation (Rastegari et al., 2016; Courbariaux et al., 2016). Pruning speeds up neural networks by removing layers or filters to reduce the overall amount of computation necessary to make predictions (Molchanov et al., 2016). Finally, sparsification zeros out network weights and enables faster computation via specialized primitives like block-sparse matrix multiply (Ren et al., 2018). These techniques not only speed up neural networks but also decrease energy consumption by requiring fewer floating-point operations.
Quantization for Reinforcement Learning: Prior work in quantization focuses mostly on quantizing image / supervised models. However, there are several key differences between these models and reinforcement learning policies: an agent’s current input and decision affects its future state and actions, there are many complex algorithms (e.g., DQN, PPO, A2C, DDPG) for training, and there are many diverse tasks. To the best of our knowledge, this is the first work to apply and analyze the performance of quantization across a broad range of reinforcement learning tasks and training algorithms.
3 QUANTIZED REINFORCEMENT LEARNING (QUARL)
We develop QuaRL, an open-source software framework that allows us to systematically apply traditional quantization methods to a broad spectrum of deep reinforcement learning models. We use the QuaRL framework to 1) evaluate how effective quantization is at compressing reinforcement learning policies, 2) analyze how quantization affects/is affected by the various environments and training algorithms in reinforcement learning and 3) establish a standard on the performance of quantization techniques across various training algorithms and environments.
Environments: We evaluate quantized models on three different types of environments: OpenAI Gym (Brockman et al., 2016), the Atari Arcade Learning Environment (Bellemare et al., 2012), and PyBullet (an open-source implementation of the MuJoCo environments). These environments consist of a variety of tasks, including CartPole, MountainCar, LunarLander, Atari games, Humanoid, etc. The complete list of environments used in the QuaRL framework is given in Table 1. Evaluations across this spectrum of tasks provide a robust benchmark of the performance of quantization applied to different reinforcement learning tasks.
Training Algorithms: We study quantization on four popular reinforcement learning algorithms, namely Advantage Actor-Critic (A2C) (Mnih et al., 2016), Deep Q-Network (DQN) (Mnih et al., 2013), Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). Evaluating these standard reinforcement learning algorithms that are well established in the community allows us to explore whether quantization is similarly effective across different reinforcement learning algorithms.
Quantization Methods: We apply standard quantization techniques to deep reinforcement learning models. Our main approaches are post-training quantization and quantization aware training. We apply these methods to models trained in different environments by different reinforcement learning algorithms to broadly understand their performance. We describe how these methods are applied in the context of reinforcement learning below.
3.1 POST-TRAINING QUANTIZATION
Post-training quantization takes a trained full precision model (32-bit floating point) and quantizes its weights to lower precision values. We quantize weights down to fp16 (16-bit floating point) and int8 (8-bit integer) values. fp16 quantization is based on IEEE-754 floating point rounding and int8 quantization uses uniform affine quantization.
Fp16 Quantization: Fp16 quantization involves taking full precision (32-bit) values and mapping them to the nearest representable 16-bit float. The IEEE-754 standard specifies 16-bit floats with the format shown below. Bits are grouped to specify the value of the sign (S), fraction (F ) and exponent (E) which are then combined with the following formula to yield the effective value of the float:
Bit layout: Sign S (1 bit) | Exponent E (5 bits) | Fraction F (10 bits)

$$V_{fp16} = (-1)^S \times \left(1 + \frac{F}{2^{10}}\right) \times 2^{E-15}$$
In subsequent sections, we refer to float16 quantization using the following notation:
$$Q_{fp16}(W) = \mathrm{round}_{fp16}(W)$$
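As a concrete aside (not from the paper), fp16 post-training quantization amounts to rounding each weight to the nearest representable half-precision value, which NumPy exposes directly as a cast:

```python
import numpy as np

# Hypothetical full-precision weights; Q_fp16 is a round-to-nearest cast.
w_fp32 = np.array([0.1234567, -1.9876543, 3.0000002], dtype=np.float32)
w_fp16 = w_fp32.astype(np.float16)                   # Q_fp16(W) = round_fp16(W)
print(w_fp16)                                        # values rounded to nearest fp16
print(np.abs(w_fp32 - w_fp16.astype(np.float32)))    # per-weight quantization error
```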
Uniform Affine Quantization: Uniform affine quantization (TensorFlow, 2018b) is applied to a full precision weight matrix and is performed by 1) calculating the minimum and maximum values of the matrix and 2) dividing this range equally into 2n representable values (where n is the number of bits being quantized to). As each representable value is equally spaced across this range, the quantized value can be represented by an integer. More specifically, quantization from full precision to n-bit integers is given by:
$$Q_n(W) = \left\lfloor \frac{W}{\delta} \right\rfloor + z, \quad \text{where } \delta = \frac{|\min(W, 0)| + |\max(W, 0)|}{2^n}, \quad z = \left\lfloor \frac{-\min(W, 0)}{\delta} \right\rfloor$$

Note that δ is the gap between representable numbers and z is an offset so that 0 is exactly representable. Further note that we use min(W, 0) and max(W, 0) to ensure that 0 is always represented. To dequantize we perform:

$$D(W_q, \delta, z) = \delta (W_q - z)$$
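For concreteness, here is a minimal NumPy sketch of the per-tensor quantizer and dequantizer defined above (our own illustration; the paper relies on TensorFlow's built-in uniform affine quantization):

```python
import numpy as np

def affine_quantize(w, n_bits=8):
    """Per-tensor uniform affine quantization Q_n(W) as defined above."""
    w_min, w_max = min(w.min(), 0.0), max(w.max(), 0.0)             # include 0 in the range
    delta = max((abs(w_min) + abs(w_max)) / (2 ** n_bits), 1e-12)   # gap between representable values
    z = int(np.floor(-w_min / delta))                               # offset so 0 is exactly representable
    wq = np.floor(w / delta).astype(np.int64) + z
    return wq, delta, z

def affine_dequantize(wq, delta, z):
    """D(W_q, delta, z) = delta * (W_q - z)."""
    return delta * (wq - z)

w = np.random.randn(256, 64).astype(np.float32)                     # hypothetical fp32 weight matrix
wq, delta, z = affine_quantize(w, n_bits=8)
w_hat = affine_dequantize(wq, delta, z)
print("max abs quantization error:", np.abs(w - w_hat).max())       # bounded by delta
```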
In the context of QuaRL, int8 and fp16 quantization are applied after training a full precision model on an environment, as per Algorithm 1. In post training quantization, uniform quantization is applied to each fully connected layer of the model (per-tensor quantization) and is applied to each channel of convolution weights (per-axis quantization); activations are not quantized. We use post-training quantization to quantize to fp16 and int8 values.
Algorithm 1: Post-Training Quantization for Reinforcement Learning
Input: T : task or environment; L : reinforcement learning algorithm; A : model architecture; n : quantize bits (8 or 16)
Output: Reward
1: M = Train(T, L, A)
2: Q = Q_int8 if n = 8, else Q_fp16 if n = 16
3: return Eval(Q(M))
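The sketch below is our own rendering of Algorithm 1, assuming the stable-baselines 2.x API (get_parameters / load_parameters) that the paper's experiments are built on; for simplicity every parameter tensor is quantized per-tensor and then dequantized, whereas the paper applies per-axis quantization to convolution weights and leaves activations unquantized:

```python
import gym
import numpy as np
from stable_baselines import DQN   # the paper's experiments use stable-baselines (Hill et al., 2018)

def fake_quant(w, n_bits=8):
    """Per-tensor affine quantize-then-dequantize, so int8 rounding error is baked into the fp32 weights."""
    w_min, w_max = min(w.min(), 0.0), max(w.max(), 0.0)
    delta = max((abs(w_min) + abs(w_max)) / (2 ** n_bits), 1e-12)
    return (delta * np.floor(w / delta)).astype(np.float32)

def evaluate(model, env_id, episodes=100):
    """Average reward of the (possibly quantized) policy over full episodes."""
    env = gym.make(env_id)
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, done, _ = env.step(action)
            total += reward
        returns.append(total)
    return float(np.mean(returns))

# Algorithm 1, sketched: train in full precision, quantize weights post hoc, then evaluate.
model = DQN("MlpPolicy", "CartPole-v1", verbose=0)
model.learn(total_timesteps=100_000)
params = model.get_parameters()                                   # name -> fp32 ndarray
model.load_parameters({k: fake_quant(v) for k, v in params.items()})
print("int8 post-training quantization reward:", evaluate(model, "CartPole-v1"))
```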
Algorithm 2: Quantization Aware Training for Reinforcement Learning
Input: T : task or environment; L : reinforcement learning algorithm; n : quantize bits; A : model architecture; Q_d : quantization delay
Output: Reward
1: A_q = InsertAfterWeightsAndActivations(Q_train_n)
2: M, TensorMinMaxes = TrainNoQuantMonitorWeightsActivationsRanges(T, L, A_q, Q_d)
3: M = TrainWithQuantization(T, L, M, TensorMinMaxes, Q_train_n)
4: return Eval(M, Q_train_n, TensorMinMaxes)
3.2 QUANTIZATION AWARE TRAINING
Quantization aware training involves retraining the reinforcement learning policies with weights and activations uniformly quantized to n bit values. Importantly, weights are maintained in full fp32 precision except that they are passed through the uniform quantization function before being used in the forward pass. Because of this, the technique is also known as “fake quantization” (TensorFlow, 2018b). Additionally, to improve training there is an additional parameter, quantization delay (TensorFlow, 2018a), which specifies the number of full precision training steps before enabling quantization. When the number of steps is less than the quantization delay parameter, the minimum and maximum values of weights and activations are actively monitored. Afterwards, the previously
captured minimum and maximum values are used to quantize the tensors (these values remain static from then on). Specifically:
$$Q^{train}_n(W, V_{min}, V_{max}) = \left\lfloor \frac{W}{\delta} \right\rfloor + z, \quad \text{where } \delta = \frac{|V_{min}| + |V_{max}|}{2^n}, \quad z = \left\lfloor \frac{-V_{min}}{\delta} \right\rfloor$$

where V_min and V_max are the monitored minimum and maximum values of the tensor (expanding V_min and V_max to include 0 if necessary). Intuitively, the expectation is that the training process eventually learns to account for the quantization error, yielding a higher performing quantized model. Note that uniform quantization is applied to fully connected weights in the model (per-tensor quantization) and to each channel for convolution weights (per-axis quantization). n-bit quantization is applied to each layer’s weights and activations:
$$x_{k+1} = A\left(Q^{train}_n(W_k, V_{min}, V_{max})\, a_k + b\right), \quad \text{where } A \text{ is the activation function}$$
$$a_{k+1} = Q^{train}_n(x_{k+1}, V_{min}, V_{max})$$
During backward propagation, the gradient is passed through the quantization function unchanged (also known as the straight-through estimator (Hinton, 2012)), and the full precision weight matrix W is optimized as follows:
$$\frac{\partial}{\partial W}\, Q^{train}_n(W, V_{min}, V_{max}) = I$$
In the context of the QuaRL framework, the policy neural network is retrained from scratch after inserting the quantization functions between weights and activations (all else being equal). At evaluation time, the full precision weights are passed through the uniform affine quantizer to simulate quantization error during inference. Algorithm 2 describes how quantization aware training is applied in QuaRL.
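Below is a minimal NumPy illustration (ours, not the QuaRL code) of the fake-quantized forward pass and the straight-through estimator described above; the ranges are taken from the tensors themselves, standing in for the monitored V_min / V_max:

```python
import numpy as np

def fake_quant_train(x, v_min, v_max, n_bits):
    """Q_train_n: quantize with monitored ranges, then dequantize back to fp32 values."""
    v_min, v_max = min(v_min, 0.0), max(v_max, 0.0)            # keep 0 representable
    delta = max((abs(v_min) + abs(v_max)) / (2 ** n_bits), 1e-12)
    z = np.floor(-v_min / delta)
    xq = np.floor(x / delta) + z
    return delta * (xq - z)

def ste_backward(grad_output):
    """Straight-through estimator: the quantizer's gradient is treated as the identity."""
    return grad_output

# Forward pass of one layer with fake-quantized weights and activations (n = 8 bits).
W = np.random.randn(64, 32).astype(np.float32)                 # fp32 master weights
a = np.random.rand(32).astype(np.float32)                      # previous activations
Wq = fake_quant_train(W, W.min(), W.max(), n_bits=8)
x = np.maximum(Wq @ a, 0.0)                                    # A(Q(W) a + b) with ReLU, b = 0
a_next = fake_quant_train(x, x.min(), x.max(), n_bits=8)
# Backward pass: gradients flow through fake_quant_train unchanged (ste_backward),
# and the fp32 master copy W receives the update, as in quantization aware training.
```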
4 RESULTS
In this section, we first show that quantization has a regularization effect on reinforcement learning algorithms and can boost exploration. Second, we show that reinforcement learning policies can be quantized safely without significantly affecting rewards. To that end, we perform evaluations across the three principal axes of QuaRL: environments, training algorithms, and quantization methods. For post-training quantization, we evaluate each policy for 100 episodes and average the rewards. For quantization aware training (QAT), we train at least three policies and report the mean rewards over one hundred evaluations. Table 1 lists the space of evaluations explored.
Quantization as Regularization: To further establish the effects of quantization during training, we compare quantization-aware training with traditional regularization techniques (specifically layer norm (Ba et al., 2016; Kukacka et al., 2017)) and measure the amount of exploration these techniques induce. It has been shown in previous literature (Farebrother et al., 2018; Cobbe et al., 2018) that regularization actively helps reinforcement learning training generalize better; here we further reinforce this notion and additionally establish a relationship between quantization, generalization and exploration. We use the variance of the action distribution produced by the model as a proxy for exploration: intuitively, since the policy samples from this distribution when performing an action, a policy that produces a highly peaked action distribution (high variance across the action probabilities) is less likely to explore different states. Conversely, a low-variance (flatter) action distribution indicates high exploration, as the policy is more likely to take a different action than the highest scoring one.
We measure the variance of the action distribution produced by differently trained models (QAT-2, QAT-4, QAT-6, QAT-8, layer norm, and full precision) at different stages of the training process. We collect model rewards and the action distribution variance over several rollouts with deterministic action selection (the model performs the highest scoring action). Importantly, we use deterministic action selection to ensure that the states reached are similar to the distribution seen by the model during training. To separate signal from noise, we smooth both the rewards and the action variances with a smoothing factor of 0.95.
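A sketch of how this exploration proxy could be computed (our own illustration; the per-step action distributions would come from the trained policy rather than the hypothetical values used here):

```python
import numpy as np

def action_distribution_variance(action_probs_per_step):
    """Mean variance of the policy's action distribution across rollout steps."""
    return float(np.mean([np.var(p) for p in action_probs_per_step]))

def exp_smooth(values, factor=0.95):
    """Exponential smoothing used to separate signal from noise."""
    out, prev = [], values[0]
    for v in values:
        prev = factor * prev + (1.0 - factor) * v
        out.append(prev)
    return out

# Hypothetical per-step action distributions from a deterministic rollout:
# a peaked distribution (high variance across probabilities) implies little exploration,
# a flatter distribution (low variance) implies more exploration.
rollout = [np.array([0.85, 0.05, 0.05, 0.05]), np.array([0.4, 0.3, 0.2, 0.1])]
print(action_distribution_variance(rollout))
print(exp_smooth([action_distribution_variance([p]) for p in rollout]))
```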
Figure 4 shows the variance of the action distribution produced by the models at different stages of training. Training with higher quantization levels (e.g., 2-bit vs. 4-bit training), like layer norm regularization, induces lower action distribution variance and hence indicates more exploration. Furthermore, the reward plot in Figure 4 shows that despite lower action variance, models trained with quantization achieve a reward similar to the full precision baseline, which indicates that the higher exploration is facilitated by quantization and not by a lack of training. Note that quantization is turned on at 5,000,000 steps and we see its effect on the action distribution variance shortly after this point. In summary, the data show that training with quantization, like traditional regularization, in part regularizes reinforcement learning training by facilitating exploration during the training process.
Effectiveness of Quantization: To evaluate the overall effectiveness of quantization for deep reinforcement learning, we apply post-training quantization and quantization aware training to a spectrum of tasks and record their performance. We present the reward results for post-training quantization in Table 2. We also compute the percentage error of the performance of the quantized policy relative to that of the corresponding full precision baseline (Efp16 and Eint8). Additionally, we report the mean of the errors across tasks for each of the training algorithms.
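The excerpt does not spell out the error formula; presumably it is the percentage drop relative to the fp32 reward, so that negative values mean the quantized policy outperformed the baseline. Under that assumption:

```python
def relative_error(fp32_reward, quantized_reward):
    """E = (R_fp32 - R_quant) / R_fp32 * 100; negative => quantized beats the fp32 baseline."""
    return (fp32_reward - quantized_reward) / fp32_reward * 100.0

# Illustrative values only: with an fp32 Breakout reward of 214 (see Table 3 later),
# quantized rewards of roughly 78 and 217 would reproduce 63.55% and -1.40% errors.
print(relative_error(214, 78))
print(relative_error(214, 217))
```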
The absolute mean of the 8-bit and 16-bit relative errors ranges between 2% and 5% (with the exception of DQN), which indicates that models may be quantized to 8/16-bit precision without much loss in quality. Interestingly, the overall performance difference between 8-bit and 16-bit post-training quantization is minimal (with the exception of the DQN algorithm, for reasons we explain in Section 4). We believe this is because the policies’ weight distributions are narrow enough that 8 bits can capture the distribution of weights without much error. In a few cases, post-training quantization yields better scores than the full precision policy. We believe that quantization injected an amount of noise that was small enough to maintain a good policy and large enough to regularize model behavior; this supports some of the results seen by Louizos et al. (2018a); Bishop (1995); Hirose et al. (2018); see the appendix for plots showing that there is a sweet spot for post-training quantization.
For quantization aware training, we train the policy with fake-quantization operations while maintaining the same model and hyperparameters (see Appendix). Figure 2 shows the results of quantization aware training on multiple environments and training algorithms to compress the policies down from 8-bits to 2-bits. Generally, the performance relative to the full precision baseline is maintained until 5/6-bit quantization, after which there is a drop in performance. Broadly, at 8-bits, we see no degradation in performance. From the data, we see that quantization aware training achieves higher rewards than post-training quantization and also sometimes outperforms the full precision baseline.
Effect of Environment on Quantization Quality: To analyze the task’s effect on quantization quality we plot the distribution of weights of full precision models trained in three environments (Breakout, Beamrider and Pong) and their error after applying 8-bit post-training quantization on them. Each model uses the same network architecture, is trained using the same algorithm (DQN) with the same hyperparameters (see Appendix).
Figure 3 shows that the task with the highest error (Breakout) has the widest weight distribution, the task with the second-highest error (BeamRider) has a narrower weight distribution, and the task with the lowest error (Pong) has the narrowest distribution. With an affine quantizer, quantizing a narrower distribution yields less error because the distribution can be captured at a fine granularity; conversely, a wider distribution requires larger gaps between representable numbers and thus increases quantization error. The trends indicate that the environment affects the spread of the models’ weight distribution, which in turn affects quantization performance: specifically, environments that yield a wider distribution of model weights are more difficult to apply quantization to. This observation suggests that regularizing the training process may yield better performance.
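The claimed relationship between weight spread and affine quantization error is easy to check numerically; a small sketch of our own:

```python
import numpy as np

def int8_quant_error(w, n_bits=8):
    """Mean absolute error after per-tensor affine quantize/dequantize."""
    w_min, w_max = min(w.min(), 0.0), max(w.max(), 0.0)
    delta = max((abs(w_min) + abs(w_max)) / (2 ** n_bits), 1e-12)
    return float(np.mean(np.abs(w - delta * np.floor(w / delta))))

rng = np.random.default_rng(0)
for spread in [0.25, 0.5, 1.0, 2.0]:              # wider weight distributions ...
    w = rng.normal(scale=spread, size=100_000).astype(np.float32)
    print(spread, int8_quant_error(w))            # ... give a larger gap delta and larger error
```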
Algorithm | Environment | fp32 Reward | E_int8 | E_fp16
DQN | Breakout | 214 | 63.55% | -1.40%
PPO | Breakout | 400 | 8.00% | 0.00%
A2C | Breakout | 379 | 7.65% | 2.11%

Table 3: Rewards for DQN, PPO, and A2C.
[Figure 4: log-scale weight histograms (frequency vs. weight, x-axis from -2.0 to 1.0) for the DQN (min weight -2.21, max weight 1.31), PPO (min weight -1.02, max weight 0.58), and A2C (min weight -0.79, max weight 0.72) policies.]
Figure 4: Weight distributions for the policies trained using DQN, PPO and A2C. DQN policy weights are more spread out and more difficult to cover effectively by 8-bit quantization (yellow lines). This explains the higher quantization error for DQN in Table 3. A negative error indicates that the quantized model outperformed the full precision baseline.
Effect of Training Algorithm on Quantization Quality: To determine the effect of the reinforcement learning training algorithm on the performance of quantized models, we compare the performance of post-training quantized models trained by various algorithms. Table 3 shows the 8-bit post-training quantization error for different reinforcement learning algorithms on the Atari Breakout game. The results indicate that the A2C training algorithm is most conducive to int8 post-training quantization, followed by PPO2 and DQN. Interestingly, we see a sharp performance drop compared to the corresponding full precision baseline when applying 8-bit post-training quantization to models trained by DQN. At 8 bits, models trained by PPO2 and A2C have relative errors of 8% and 7.65%, whereas the model trained by DQN has an error of ∼64%. To understand this phenomenon, we plot the distribution of model weights trained by each algorithm, shown in Figure 4. The plot shows that the weight distribution of the model trained by DQN is significantly wider than those of the models trained by PPO2 and A2C. A wider distribution of weights indicates a higher quantization error, which explains the large error of the 8-bit quantized DQN model. This also explains why using more bits (fp16) is more effective for the model trained by DQN (reducing the error relative to the full precision baseline from ∼64% down to ∼-1.4%). These results signify that different RL algorithms (on-policy vs. off-policy) optimize different objective functions and hence can yield very different weight distributions; a wider distribution has a more pronounced impact on the quantization error.
5 CASE STUDIES
To show the usefulness of our results, we use quantization to optimize the training and deployment of reinforcement learning policies. We 1) train a Pong model 1.5× faster by using mixed precision optimization and 2) deploy a quantized robot navigation model onto a resource-constrained embedded system (RasPi-3b), demonstrating a 4× reduction in memory and an 18× speedup in inference. Faster training time means running more experiments in the same amount of time, and achieving speedups on resource-constrained devices enables deployment of the policies on real robots.
Mixed/Half-Precision Training: Motivated by the observation that reinforcement learning training is robust to quantization error, we train three policies of increasing model complexity (Policy A, Policy B, and Policy C) using mixed precision training and compare their performance to that of full precision training (see Appendix for details). In mixed precision training, the policy weights, activations, and gradients are represented in fp16. A master copy of the weights is stored in full precision (fp32) and updated during the backward pass (Micikevicius et al., 2017). We measure the runtime and convergence rate of both full precision and mixed precision training (see Appendix).
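A schematic NumPy version of one mixed-precision update in the spirit of Micikevicius et al. (2017) (our illustration, not the exact training loop used here): compute in fp16, keep and update an fp32 master copy, and use a fixed loss scale:

```python
import numpy as np

# Hypothetical single-layer regression used only to show the mechanics of mixed precision.
master_w = np.random.randn(32, 16).astype(np.float32)     # fp32 master copy of the weights
x = np.random.randn(16).astype(np.float16)                # fp16 input
target = np.random.randn(32).astype(np.float16)
lr, loss_scale = 1e-3, 128.0

for _ in range(100):
    w16 = master_w.astype(np.float16)                      # fp16 copy used for compute
    err = w16 @ x - target                                 # fp16 forward pass and error
    grad16 = np.outer(err * np.float16(loss_scale), x)     # scaled fp16 gradient of 0.5*||err||^2
    grad32 = grad16.astype(np.float32) / loss_scale        # unscale in fp32
    master_w -= lr * grad32                                # update only the fp32 master weights
```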
Algorithm | Network Parameter | fp32 Runtime (min) | MP Runtime (min) | Speedup
DQN-Pong | Policy A | 127 | 156 | 0.87×
DQN-Pong | Policy B | 179 | 172 | 1.04×
DQN-Pong | Policy C | 391 | 242 | 1.61×

Table 4: Mixed precision training for reinforcement learning.
[Figure 5: reward vs. training step (0 to 1M) for Policy A, Policy B, and Policy C, comparing mixed precision and fp32-only training; rewards range from -20 to 20.]
Figure 5: Mixed precision v/s fp32 training rewards.
Figure 5 shows that all three policies converge under both full precision and mixed precision training. Interestingly, for Policy B, training with mixed precision yields faster convergence; we believe that some amount of quantization error speeds up the training process. Table 4 shows the computational speedup of the training loop from using mixed precision training. While using mixed precision training on smaller networks (Policy A) may slow down training iterations (as the overhead of fp32-to-fp16 conversions outweighs the speedup of low precision ops), larger networks (Policy C) show up to a 60% speedup. Generally, our results show that mixed precision may speed up the training process by up to 1.6× without harming convergence.

Quantized Policy for Deployment: To show the benefits of quantization for deploying reinforcement learning policies, we train multiple point-to-point navigation models (Policy I, II, and III) for aerial robots using Air Learning (Krishnan et al., 2019) and deploy them onto a RasPi-3b, a cost-effective, general-purpose embedded processor. The RasPi-3b is used as a proxy for the compute platform on the aerial robot; other platforms on aerial robots have similar characteristics. For each of these policies, we report the accuracies and inference speedups attained by the int8 and fp32 policies.
Table 5 shows the accuracies and inference speedups attained for each corresponding quantized policy. We see that quantizing smaller policies (Policy I) yields moderate inference speedups (1.18× for Policy I), while quantizing larger models (Policies II, III) can speed up inference by up to 18×. This speedup in Policy III's execution time raises the rate at which hardware actuation commands are generated from 5 Hz (fp32) to 90 Hz (int8). Note that in this experiment we quantize both weights and activations to 8-bit integers; quantized models exhibit a larger loss in accuracy because activations are more difficult to quantize without some form of calibration to determine the range of activation values (Choi et al., 2018).
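One way to obtain such an int8 policy for an embedded target is TensorFlow Lite's post-training converter; the sketch below uses the TF2-style converter API with a toy stand-in network and random calibration data, so it illustrates the flow rather than the paper's exact pipeline:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the trained policy network (e.g., an MLP policy head).
policy = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(25),                    # 25 discrete actions, as in the Air Learning task
])

def representative_data():
    # A few hundred observations let the converter calibrate activation ranges,
    # which the text notes is the hard part of quantizing activations.
    for _ in range(300):
        yield [np.random.rand(1, 10).astype("float32")]   # replace with real observations

converter = tf.lite.TFLiteConverter.from_keras_model(policy)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
open("policy_int8.tflite", "wb").write(converter.convert())  # int8 weights and activations
```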
A deeper investigation shows that Policies II and III take more memory than the total RAM capacity of the RasPi-3b, causing numerous accesses to swap memory (see Appendix) during inference, which is extremely slow. Quantizing these policies allows them to fit into the RasPi's RAM, eliminating accesses to swap and boosting performance by an order of magnitude. Figure 6 shows the memory usage while executing the quantized and unquantized versions of Policy III, and shows how, without quantization, memory usage skyrockets above the total RAM capacity of the board.
In the context of real-world deployment of an aerial (or any other type of) robot, a speedup in policy execution potentially translates to faster actuation commands to the robot, which in turn implies faster and better responsiveness in a highly dynamic environment (Falanga et al., 2019). Our case study demonstrates how quantization can facilitate the deployment of accurate policies trained using reinforcement learning onto a resource-constrained platform.
6 CONCLUSION
We perform the first study of quantization effects on deep reinforcement learning using QuaRL, a software framework to benchmark and analyze the effects of quantization on various reinforcement learning tasks and algorithms. We analyze the performance, in terms of rewards, of post-training quantization and quantization aware training as applied to multiple reinforcement learning tasks and algorithms, with the high-level goal of reducing policies' resource requirements for efficient training and deployment. We broadly demonstrate that reinforcement learning models may be quantized down to 8/16 bits without loss of performance. We also link quantization performance to the distribution of the models' weights, demonstrating that some reinforcement learning algorithms and tasks are more difficult to quantize because they widen the models' weight distribution. Additionally, we show that quantization during training acts as a regularizer that improves exploration. Finally, we apply our results to optimize the training and inference of reinforcement learning models, demonstrating a 50% training speedup for Pong using mixed precision optimization and up to an 18× inference speedup on a RasPi by quantizing a navigation policy. In summary, our findings indicate that there is much potential for quantization of deep reinforcement learning policies.
A POST TRAINING QUANTIZATION RESULTS
Here we break out the post-training quantization results listed in Table 2 into four separate tables for clarity. Each table corresponds to the post-training quantization results for a specific algorithm: Table 5 tabulates the post-training quantization results for the A2C algorithm, Table 6 for DQN, and Tables 7 and 8 for the PPO and DDPG algorithms, respectively.
B DQN HYPERPARAMETERS FOR ATARI
For all Atari games in the results section we use a standard 3 Layer Conv (128) + 128 FC. Hyperparameters are listed in Table 9. We use stable-baselines (Hill et al., 2018) for all the reinforcement learning experiments. We use Tensorflow version 1.14 as the machine learning backend.
C MIXED PRECISION HYPERPARAMETERS
In mixed precision training, we use three policies, namely Policy A, Policy B, and Policy C. The policy architectures are tabulated in Table 10. To measure the runtimes for fp32 and mixed precision training, we use the Linux time command for each run and add the usr and sys times. The hyperparameters used for training the DQN-Pong agent are listed in Table 9.
D QUANTIZED POLICY DEPLOYMENT
Here we describe the methodology used to train a point-to-point navigation policy in Air Learning and deploy it on an embedded compute platform such as the Ras-Pi 3b+. Air Learning is an AI research platform that provides infrastructure components and tools to train fully functional reinforcement learning policies for aerial robots. In simple environments like OpenAI Gym and Atari, training and inference happen in the same environment without any randomization. In contrast, Air Learning allows us to randomize various environmental parameters such as arena size, number of obstacles, goal position, etc.
In this study, we fix the arena size to 25 m × 25 m × 20 m. The maximum number of obstacles at any time is between one and five, chosen randomly on an episode-to-episode basis. The positions of these obstacles and of the end point (goal) are also changed every episode. We train the aerial robot to reach the end point using the DQN algorithm. The input to the policy is the data from a sensor mounted on the drone along with IMU measurements. The output of the policy is one of 25 actions with different velocities and yaw rates. The reward function we use in this study is defined by the following equation:
$$r = 1000\,\alpha - 100\,\beta - D_g - D_c\,\delta - 1 \quad (1)$$
Here, α is a binary variable whose value is ‘1’ if the agent reaches the goal and ‘0’ otherwise. β is a binary variable which is set to ‘1’ if the aerial robot collides with any obstacle or runs out of the maximum allocated steps for an episode.1 Otherwise, β is ‘0’, effectively penalizing the agent for hitting an obstacle or not reaching the end point in time. D_g is the distance to the end point from the agent’s current location, motivating the agent to move closer to the goal. D_c is the distance correction, which is applied to penalize the agent if it chooses actions that speed it away from the goal. The distance correction term is defined as follows:
$$D_c = (V_{max} - V_{now}) \times t_{max} \quad (2)$$

where V_max is the maximum velocity possible for the agent, which for DQN is fixed at 2.5 m/s, V_now is the current velocity of the agent, and t_max is the duration of the actuation.
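Written out as code, the reward of Eqs. (1)-(2) is straightforward; the values of t_max and δ are not given in the text, so they are assumed to be 1 in this sketch:

```python
def distance_correction(v_now, v_max=2.5, t_max=1.0):
    """Eq. (2): D_c = (V_max - V_now) * t_max; t_max value assumed here."""
    return (v_max - v_now) * t_max

def navigation_reward(reached_goal, crashed_or_timed_out, dist_to_goal, v_now, delta=1.0):
    """Eq. (1): r = 1000*alpha - 100*beta - D_g - D_c*delta - 1; delta value assumed here."""
    alpha = 1.0 if reached_goal else 0.0
    beta = 1.0 if crashed_or_timed_out else 0.0
    return 1000 * alpha - 100 * beta - dist_to_goal - distance_correction(v_now) * delta - 1

print(navigation_reward(False, False, dist_to_goal=12.0, v_now=1.5))  # en-route step
print(navigation_reward(True, False, dist_to_goal=0.0, v_now=0.5))    # goal reached
```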
We train three policies, namely Policy I, Policy II, and Policy III. Each policy is learned through curriculum learning, where we move the end goal farther away as training progresses. We terminate training once the agent has finished 1 million steps. We evaluate all three policies in fp32 and quantized int8 data types for 100 evaluations in Air Learning and report the success rate.
1We set the maximum allowed steps in an episode to 750. This is to make sure the agent finds the end point (goal) within a finite number of steps.
We also take these policies and characterize the system performance on a Ras-pi 3b platform. Ras-Pi 3b is a proxy for the compute platform available on the aerial robot. The hardware specification for Ras-Pi 3b is shown in Table 11.
We allocate a region of storage space as swap memory, i.e., a region of memory allocated on disk that is used when the system memory is fully utilized by a process. On the Ras-Pi 3b, the swap memory is allocated in Flash storage.
E POST-TRAINING QUANTIZATION SWEET SPOT
Figure 7 shows that there is a sweet spot for post-training quantization. Sometimes, quantizing to fewer bits outperforms higher precision quantization. Each plot was generated by applying post-training quantization to the full precision baselines and evaluating over 10 runs. | 1. What is the focus of the paper regarding deep reinforcement learning (DRL) models?
2. What are the strengths and weaknesses of the proposed approach in terms of resource usage and model quality?
3. How does the reviewer assess the significance and novelty of the research compared to prior works in supervised learning?
4. What are the limitations of the study regarding its applicability to real environments and potential secondary effects on the learning process?
5. Are there any questions about the experimental setup or details that the reviewer would like to know more about? | Review | Review
Training and deployment of DRL models is expensive. Quantization has proven useful in supervised learning, however it is yet to be tested thoroughly in DRL. This paper investigates whether quantization can be applied in DRL towards better resource usage (compute, energy) without harming the model quality. Both quantization-aware training (via fake quantization) and post-training quantization is investigated. The work demonstrates that policies can be reduced to 6-8 bits without quality loss. The paper indicates that quantization can indeed lower resource consumption without quality decline in realistic DRL tasks and for various algorithms.
The researchers propose a benchmark called QUARL that allows them to evaluate the effectiveness of quantization as well as the impact of quantization across a set of established DRL algorithms (e.g., DQN, DDPG, PPO) and environments (e.g., OpenAI Gym, ALE). Quantizations tested: fp32 -> fp16, int8, uniform affine.
The idea is simple and carries over from (image-based) supervised learning. The experiments are exhaustive and have to the best of my knowledge not yet been conducted. The conclusions indicate the advantage of quantization, however it is unclear how these results would generalize to real environments (the environments used are after all still simple benchmarks, e.g., half-cheetah or pong). The results are also not entirely surprising or impactful: how is quantization impacting reinforcement learning in a different way than supervised learning? E.g., DQN is supervised learning of a Q-value function against a target. What secondary effects does quantization have on the learning procedure: e.g., does it boost exploration behavior or does it regularize training? We also know that some of these tasks can be solved by extremely small models (https://arxiv.org/abs/1806.01363), while the models used in this work are significantly larger: is quantization working simply because the network capacity is large enough to allow it? These could be investigated in more detail. Furthermore, I'm also missing some experimental setup details: e.g., how many seeds were used for all of the experiments (which is known to greatly affect the results on the benchmarks used in this paper)? |
ICLR | Title
Quantized Reinforcement Learning (QuaRL)
Abstract
Recent work has shown that quantization can help reduce the memory, compute, and energy demands of deep neural networks without significantly harming their quality. However, whether these prior techniques, applied traditionally to imagebased models, work with the same efficacy to the sequential decision making process in reinforcement learning remains an unanswered question. To address this void, we conduct the first comprehensive empirical study that quantifies the effects of quantization on various deep reinforcement learning policies with the intent to reduce their computational resource demands. We apply techniques such as post-training quantization and quantization aware training to a spectrum of reinforcement learning tasks (such as Pong, Breakout, BeamRider and more) and training algorithms (such as PPO, A2C, DDPG, and DQN). Across this spectrum of tasks and learning algorithms, we show that policies can be quantized to 6-8 bits of precision without loss of accuracy. We also show that certain tasks and reinforcement learning algorithms yield policies that are more difficult to quantize due to their effect of widening the models’ distribution of weights and that quantization aware training consistently improves results over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that quantization aware training, like traditional regularizers, regularize models by increasing exploration during the training process. Finally, we demonstrate usefulness of quantization for reinforcement learning. We use half-precision training to train a Pong model 50% faster, and we deploy a quantized reinforcement learning based navigation policy to an embedded system, achieving an 18× speedup and a 4× reduction in memory usage over an unquantized policy.
1 INTRODUCTION
Deep reinforcement learning has promise in many applications, ranging from game playing (Silver et al., 2016; 2017; Kempka et al., 2016) to robotics (Lillicrap et al., 2015; Zhang et al., 2015) to locomotion and transportation (Arulkumaran et al., 2017; Kendall et al., 2018). However, the training and deployment of reinforcement learning models remain challenging. Training is expensive because of their computationally expensive demands for repeatedly performing the forward and backward propagation in neural network training. Deploying deep reinforcement learning (DRL) models is prohibitively expensive, if not even impossible, due to the resource constraints on embedded computing systems typically used for applications, such as robotics and drone navigation.
Quantization can be helpful in substantially reducing the memory, compute, and energy usage of deep learning models without significantly harming their quality (Han et al., 2015; Zhou et al., 2016; Han et al., 2016). However, it is unknown whether the same techniques carry over to reinforcement learning. Unlike models in supervised learning, the quality of a reinforcement learning policy depends on how effective it is in sequential decision making. Specifically, an agent’s current input and decision heavily affect its future state and future actions; it is unclear how quantization affects the long-term decision making capability of reinforcement learning policies. Also, there are many different algorithms to train a reinforcement learning policy. Algorithms like actor-critic methods (A2C), deep-q networks (DQN), proximal policy optimization (PPO) and deep deterministic policy gradients (DDPG) are significantly different in their optimization goals and implementation details, and it is unclear whether quantization would be similarly effective across these algorithms. Finally, reinforcement learning policies are trained and applied to a wide range of environments, and it is unclear how quantization affects performance in tasks of differing complexity.
Here, we aim to understand quantization effects on deep reinforcement learning policies. We comprehensively benchmark the effects of quantization on policies trained by various reinforcement learning algorithms on different tasks, conducting in excess of 350 experiments to present representative and conclusive analysis. We perform experiments over 3 major axes: (1) environments (Atari Arcade, PyBullet, OpenAI Gym), (2) reinforcement learning training algorithms (Deep-Q Networks, Advantage Actor-Critic, Deep Deterministic Policy Gradients, Proximal Policy Optimization) and (3) quantization methods (post-training quantization, quantization aware training).
We show that quantization induces a regularization effect by increasing exploration during training. This motivates the use of quantization aware training, which we show demonstrates improved performance over post-training quantization and oftentimes even over the full precision baseline. Additionally, We show that deep reinforcement learning models can be quantized to 6-8 bits of precision without loss in quality. Furthermore, we analyze how each axis affects the final performance of the quantized model to develop insights into how to achieve better model quantization. Our results show that some tasks and training algorithms yield models that are more difficult to apply post-training quantization as they widen the spread of the models’ weight distribution, yielding higher quantization error. To demonstrate the usefulness of quantization for deep reinforcement learning, we 1) use half precision ops to train a Pong model 50% faster than full precision training and 2) deploy a quantized reinforcement learning based navigation policy onto an embedded system and achieve an 18× speedup and a 4× reduction in memory usage over an unquantized policy.
2 RELATED WORK
Reducing neural network resource requirements is an active research topic. Techniques include quantization (Han et al., 2015; 2016; Zhu et al., 2016; Jacob et al., 2018; Lin et al., 2019; Polino et al., 2018; Sakr & Shanbhag, 2018), deep compression (Han et al., 2016), knowledge distillation (Hinton et al., 2015; Chen et al., 2017), sparsification (Han et al., 2016; Alford et al., 2018; Park et al., 2016; Louizos et al., 2018b; Bellec et al., 2017) and pruning (Alford et al., 2018; Molchanov et al., 2016; Li et al., 2016). These methods are employed because they compress to reduce storage and memory requirements as well as enable fast and efficient inference and training with specialized operations. We provide background for these motivations and describe the specific techniques that fall under these categories and motivate why quantization for reinforcement learning needs study.
Compression for Memory and Storage: Techniques such as quantization, pruning, sparsification, and distillation reduce the amount of storage and memory required by deep neural networks. These techniques are motivated by the need to train and deploy neural networks on memoryconstrained environments (e.g., IoT or mobile). Broadly, quantization reduces the precision of network weights (Han et al., 2015; 2016; Zhu et al., 2016), pruning removes various layers and filters of a network (Alford et al., 2018; Molchanov et al., 2016), sparsification zeros out selective network values (Molchanov et al., 2016; Alford et al., 2018) and distillation compresses an ensemble of networks into one (Hinton et al., 2015; Chen et al., 2017). Various algorithms combining these core techniques have been proposed. For example, Deep Compression (Han et al., 2015) demonstrated that a combination of weight-sharing, pruning, and quantization might reduce storage requirements by 35-49x. Importantly, these methods achieve high compression rates at small losses in accuracy by exploiting the redundancy that is inherent within the neural networks.
Fast and Efficient Inference/Training: Methods like quantization, pruning, and sparsification may also be employed to improve the runtime of network inference and training as well as their energy consumption. Quantization reduces the precision of network weights and allows more efficient quantized operations to be used during training and deployment, for example, a ”binary” GEMM (general matrix multiply) operation (Rastegari et al., 2016; Courbariaux et al., 2016). Pruning speeds up neural networks by removing layers or filters to reduce the overall amount of computation necessary to make predictions (Molchanov et al., 2016). Finally, Sparsification zeros out network weights and enables faster computation via specialized primitives like block-sparse matrix multiply (Ren et al., 2018). These techniques not only speed up neural networks but decrease energy consumption by requiring fewer floating-point operations.
Quantization for Reinforcement Learning: Prior work in quantization focuses mostly on quantizing image / supervised models. However, there are several key differences between these models and reinforcement learning policies: an agent’s current input and decision affects its future state and actions, there are many complex algorithms (e.g: DQN, PPO, A2C, DDPG) for training, and there are many diverse tasks. To the best of our knowledge, this is the first work to apply and analyze the performance of quantization across a broad of reinforcement learning tasks and training algorithms.
3 QUANTIZED REINFORCEMENT LEARNING (QUARL)
We develop QuaRL, an open-source software framework that allows us to systematically apply traditional quantization methods to a broad spectrum of deep reinforcement learning models. We use the QuaRL framework to 1) evaluate how effective quantization is at compressing reinforcement learning policies, 2) analyze how quantization affects/is affected by the various environments and training algorithms in reinforcement learning and 3) establish a standard on the performance of quantization techniques across various training algorithms and environments.
Environments: We evaluate quantized models on three different types of environments: OpenAI gym (Brockman et al., 2016), Atari Arcade Learning (Bellemare et al., 2012), and PyBullet (which is an open-source implementation of the MuJoCo). These environments consist of a variety of tasks, including CartPole, MountainCar, LunarLandar, Atari Games, Humanoid, etc. The complete list of environments used in the QuaRL framework is listed in Table 1. Evaluations across this spectrum of different tasks provide a robust benchmark on the performance of quantization applied to different reinforcement learning tasks.
Training Algorithms: We study quantization on four popular reinforcement learning algorithms, namely Advantage Actor-Critic (A2C) (Mnih et al., 2016), Deep Q-Network (DQN) (Mnih et al., 2013), Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). Evaluating these standard reinforcement learning algorithms that are well established in the community allows us to explore whether quantization is similarly effective across different reinforcement learning algorithms.
Quantization Methods: We apply standard quantization techniques to deep reinforcement learning models. Our main approaches are post-training quantization and quantization aware training. We apply these methods to models trained in different environments by different reinforcement learning algorithms to broadly understand their performance. We describe how these methods are applied in the context of reinforcement learning below.
3.1 POST-TRAINING QUANTIZATION
Post-training quantization takes a trained full precision model (32-bit floating point) and quantizes its weights to lower precision values. We quantize weights down to fp16 (16-bit floating point) and int8 (8-bit integer) values. fp16 quantization is based on IEEE-754 floating point rounding and int8 quantization uses uniform affine quantization.
Fp16 Quantization: Fp16 quantization involves taking full precision (32-bit) values and mapping them to the nearest representable 16-bit float. The IEEE-754 standard specifies 16-bit floats with the format shown below. Bits are grouped to specify the value of the sign (S), fraction (F ) and exponent (E) which are then combined with the following formula to yield the effective value of the float:
Sign Exponent (5 bits) Fraction (10 bits)
Vfp16 = (−1)S × (1 + F
210 )× 2E−15
In subsequent sections, we refer to float16 quantization using the following notation:
Qfp16(W ) = roundfp16(W )
Uniform Affine Quantization: Uniform affine quantization (TensorFlow, 2018b) is applied to a full precision weight matrix and is performed by 1) calculating the minimum and maximum values of the matrix and 2) dividing this range equally into 2n representable values (where n is the number of bits being quantized to). As each representable value is equally spaced across this range, the quantized value can be represented by an integer. More specifically, quantization from full precision to n-bit integers is given by:
Qn(W ) =
⌊ W
δ
⌋ + z where δ =
|min(W, 0)|+ |max(W, 0)| 2n
, z = ⌊ −min(W, 0)
δ ⌋ Note that δ is the gap between representable numbers and z is an offset so that 0 is exactly representable. Further note that we usemin(W, 0) andmax(W, 0) to ensure that 0 is always represented. To dequantize we perform:
D(Wq, δ, z) = δ(Wq − z)
In the context of QuaRL, int8 and fp16 quantization are applied after training a full precision model on an environment, as per Algorithm 1. In post training quantization, uniform quantization is applied to each fully connected layer of the model (per-tensor quantization) and is applied to each channel of convolution weights (per-axis quantization); activations are not quantized. We use post-training quantization to quantize to fp16 and int8 values.
Algorithm 1: Post-Training Quantization for Reinforcement Learning Input: T : task or environment Input: L : reinforcement learning algorithm Input: A : model architecture Input: n : quantize bits (8 or 16) Output: Reward
1 M = Train(T , L, A)
2 Q =
{ Qint8 n = 8
Qfp16 n = 16 3 return Eval(Q(M))
Algorithm 2: Quantization Aware Training for Reinforcement Learning Output: Reward Input: T : task or environment Input: L : reinforcement learning algorithm Input: n : quantize bits Input: A : model architecture Input: Qd : quantization delay
1 Aq = InsertAfterWeightsAndActivations(Qtrainn ) 2 M , TensorMinMaxes =
TrainNoQuantMonitorWeightsActivationsRanges(T , L, Aq , Qd) 3 M = TrainWithQuantization(T , L, M , TensorMinMaxes, Qtrainn ) 4 return Eval(M , Qtrainn , TensorMinMaxes)
3.2 QUANTIZATION AWARE TRAINING
Quantization aware training involves retraining the reinforcement learning policies with weights and activations uniformly quantized to n bit values. Importantly, weights are maintained in full fp32 precision except that they are passed through the uniform quantization function before being used in the forward pass. Because of this, the technique is also known as “fake quantization” (TensorFlow, 2018b). Additionally, to improve training there is an additional parameter, quantization delay (TensorFlow, 2018a), which specifies the number of full precision training steps before enabling quantization. When the number of steps is less than the quantization delay parameter, the minimum and maximum values of weights and activations are actively monitored. Afterwards, the previously
captured minimum and maximum values are used to quantize the tensors (these values remain static from then on). Specifically:
Qtrainn (W,Vmin, Vmax) =
⌊ W
δ
⌋ + z where δ =
|Vmin|+ |Vmax| 2n , z = ⌊ −Vmin δ ⌋ Where Vmin and Vmax are the monitored minimum and maximum values of the tensor (expanding Vmin and Vmax to include 0 if necessary). Intuitively, the expectation is that the training process eventually learns to account for the quantization error, yielding a higher performing quantized model. Note that uniform quantization is applied to fully connected weights in the model (per-tensor quantization) and to each channel for convolution weights (per-axis quantization). n bit quantization is applied to each layer’s weights and activations:
xk+1 = A(Q train n (Wk, Vmin, Vmax)ak + b) where A is the activation function
ak+1 = Q train n (xk+1, Vmin, Vmax)
During backward propagation, the gradient is passed through the quantization function unchanged (also known as the straight-through estimator (Hinton, 2012)), and the full precision weight matrix W is optimized as follows:
∆WQ train n (W,Vmin, Vmax) = I
In context of the QuaRL framework, the policy neural network is retrained from scratch after inserting the quantization functions between weights and activations (all else being equal). At evaluation full precision weights are passed through the uniform affine quantizer to simulate quantization error during inference. Algorithm 2 describes how quantization aware training is applied in QuaRL.
4 RESULTS
In this section, we first show that quantization has regularization effect on reinforcement learning algorithms and can boost exploration. Secondly, We show that reinforcement learning algorithms can be quantized safely without significantly affecting the rewards. To that end, we perform evaluations across the three principal axes of QuaRL: environments, training algorithms, and quantization methods.For post-training quantization, we evaluate each policy for 100 episodes and average the rewards. For Quantization Aware Training (QAT), we train atleast three policies and report the mean rewards over hundred evaluations. Table 1 lists the space of the evaluations explored.
Quantization as Regularization: To further establish the effects of quantization during training, we compare quantization-aware training with traditional regularization techniques (specifically layer-norm (Ba et al., 2016; Kukacka et al., 2017)) and measure the amount of exploration these techniques induce. It has been show in previous literature (Farebrother et al., 2018; Cobbe et al., 2018) that regularization actively helps reinforcement learning training generalize better; here we further reinforce this notion and additionally establish a relationship between quantization, generalization and exploration. We use the variance in action distribution produced by the model as a proxy for exploration: intuitively, since the policy samples from this distribution when performing an action, a policy that produces an action distribution with high variance is less likely to explore different states. Conversely, a low variance action distribution indicates high exploration as the policy is more likely to take a different action than the highest scoring one.
We measure the variance in action distribution produced by differently trained models (QAT2, QAT-4, QAT-6, QAT-8, with layer norm and full precision) at different stages of the training process. We collect model rewards and the action distribution variance over several rollouts with deterministic action selection (model performs the highest scoring action). Importantly, we make sure to use deterministic action selection to ensure that the states reached are similar to the the distribution seen by the model during training. To separate signal from noise, we furthermore smooth the action variances with a smoothing factor of .95 for both rewards and action variances.
Figure 4 shows the variance in action distribution produced by the models at different stages of training. Training with higher quantization levels (e.g: 2 bit vs 4 bit training), like layer norm regularization, induces lower action distribution variance and hence indicates more exploration. Furthermore, figure 4 reward plot shows that despite lower action variance, models trained with quantization achieve a reward similar to the full precision baseline, which indicates that higher exploration is facilitated by quantization and not by a lack of training. Note that quantization is turned on at 5,000,000 steps and we see its effects on the action distribution variance shortly after this point. In summary, data shows that training with quantization, like traditional regularization, in part regularizes reinforcement learning training by facilitating exploration during the training process.
Effectiveness of Quantization: To evaluate the overall effectiveness of quantization for deep reinforcement learning, we apply post-training quantization and quantization aware learning to a spectrum of tasks and record their performance. We present the reward results for post-training quantization in Table 2. We also compute the percentage error of the performance of the quantized policy relative to that of their corresponding full precision baselines (Efp16 and Eint8). Additionally, we report the mean of the errors across tasks for each of the training algorithms.
The absolute mean of 8-bit and 16-bit relative errors ranges between 2% and 5% (with the exception of DQN), which indicates that models may be quantized to 8/16 bit precision without much loss in quality. Interestingly, the overall performance difference between the 8-bit and 16-bit post-training quantization is minimal (with the exception of the DQN algorithm, for reasons we explain in Section 4). We believe this is because the policies weight distribution is narrow enough that 8 bits is able to capture the distribution of weights without much error. In a few cases, post-training quantization yields better scores than the full precision policy. We believe that quantization injected an amount of noise that was small enough to maintain a good policy and large enough to regularize model behavior; this supports some of the results seen by Louizos et al. (2018a); Bishop (1995); Hirose et al. (2018); see appendix for plots showing that there is a sweet spot for post-training quantization.
For quantization aware training, we train the policy with fake-quantization operations while maintaining the same model and hyperparameters (see Appendix). Figure 2 shows the results of quantization aware training on multiple environments and training algorithms to compress the policies down from 8-bits to 2-bits. Generally, the performance relative to the full precision baseline is maintained until 5/6-bit quantization, after which there is a drop in performance. Broadly, at 8-bits, we see no degradation in performance. From the data, we see that quantization aware training achieves higher rewards than post-training quantization and also sometimes outperforms the full precision baseline.
Effect of Environment on Quantization Quality: To analyze the task’s effect on quantization quality we plot the distribution of weights of full precision models trained in three environments (Breakout, Beamrider and Pong) and their error after applying 8-bit post-training quantization on them. Each model uses the same network architecture, is trained using the same algorithm (DQN) with the same hyperparameters (see Appendix).
Figure 3 shows that the task with the highest error (Breakout) has the widest weight distribution, the task with the second-highest error (BeamRider) has a narrower weight distribution, and the task with the lowest error (Pong) has the narrowest distribution. With an affine quantizer, quantizing a narrower distribution yields less error because the distribution can be captured at a fine granularity; conversely, a wider distribution requires larger gaps between representable numbers and thus increases quantization error. The trends indicate that the environment affects the spread of the model's weight distribution, which in turn affects quantization performance: specifically, environments that yield a wider distribution of model weights are more difficult to quantize. This observation suggests that regularizing the training process may yield better quantization performance.
Algorithm   Environment   fp32 Reward   Eint8     Efp16
DQN         Breakout      214           63.55%    -1.40%
PPO         Breakout      400           8.00%     0.00%
A2C         Breakout      379           7.65%     2.11%
Table 3: Rewards for DQN, PPO, and A2C.
[Figure: weight histograms (log-scale frequency vs. weight, roughly -2.0 to 1.0) for the DQN, PPO, and A2C policies, with min/max weights of (-2.21, 1.31), (-1.02, 0.58), and (-0.79, 0.72), respectively.]
Figure 4: Weight distributions for the policies trained using DQN, PPO and A2C. DQN policy weights are more spread out and more difficult to cover effectively by 8-bit quantization (yellow lines). This explains the higher quantization error for DQN in Table 3. A negative error indicates that the quantized model outperformed the full precision baseline.
Effect of Training Algorithm on Quantization Quality: To determine the effects of the reinforcement learning training algorithm on the performance of quantized models, we compare the performance of post-training quantized models trained by various algorithms. Table 3 shows the error of different reinforcement learning algorithms and their corresponding 8-bit post-training quantization error for the Atari Breakout game. Results indicate that the A2C training algorithm is most conducive to int8 post-training quantization, followed by PPO and DQN. Interestingly, we see a sharp performance drop compared to the corresponding full precision baseline when applying 8-bit post-training quantization to models trained by DQN. At 8 bits, models trained by PPO and A2C have relative errors of 8% and 7.65%, whereas the model trained by DQN has an error of ∼64%. To understand this phenomenon, we plot the distribution of model weights trained by each algorithm, shown in Figure 4. The plot shows that the weight distribution of the model trained by DQN is significantly wider than those trained by PPO and A2C. A wider distribution of weights leads to a higher quantization error, which explains the large error of the 8-bit quantized DQN model. This also explains why using more bits (fp16) is more effective for the model trained by DQN (reducing the error relative to the full precision baseline from ∼64% down to ∼-1.4%). These results indicate that different RL algorithms (on-policy vs. off-policy) have different objective functions and hence can produce very different weight distributions; a wider distribution has a more pronounced impact on the quantization error.
5 CASE STUDIES
To show the usefulness of our results, we use quantization to optimize the training and deployment of reinforcement learning policies. We 1) train a Pong model 1.5× faster by using mixed precision optimization and 2) deploy a quantized robot navigation model onto a resource-constrained embedded system (RasPi-3b), demonstrating a 4× reduction in memory and an 18× speedup in inference. Faster training means more experiments can be run in the same amount of time, and achieving speedups on resource-constrained devices enables deployment of the policies on real robots.
Mixed/Half-Precision Training: Motivated by the observation that reinforcement learning training is robust to quantization error, we train three policies of increasing model complexity (Policy A, Policy B, and Policy C) using mixed precision training and compare their performance to that of full precision training (see Appendix for details). In mixed precision training, the policy weights, activations, and gradients are represented in fp16. A master copy of the weights is stored in full precision (fp32) and updated during the backward pass (Micikevicius et al., 2017). We measure the runtime and convergence rate of both full precision and mixed precision training (see Appendix).
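A minimal numpy sketch of this master-weight scheme follows; the toy linear layer and squared loss are illustrative stand-ins (the actual experiments use TensorFlow's mixed precision support), but the structure is the same: fp16 forward/backward with a loss scale, fp32 master weights for the update.

import numpy as np

rng = np.random.default_rng(0)
master_w = rng.normal(size=(8, 4)).astype(np.float32)      # fp32 master copy of the weights
lr, loss_scale = 1e-3, 128.0

x = rng.normal(size=(32, 8)).astype(np.float16)
w16 = master_w.astype(np.float16)                          # fp16 working copy for this step
y = x @ w16                                                # fp16 forward pass
grad_y = (2 * y / y.size).astype(np.float16)               # gradient of mean(y**2) w.r.t. y
grad_w16 = x.T @ (grad_y * np.float16(loss_scale))         # scaled fp16 backward pass
master_w -= lr * grad_w16.astype(np.float32) / loss_scale  # unscale, update the fp32 master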
Algorithm   Network    fp32 Runtime (min)   MP Runtime (min)   Speedup
DQN-Pong    Policy A   127                  156                0.87×
DQN-Pong    Policy B   179                  172                1.04×
DQN-Pong    Policy C   391                  242                1.61×
Table 4: Mixed precision training for reinforcement learning.
[Figure 5: training reward vs. step (0 to 1M) for Policies A, B, and C, each comparing mixed precision with fp32-only training.]
Figure 5: Mixed precision vs. fp32 training rewards.
Figure 5 shows that all three policies converge under both full precision and mixed precision training. Interestingly, for Policy B, training with mixed precision yields faster convergence; we believe that some amount of quantization error speeds up the training process. Table 4 shows the computational speedup of the training loop from using mixed precision training. While using mixed precision training on smaller networks (Policy A) may slow down training iterations (as the overhead of fp32-to-fp16 conversions outweighs the speedup of low precision ops), larger networks (Policy C) show up to a 60% speedup. Generally, our results show that mixed precision may speed up the training process by up to 1.6× without harming convergence. Quantized Policy for Deployment: To show the benefits of quantization in deploying reinforcement learning policies, we train multiple point-to-point navigation models (Policy I, II, and III) for aerial robots using Air Learning (Krishnan et al., 2019) and deploy them onto a RasPi-3b, a cost-effective, general-purpose embedded processor. The RasPi-3b is used as a proxy for the compute platform on the aerial robot; other platforms on aerial robots have similar characteristics. For each of these policies, we report the accuracies and inference speedups attained by the int8 and fp32 policies.
Table 5 shows the accuracies and inference speedups attained for each corresponding quantized policy. We see that quantizing smaller policies (Policy I) yields moderate inference speedups (1.18× for Policy I), while quantizing larger models (Policies II, III) can speed up inference by up to 18×. This speedup in Policy III's execution time increases the rate at which hardware actuation commands are generated from 5 Hz (fp32) to 90 Hz (int8). Note that in this experiment we quantize both weights and activations to 8-bit integers; the quantized models exhibit a larger loss in accuracy because activations are more difficult to quantize without some form of calibration to determine the range of activation values (Choi et al., 2018).
A deeper investigation shows that Policies II and III take more memory than the total RAM capacity of the RasPi-3b, causing numerous accesses to swap memory (refer to Appendix) during inference, which is extremely slow. Quantizing these policies allows them to fit into the RasPi's RAM, eliminating accesses to swap and boosting performance by an order of magnitude. Figure 5 shows the memory usage while executing the quantized and unquantized versions of Policy III, and shows how, without quantization, memory usage rises above the total RAM capacity of the board.
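As a back-of-the-envelope illustration of why the larger policies spill into swap, the snippet below compares the weight memory of a policy stored in fp32 versus int8 against the roughly 1 GB of RAM on a RasPi-3b; the parameter counts are placeholders, not the exact sizes of Policies I-III.

raspi_ram_mb = 1024  # approximate RAM on a RasPi-3b

def weight_memory_mb(num_params, bytes_per_weight):
    return num_params * bytes_per_weight / 1e6

for name, num_params in [("small policy", 5e6), ("large policy", 300e6)]:  # placeholder sizes
    fp32_mb = weight_memory_mb(num_params, 4)
    int8_mb = weight_memory_mb(num_params, 1)
    print(f"{name}: fp32 {fp32_mb:.0f} MB (fits in RAM: {fp32_mb < raspi_ram_mb}), "
          f"int8 {int8_mb:.0f} MB (fits in RAM: {int8_mb < raspi_ram_mb})")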
In the context of real-world deployment of an aerial (or any other type of) robot, a speedup in policy execution potentially translates to faster actuation commands to the aerial robot, which in turn implies faster and better responsiveness in a highly dynamic environment (Falanga et al., 2019). Our case study demonstrates how quantization can facilitate the deployment of accurate policies trained using reinforcement learning onto a resource-constrained platform.
6 CONCLUSION
We perform the first study of quantization effects on deep reinforcement learning using QuaRL, a software framework to benchmark and analyze the effects of quantization on various reinforcement learning tasks and algorithms. We analyze the performance, in terms of rewards, of post-training quantization and quantization aware training as applied to multiple reinforcement learning tasks and algorithms, with the high-level goal of reducing policies' resource requirements for efficient training and deployment. We broadly demonstrate that reinforcement learning models may be quantized down to 8/16 bits without loss of performance. We also link quantization performance to the distribution of the models' weights, demonstrating that some reinforcement learning algorithms and tasks are more difficult to quantize because they widen the models' weight distribution. Additionally, we show that quantization during training acts as a regularizer that improves exploration. Finally, we apply our results to optimize the training and inference of reinforcement learning models, demonstrating a 50% training speedup for Pong using mixed precision optimization and up to an 18× inference speedup on a RasPi by quantizing a navigation policy. In summary, our findings indicate that there is much potential for the future of quantization of deep reinforcement learning policies.
A POST TRAINING QUANTIZATION RESULTS
Here we break the post-training quantization results listed in Table 2 into four separate tables for clarity. Each table corresponds to the post-training quantization results for a specific algorithm. Table 5 tabulates the post-training quantization results for the A2C algorithm. Likewise, Table 6 tabulates the post-training quantization results for DQN. Table 7 and Table 8 list the post-training quantization results for the PPO and DDPG algorithms, respectively.
B DQN HYPERPARAMETERS FOR ATARI
For all Atari games in the results section we use a standard 3 Layer Conv (128) + 128 FC. Hyperparameters are listed in Table 9. We use stable-baselines (Hill et al., 2018) for all the reinforcement learning experiments. We use Tensorflow version 1.14 as the machine learning backend.
C MIXED PRECISION HYPERPARAMETERS
In mixed precision training, we use three policies, namely Policy A, Policy B, and Policy C. The architectures of these policies are tabulated in Table 10. To measure the runtimes for fp32 and fp16 training, we use the time Linux command for each run and add the usr and sys times, for both mixed-precision training and fp32 training. The hyperparameters used for training the DQN-Pong agent are listed in Table 9.
D QUANTIZED POLICY DEPLOYMENT
Here we describe the methodology used to train a point-to-point navigation policy in Air Learning and deploy it on an embedded compute platform such as a RasPi-3b. Air Learning is an AI research platform that provides infrastructure components and tools to train fully functional reinforcement learning policies for aerial robots. In simple environments like OpenAI Gym and Atari, training and inference happen in the same environment without any randomization. In contrast to these environments, Air Learning allows us to randomize various environmental parameters such as arena size, number of obstacles, and goal position.
In this study, we fix the arena size to 25 m × 25 m × 20 m. The maximum number of obstacles at any time is between one and five, chosen randomly on an episode-to-episode basis. The positions of these obstacles and of the end point (goal) are also changed every episode. We train the aerial robot to reach the end point using the DQN algorithm. The input to the policy is the readings of a sensor mounted on the drone along with IMU measurements. The output of the policy is one of 25 actions with different velocities and yaw rates. The reward function we use in this study is defined as follows:
r = 1000 ∗ α − 100 ∗ β − Dg − Dc ∗ δ − 1    (1)
Here, α is a binary variable whose value is '1' if the agent reaches the goal; otherwise, its value is '0'. β is a binary variable which is set to '1' if the aerial robot collides with any obstacle or runs out of the maximum allocated steps for an episode.1 Otherwise, β is '0', effectively penalizing the agent for hitting an obstacle or not reaching the end point in time. Dg is the distance to the end point from the agent's current location, motivating the agent to move closer to the goal. Dc is the distance correction, which is applied to penalize the agent if it chooses actions that speed the agent away from the goal. The distance correction term is defined as follows:
Dc = (Vmax − Vnow) ∗ tmax    (2)
where Vmax is the maximum velocity possible for the agent, which for DQN is fixed at 2.5 m/s, Vnow is the current velocity of the agent, and tmax is the duration of the actuation.
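Equations 1 and 2 translate directly into a small reward function; the sketch below reflects our reading of the equations, with δ and tmax kept as parameters since their values are not fixed above.

def distance_correction(v_now, v_max=2.5, t_max=1.0):
    # Equation 2: the penalty grows as the agent's velocity falls below the maximum.
    return (v_max - v_now) * t_max

def reward(reached_goal, crashed_or_timed_out, dist_to_goal, v_now, delta=1.0, t_max=1.0):
    alpha = 1.0 if reached_goal else 0.0
    beta = 1.0 if crashed_or_timed_out else 0.0
    # Equation 1: goal bonus, collision/timeout penalty, distance and distance-correction terms.
    return (1000.0 * alpha - 100.0 * beta - dist_to_goal
            - distance_correction(v_now, t_max=t_max) * delta - 1.0)

print(reward(reached_goal=False, crashed_or_timed_out=False, dist_to_goal=12.0, v_now=1.5))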
We train three policies, namely Policy I, Policy II, and Policy III. Each policy is learned through curriculum learning, where we move the end goal farther away as the training progresses. We terminate the training once the agent has finished 1 million steps. We evaluate all three policies in fp32 and quantized int8 data types over 100 evaluations in Air Learning and report the success rate.
1We set the maximum allowed steps in an episode to 750. This is to make sure the agent finds the end point (goal) within a finite number of steps.
We also take these policies and characterize their system performance on a RasPi-3b platform. The RasPi-3b is a proxy for the compute platform available on the aerial robot. The hardware specification of the RasPi-3b is shown in Table 11.
We allocate a region of storage space as swap memory, i.e., a region of memory allocated on disk that is used when the system memory is fully utilized by a process. On the RasPi-3b, the swap memory is allocated in flash storage.
E POST-TRAINING QUANTIZATION SWEET SPOT
Figure 7 shows that there is a sweet spot for post-training quantization. Sometimes, quantizing to fewer bits outperforms higher-precision quantization. Each plot was generated by applying post-training quantization to the full precision baselines and evaluating over 10 runs. | 1. What are the main contributions and findings of the paper regarding the impact of quantization on reinforcement learning tasks?
2. What are the strengths and weaknesses of the experimental evaluations conducted in the paper?
3. How does the reviewer assess the trade-off between accuracy and efficiency presented in the work?
4. Are there any open questions or areas for further research related to the application of quantization in reinforcement learning?
5. What are the minor suggestions or typos pointed out by the reviewer? | Review | Review
This paper studies the effect of quantization on training reinforcement learning tasks. Specifically, the paper applies post-training quantization and quantization aware learning to various tasks and record the effects on accuracy and training speed.
Overall, the empirical evaluations suggest that quantization does not significantly hurt the performance of RL training among a wide range of tasks. On several tasks, the authors showed that quantization can significantly reduce memory usage and speed up the inference time. On the other hand, the improved efficiency comes at the cost of accuracy or lower rewards (2% - 5% error as shown in section 4) and (> 5% in terms of success rate as shown in Figure 5).
While it is expected that quantization should decrease the accuracy of the trained model, it is not entirely clear how one should evaluate the trade-off presented in the work. Some natural questions that I believe deserve more discussions are:
-- Are the kinds of accuracy cost the best one could hope for using these methods?
-- Is there still room for improvement in terms of reducing the cost of accuracy?
Detailed comments:
-- In the definition of Q_n(W): isn't $\delta$ equal to |W| / 2^n?
-- In Figure 5: your results show that the "int8" method has a significantly lower success rate than "fp32". Could you provide some discussion as to why this is the case?
-- Typos: Page 4, "is a applied"; Page 5, "full connected weights"; Page 8, "of a accurate". |
ICLR | Title
Fair Attribute Completion on Graph with Missing Attributes
Abstract
Tackling unfairness in graph learning models is a challenging task, as the unfairness issues on graphs involve both attributes and topological structures. Existing work on fair graph learning simply assumes that attributes of all nodes are available for model training and then makes fair predictions. In practice, however, the attributes of some nodes might not be accessible due to missing data or privacy concerns, which makes fair graph learning even more challenging. In this paper, we propose FairAC, a fair attribute completion method, to complement missing information and learn fair node embeddings for graphs with missing attributes. FairAC adopts an attention mechanism to deal with the attribute missing problem and meanwhile, it mitigates two types of unfairness, i.e., feature unfairness from attributes and topological unfairness due to attribute completion. FairAC can work on various types of homogeneous graphs and generate fair embeddings for them and thus can be applied to most downstream tasks to improve their fairness performance. To our best knowledge, FairAC is the first method that jointly addresses the graph attribution completion and graph unfairness problems. Experimental results on benchmark datasets show that our method achieves better fairness performance with less sacrifice in accuracy, compared with the state-of-the-art methods of fair graph learning. Code is available at: https://github.com/donglgcn/FairAC.
1 INTRODUCTION
Graphs, such as social networks, biomedical networks, and traffic networks, are commonly observed in many real-world applications. Many graph-based machine learning methods have been proposed in the past decades, and they have shown promising performance in tasks like node similarity measurement, node classification, graph regression, and community detection. In recent years, graph neural networks (GNNs) have been actively studied (Scarselli et al., 2008; Wu et al., 2020; Jiang et al., 2019; 2020; Zhu et al., 2021c;b;a; Hua et al., 2020; Chu et al., 2021), which can model graphs with high-dimensional attributes in the non-Euclidean space and have achieved great success in many areas such as recommender systems (Sheu et al., 2021). However, it has been observed that many graphs are biased, and thus GNNs trained on the biased graphs may be unfair with respect to certain sensitive attributes such as demographic groups. For example, in a social network, if users with the same gender have more active connections, a GNN tends to pay more attention to such gender information and leads to gender bias by recommending more friends with the same gender identity to a user while ignoring other attributes like interests. From the data privacy perspective, it is also possible to infer one's sensitive information from the results given by GNNs (Sun et al., 2018). At a time when GNNs are widely deployed in the real world, such severe unfairness is unacceptable. Thus, fairness in graph learning has recently emerged as a notable research topic.
Existing work on fair graph learning mainly focuses on the pre-processing, in-processing, and post-processing steps in the graph learning pipeline in order to mitigate unfairness issues. The pre-processing approaches modify the original data to conceal sensitive attributes. Fairwalk (Rahman et al., 2019) is a representative pre-processing method, which gives each group of neighboring nodes an equal chance of being chosen in the sampling process. In many in-processing methods, the most popular approach is to add a sensitive discriminator as a constraint, in order to filter out sensitive information from the original data. For example, FairGNN (Dai & Wang, 2021) adopts a sensitive
classifier to filter node embeddings. CFC (Bose & Hamilton, 2019) directly adds a filter layer to deal with unfairness issues. The post-processing methods directly force the final prediction to satisfy fairness constraints, such as (Hardt et al., 2016).
When graphs have complete node attributes, existing fair graph learning methods can obtain promising performance on both fairness and accuracy. However, in practice, graphs may contain nodes whose attributes are entirely missing due to various reasons (e.g., newly added nodes or data privacy concerns). Taking social networks as an example, a newly registered user may have an incomplete profile. Given such incomplete graphs, existing fair graph learning methods would fail, as they assume all the nodes have attributes for model training. Although FairGNN (Dai & Wang, 2021) also involves the missing attribute problem, it only assumes that a part of the sensitive attributes is missing. To the best of our knowledge, addressing the unfairness issue on graphs where the attributes of some nodes are entirely missing has not been investigated before. Another relevant topic is graph attribute completion (Jin et al., 2021; Chen et al., 2020), which mainly focuses on completing the graph accurately but ignores unfairness issues. In this work, we aim to jointly complete a graph with missing attributes and mitigate unfairness at both the feature and topology levels.
In this paper, we study the new problem of learning fair embeddings for graphs with missing attributes. Specifically, we aim to address two major challenges: (1) how to obtain meaningful node embeddings for graphs with missing attributes, and (2) how to enhance fairness of node embeddings with respect to sensitive attributes. To address these two challenges, we propose a Fair Attribute Completion (FairAC) framework. For the first challenge, we adopt an autoencoder to obtain feature embeddings for nodes with attributes and meanwhile we adopt an attention mechanism to aggregate feature information of nodes with missing attributes from their direct neighbors. Then, we address the second challenge by mitigating two types of unfairness, i.e., feature unfairness and topological unfairness. We adopt a sensitive discriminator to regulate embeddings and create a bias-free graph.
The main contributions of this paper are as follows: (1) We present a new problem of achieving fairness on a graph with missing attributes. Different from the existing work, we assume that the attributes of some nodes are entirely missing. (2) We propose a new framework, FairAC, for fair graph attribute completion, which jointly addresses unfairness issues from the feature and topology perspectives. (3) FairAC is a generic approach to complete fair graph attributes, and thus can be used in many graph-based downstream tasks. (4) Extensive experiments on benchmark datasets demonstrate the effectiveness of FairAC in eliminating unfairness and maintaining comparable accuracy.
2 RELATED WORK
2.1 FAIRNESS IN GRAPH LEARNING
Recent work promotes fairness in graph-based machine learning (Bose & Hamilton, 2019; Rahman et al., 2019; Dai & Wang, 2021; Wang et al., 2022). They can be roughly divided into three categories, i.e., the pre-processing methods, in-processing methods, and post-processing methods.
The pre-processing methods are applied before training downstream tasks by modifying training data. For instance, Fairwalk (Rahman et al., 2019) improves the sampling procedure of node2vec (Grover & Leskovec, 2016). Our FairAC framework can be viewed as a pre-processing method, as it seeks to complete node attributes and use them as input of graph neural networks. However, our problem is much harder than existing problems, because the attributes of some nodes in the graph are entirely missing, including both the sensitive ones and non-sensitive ones. Given an input graph with missing attributes, FairAC generates fair and complete feature embeddings and thus can be applied to many downstream tasks, such as node classification, link prediction (LibenNowell & Kleinberg, 2007; Taskar et al., 2003), PageRank (Haveliwala, 2003), etc. Graph learning models trained on the refined feature embeddings would make fair predictions in downstream tasks.
There are plenty of fair graph learning methods that serve as in-processing solutions. Some works focus on dealing with unfairness issues on graphs with complete features. For example, GEAR (Ma et al., 2022) mitigates graph unfairness by counterfactual graph augmentation and an adversarial learning method that learns sensitive-invariant embeddings. However, in order to generate counterfactual subgraphs, it needs precise and complete features for every node. In other words, it cannot work well on a graph containing nodes whose attributes are entirely missing, since it cannot generate a counterfactual subgraph based
on a blank node, whereas our method can handle this situation. The most related work is FairGNN (Dai & Wang, 2021). Different from the majority of problem settings on graph fairness, it learns fair GNNs for node classification in a graph where only a limited number of nodes are provided with sensitive attributes. FairGNN adopts a sensitive classifier to predict the missing sensitive labels. After that, it employs a classic adversarial model to mitigate unfairness. Specifically, a sensitive discriminator aims to predict the known or estimated sensitive attributes, while a GNN model tries to fool the sensitive discriminator and meanwhile predicts node labels. However, FairGNN cannot predict sensitive information if a node is missing all of its features in the first place, and thus it fails to achieve its final goal. Our FairAC avoids this problem because we recover the node embeddings of attribute-missing nodes from their neighbors: FairAC learns attention weights between neighbors from nodes with full attributes, so the embedding of a missing node can be recovered by aggregating the embeddings of its neighbors. With the help of the adversarial learning method, it can also remove sensitive information. In addition to attribute completion, we have also designed novel debiasing strategies to mitigate feature unfairness and topological unfairness.
2.2 ATTRIBUTION COMPLETION ON GRAPHS
The problem of missing attributes is ubiquitous in reality. Several methods (Liao et al., 2016; You et al., 2020; Chen et al., 2020; He et al., 2022; Jin et al., 2021; 2022; Tu et al., 2022; Taguchi et al., 2021) have been proposed to address this problem. GRAPE (You et al., 2020) tackles the problem of missing attributes in tabular data using a graph-based approach. SAT (Chen et al., 2020) assumes that the topology representation and attributes share a common latent space, and thus the missing attributes can be recovered by aligning the paired latent space. He et al. (2022) and Jin et al. (2021) extend such problem settings to heterogeneous graphs. HGNN-AC (Jin et al., 2021) is an end-to-end model, which does not recover the original attributes but generates attribute representations that have sufficient information for the final prediction task. It is worth noting that existing methods on graph attribute completion only focus on the attribute completion accuracy or performance of downstream tasks, but none of them takes fairness into consideration. Instead, our work pays attention to the unfairness issue in graph learning, and we aim to generate fair feature embeddings for each node by attribute completion, which contain the majority of information inherited from original attributes but disentangle the sensitive information.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
Let G = (V, E ,X ) denote an undirected graph, where V = {v1, v2, ..., vN} is the set of N nodes, E ⊆ V × V is the set of undirected edges in the graph, X ∈ RN×D is the node attribute matrix, and D is the dimension of attributes. A ∈ RN×N is the adjacency matrix of the graph G, where Aij = 1 if nodes vi and vj are connected; otherwise, Aij = 0. In addition, S = {s1, s2, ..., sN}
Algorithm 1 FairAC framework algorithm Input: G = (V, E ,X ), S Output: Autoencoder fAE , Sensitive classifier Cs, Attribute completion fAC
1: Obtain topological embedding T with DeepWalk 2: repeat 3: Obtain the feature embeddings H with fAE 4: Optimize the Cs by Equation 6 5: Optimize fAE to mitigate feature unfairness by loss LF 6: Divide V+ into Vkeep and Vdrop based on α 7: Obtain the feature embeddings of nodes with missing attributes Vdrop by fAC 8: Optimize fAC to achieve attribute completion by loss LC 9: Optimize fAC to mitigate topological unfairness by loss LT
10: until convergence 11: return fAE , Cs, fAC
denotes a set of sensitive attributes (e.g., age or gender) of the N nodes, and Y = {y1, y2, ..., yN} denotes the node labels. The goal of fair graph learning is to make fair predictions of node labels with respect to the sensitive attribute, which is usually measured by certain fairness notions such as statistical parity (Dwork et al., 2012) and equal opportunity (Hardt et al., 2016). Statistical parity and equal opportunity are two group fairness definitions; their detailed formulations are presented below. The label y denotes the ground-truth node label, and the sensitive attribute s indicates one's sensitive group. For example, for a binary node classification task, y only has two labels. Here we consider two sensitive groups, i.e., s ∈ {0, 1}.
• Statistical Parity (Dwork et al., 2012). It refers to the equal acceptance rate, which can be formulated as:
P (ŷ|s = 0) = P (ŷ|s = 1), (1) where P (·) denotes the probability that · occurs.
• Equal Opportunity (Hardt et al., 2016). It means the probability of a node in a positive class being classified as a positive outcome should be equal for both sensitive group nodes. It mathematically requires an equal true positive rate for each subgroup.
P (ŷ = 1|y = 1, s = 0) = P (ŷ = 1|y = 1, s = 1). (2)
In this work, we mainly focus on addressing unfairness issues on graphs with missing attributes, i.e., graphs in which the attributes of some nodes are entirely missing. Let V+ denote the set of nodes whose attributes are available, and V− denote the set of nodes whose attributes are missing, V = {V+,V−}. If vi ∈ V−, both Xi and si are unavailable during model training. With the notation given above, the fair attribute completion problem is formally defined as:
Problem 1. Given a graph G = (V, E, X), where the attributes of the node set V+ ⊆ V are available along with the corresponding sensitive attributes in S, learn a fair attribute completion model to generate fair feature embeddings H for each node in V, i.e.,
f(G, S) → H, (3)
where f is the function we aim to learn. H should exclude any sensitive information while preserving non-sensitive information.
3.2 FAIR ATTRIBUTE COMPLETION (FAIRAC) FRAMEWORK
We propose a fair attribute completion (FairAC) framework to address Problem 1. Existing fair graph learning methods tackle unfairness issues by training fair graph neural networks in an endto-end fashion, but they cannot effectively handle graphs that are severely biased due to missing attributes. Our FairAC framework, as a data-centric approach, deals with the unfairness issue from a new perspective, by explicitly debiasing the graph with feature unfairness mitigation and fairnessaware attribute completion. Eventually, FairAC generates fair embeddings for all nodes including the ones without any attributes. The training algorithms are shown in Algorithm 1.
To train the graph attribute completion model, we follow the setting in (Jin et al., 2021) and divide the nodes with attributes (i.e., V+) into two sets: Vkeep and Vdrop. For nodes in Vkeep, we keep their attributes, while for nodes in Vdrop, we temporarily drop their attributes and try to recover them using our attribute completion model. Although the nodes are randomly assigned to Vkeep and Vdrop, the proportion of Vdrop is consistent with the attribute missing rate α of graph G, i.e., α = |V−| / |V| = |Vdrop| / |V+|.
Different from existing work on fair graph learning, we consider unfairness from two sources. The first one is from node features. For example, we can roughly infer one’s sensitive information, like gender, from some non-sensitive attributes like hobbies. It means that non-sensitive attributes may imply sensitive attributes and thus lead to unfairness in model prediction. We adopt a sensitive discriminator to mitigate feature unfairness. The other source is topological unfairness introduced by graph topological embeddings and node attribute completion. To deal with the topological unfairness, we force the estimated feature embeddings to fool the sensitive discriminator, by updating attention parameters during the attribute completion process.
As illustrated in Figure 1, our FairAC framework first mitigates feature unfairness for nodes with attributes (i.e., Vkeep) by removing sensitive information implicitly contained in non-sensitive attributes with an auto-encoder and sensitive classifier (Section 3.2.1). For nodes without features (i.e., Vdrop), FairAC performs attribute completion with an attention mechanism (Section 3.2.2) and meanwhile mitigates the topological unfairness (Section 3.2.3). Finally, the FairAC model trained on Vkeep and Vdrop can be used to infer fair embeddings for nodes in V−. The overall loss function of FairAC is formulated as:
L = LF + LC + βLT , (4)
where LF represents the loss for mitigating feature unfairness, LC is the loss for attribute completion, and LT is the loss for mitigating topological unfairness. β is a trade-off hyperparameter.
3.2.1 MITIGATING FEATURE UNFAIRNESS
The nodes in Vkeep have full attributes X, while some attributes may implicitly encode information about sensitive attributes S and thus lead to unfair predictions. To address this issue, FairAC aims to encode the attributes Xi of node i into a fair feature embedding Hi. Specifically, we use a simple autoencoder framework together with a sensitive classifier. The autoencoder maps Xi into the embedding Hi, and meanwhile the sensitive classifier Cs is trained in an adversarial way, such that the embeddings are invariant to sensitive attributes.
Autoencoder. The autoencoder contains an encoder fE and a decoder fD. fE encodes the original attributes Xi into feature embeddings Hi, i.e., Hi = fE(Xi), and fD reconstructs the attributes from the latent embeddings, i.e., X̂i = fD(Hi), where the reconstructed attributes X̂i should be as close to Xi as possible. The loss function of the autoencoder is written as:
Lae = (1/|Vkeep|) Σi∈Vkeep √((X̂i − Xi)²). (5)
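For illustration, a minimal PyTorch sketch of the encoder-decoder pair and the reconstruction loss of Eq. 5 is given below; the layer sizes and two-layer architecture are illustrative choices rather than the exact configuration used in our experiments.

import torch
from torch import nn

class AttributeAutoencoder(nn.Module):
    """Toy fE/fD pair: attributes -> embedding H -> reconstructed attributes."""
    def __init__(self, attr_dim=64, emb_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(attr_dim, emb_dim), nn.ReLU(),
                                     nn.Linear(emb_dim, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                                     nn.Linear(emb_dim, attr_dim))

    def forward(self, X):
        H = self.encoder(X)
        return H, self.decoder(H)

X = torch.randn(32, 64)                  # attributes of a batch of nodes in V_keep (toy data)
H, X_hat = AttributeAutoencoder()(X)
L_ae = (X_hat - X).pow(2).sqrt().mean()  # reconstruction loss in the spirit of Eq. 5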
Sensitive classifier. The sensitive classifier Cs is a simple multilayer perceptron (MLP) model. It takes the feature embedding Hi as input and predicts the sensitive attribute ŝi, i.e., ŝi = Cs(Hi). When the sensitive attributes are binary, we can use the binary cross-entropy loss to optimize Cs:
LCs = −(1/|Vkeep|) Σi∈Vkeep [si log ŝi + (1 − si) log(1 − ŝi)]. (6)
With the sensitive classifier Cs, we can leverage it to adversarially train the autoencoder, such that fE is able to generate fair feature embeddings that fool Cs. The loss LF is written as: LF = Lae − βLCs.
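The adversarial game between fE and Cs can be sketched as follows (a self-contained toy example: the MLP sizes, the use of logits with binary cross-entropy, and the value of β are illustrative). The key point is that Cs is fitted on detached embeddings, while the autoencoder receives the negated classifier loss so that it learns to fool Cs.

import torch
from torch import nn

Cs = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))  # sensitive classifier
bce = nn.BCEWithLogitsLoss()
beta = 0.5                                       # adversarial weight (illustrative value)

H = torch.randn(32, 128, requires_grad=True)     # embeddings produced by fE (toy stand-in)
X = torch.randn(32, 64)
X_hat = torch.randn(32, 64, requires_grad=True)  # reconstructions from fD (toy stand-in)
s = torch.randint(0, 2, (32, 1)).float()         # sensitive attributes of V_keep

L_Cs = bce(Cs(H.detach()), s)                    # step 4 of Algorithm 1: fit Cs (Eq. 6)
L_ae = (X_hat - X).pow(2).sqrt().mean()          # Eq. 5
L_F = L_ae - beta * bce(Cs(H), s)                # step 5: reconstruct while fooling Cs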
3.2.2 COMPLETING NODE EMBEDDINGS VIA ATTENTION MECHANISM
For nodes without attributes (Vdrop), FairAC makes use of topological embeddings and completes the node embeddings Hdrop with an attention mechanism.
Topological embeddings. Recent studies reveal that the topology of graphs has similar semantic information as the attributes (Chen et al., 2020; McPherson et al., 2001; Pei et al., 2020; Zhu et al., 2020). Inspired by this observation, we assume that the nodes’ topological information can reflect the relationship between nodes’ attributes and the attributes of their neighbors. There are a lot of off-the-shelf node topological embedding methods, such as DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016). For simplicity, we adopt the DeepWalk method to extract topological embeddings for nodes in V .
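For concreteness, topological embeddings can be obtained with the Karate Club DeepWalk implementation, using the settings reported in our implementation details (walk length 100, 64 dimensions, window size 5, 10 epochs); the toy graph below is only a placeholder for the actual dataset graph.

import networkx as nx
from karateclub import DeepWalk

G = nx.karate_club_graph()  # placeholder graph; nodes must be indexed 0..N-1
model = DeepWalk(walk_length=100, dimensions=64, window_size=5, epochs=10)
model.fit(G)
T = model.get_embedding()   # T[i] is the topological embedding of node i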
Attention mechanism. For graphs with missing attributes, a commonly used strategy is to use the average attributes of the one-hop neighbors. This strategy works in some cases; however, simply averaging information from neighbors might be biased, as the results might be dominated by some high-degree nodes. In fact, different neighbors should have varying contributions to the aggregation process in the context of fairness. To this end, FairAC adopts an attention mechanism (Vaswani et al., 2017) to learn the influence of different neighbors or edges with the awareness of fairness, and then aggregates attribute information for nodes in Vdrop. Given a pair of neighboring nodes (u, v), the contribution of node v is the attention attu,v, which is defined as: attu,v = Attention(Tu, Tv), where Tu and Tv are the topological embeddings of nodes u and v, respectively. Specifically, we only focus on neighbor pairs and ignore node pairs that are not directly connected. Attention(·, ·) denotes the attention between two topological embeddings, i.e., Attention(Tu, Tv) = σ(Tu⊤ W Tv), where W is a learnable parameter matrix and σ is an activation function. After we obtain all the attention scores between a node and its neighbors, we compute the coefficient of each pair by applying the softmax function:
cu,v = softmax(attu,v) = exp(attu,v) / Σs∈Nu exp(attu,s), (7)
where cu,v is the coefficient of node pair (u, v), and Nu is the set of neighbors of node u. For node u, FairAC calculates its feature embedding Ĥu by the weighted aggregation with multi-head attention:
Ĥu = (1/K) Σk=1..K Σs∈Nu cu,s Hs, (8)
where K is the number of attention heads. The loss for attribute completion with topological embedding and attention mechanism is formulated as:
LC = (1/|Vdrop|) Σi∈Vdrop √((Ĥi − Hi)²). (9)
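A single-head version of the aggregation in Eqs. 7-9 can be written as a short function; tanh stands in for the unspecified activation σ, and all dimensions are toy values.

import torch

def complete_embedding(T_u, T_neighbors, H_neighbors, W):
    """Aggregate neighbor embeddings with topology-based attention (Eqs. 7-8, one head)."""
    att = torch.tanh(T_neighbors @ W @ T_u)           # attention score per neighbor
    c = torch.softmax(att, dim=0)                      # coefficients c_{u,v} (Eq. 7)
    return (c.unsqueeze(1) * H_neighbors).sum(dim=0)   # completed embedding for node u (Eq. 8)

d_topo, d_emb, n_nbr = 64, 128, 5
W = torch.randn(d_topo, d_topo, requires_grad=True)   # learnable attention parameters
T_u, T_nbr = torch.randn(d_topo), torch.randn(n_nbr, d_topo)
H_nbr = torch.randn(n_nbr, d_emb)                      # feature embeddings of u's neighbors
H_u = torch.randn(d_emb)                               # encoder's embedding of u (training target)
H_hat_u = complete_embedding(T_u, T_nbr, H_nbr, W)
L_C = (H_hat_u - H_u).pow(2).sqrt().mean()             # completion loss in the spirit of Eq. 9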
3.2.3 MITIGATING TOPOLOGICAL UNFAIRNESS
The attribute completion procedure may introduce topological unfairness, since we assume that the topological structure reflects the relationships among attributes. It is possible that the completed feature embeddings of Vdrop would be unfair with respect to the sensitive attributes S. To address this issue, FairAC leverages the sensitive classifier Cs to help mitigate topological unfairness by further updating the attention parameter matrix W and thus obtaining fair feature embeddings H. Inspired by (Gong et al., 2020), we expect the feature embeddings to fool the sensitive classifier Cs into predicting a probability distribution close to the uniform distribution over the sensitive categories, by minimizing the loss:
LT = −(1/|Vdrop|) Σi∈Vdrop [si log ŝi + (1 − si) log(1 − ŝi)]. (10)
3.3 FAIRAC FOR NODE CLASSIFICATION
The proposed FairAC framework could be viewed as a generic data debiasing approach, which achieves fairness-aware attribute completion and node embedding for graphs with missing attributes. It can be easily integrated with many existing graph neural networks (e.g., GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017)) for tasks like node classification. In this work, we choose the basic GCN model for node classification and assess how FairAC enhances model performance in terms of accuracy and fairness.
4 EXPERIMENTS
In this section, we evaluate the performance of the proposed FairAC framework on three benchmark datasets in terms of node classification accuracy and fairness w.r.t. sensitive attributes. We compare FairAC with other baseline methods in settings with various sensitive attributes or different attribute missing rates. Ablation studies are also provided and discussed.
4.1 DATASETS AND SETTINGS
Datasets. In the experiments, we use three public graph datasets, NBA, Pokec-z, and Pokec-n. A detailed description is shown in supplementary materials.
Baselines. We compare our FairAC method with the following baseline methods: GCN (Kipf & Welling, 2016), ALFR (Edwards & Storkey, 2015), ALFR-e, Debias (Zhang et al., 2018), Debias-e, FCGE (Bose & Hamilton, 2019), and FairGNN (Dai & Wang, 2021). ALFR-e concatenates the feature embeddings produced by ALFR with topological embeddings learned by DeepWalk (Perozzi et al., 2014). Debias-e also concatenates the topological embeddings learned by DeepWalk with feature embeddings learned by Debias. FairGNN is an end-to-end debiasing method that aims to mitigate unfairness in the label prediction task. GCN and FairGNN use the average attribute completion method, while the other baselines use the original complete attributes.
Evaluation Metrics. We evaluate the proposed framework with respect to two aspects: classification performance and fairness. For classification, we use accuracy and AUC scores. As for fairness, we adopt ∆SP and ∆EO as evaluation metrics, which are defined as:
∆SP = P (ŷ|s = 0)− P (ŷ|s = 1), (11)
∆EO = P (ŷ = 1|y = 1, s = 0) − P (ŷ = 1|y = 1, s = 1). (12) The smaller ∆SP and ∆EO are, the fairer the model is. In addition, we use ∆SP + ∆EO as an overall indicator of a model's performance on fairness.
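Both metrics can be computed directly from predictions; the helper below is a straightforward reading of Eqs. 11-12 for binary labels and groups, with absolute values taken as is standard when the gaps are reported.

import numpy as np

def fairness_metrics(y_hat, y, s):
    """Statistical parity gap (Eq. 11) and equal opportunity gap (Eq. 12)."""
    y_hat, y, s = map(np.asarray, (y_hat, y, s))
    d_sp = abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())
    d_eo = abs(y_hat[(y == 1) & (s == 0)].mean() - y_hat[(y == 1) & (s == 1)].mean())
    return d_sp, d_eo

rng = np.random.default_rng(0)
s, y = rng.integers(0, 2, 1000), rng.integers(0, 2, 1000)
y_hat = rng.integers(0, 2, 1000)   # random predictions: both gaps should be near zero
print(fairness_metrics(y_hat, y, s))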
4.2 RESULTS AND ANALYSIS
4.2.1 UNFAIRNESS ISSUES IN GRAPH NEURAL NETWORKS
The results shown in Table 1 reveal several unfairness issues in graph neural networks, which we divide into two categories.
• Feature unfairness. Feature unfairness arises when some non-sensitive attributes can reveal sensitive information. Hence, some graph neural networks may learn this relation and make unfair predictions. In most cases, ALFR, Debias, and FCGE have better fairness performance than the GCN method. This is as expected, because the non-sensitive features may contain proxy variables of sensitive attributes, which would lead to biased predictions. Thus, ALFR and Debias, which try to break these connections, are able to mitigate feature unfairness and obtain better fairness performance. These results further confirm the existence of feature unfairness.
• Topological unfairness. Topological unfairness stems from the graph structure. In other words, the edges in a graph, i.e., misrepresentation due to connections (Mehrabi et al., 2021), can introduce topological unfairness. In the experiments, ALFR-e and Debias-e have worse fairness performance than ALFR and Debias, respectively. This shows that although the graph structure can improve classification performance, it consequently introduces topological unfairness. The worse fairness performance verifies that topological unfairness exists in GNNs and that graph topological information can magnify discrimination.
4.2.2 EFFECTIVENESS OF FAIRAC ON MITIGATING FEATURE AND TOPOLOGICAL UNFAIRNESS
The results of our FairAC method and baselines in terms of node classification accuracy and fairness metrics on three datasets are shown in Table 1. The best results are shown in bold. Generally speaking, we have the following observations. (1) The proposed method FairAC shows comparable classification performance to the baselines GCN and FairGNN. This suggests that our attribute completion method is able to preserve useful information contained in the original attributes. (2) FairAC outperforms all baselines regarding fairness metrics, especially in ∆SP+∆EO. FairAC outperforms baselines that focus on mitigating feature unfairness, like ALFR, which shows that FairAC also mitigates topological unfairness. Besides, it is better than methods that take topological fairness into consideration, like FCGE, which also validates the effectiveness of FairAC. FairGNN also performs well on fairness, because it adopts a discriminator to deal with the unfairness issue. Our method performs better than FairGNN in most cases. For example, our FairAC method can significantly improve the fairness metric ∆SP + ∆EO, i.e., 65%, 87%, and 67% improvement over FairGNN on the NBA, Pokec-z, and Pokec-n datasets, respectively. Overall, the results in Table 1 validate the effectiveness of FairAC in mitigating unfairness issues.
4.3 ABLATION STUDIES
Attribute missing rate. In our proposed framework, the attribute missing rate indicates the integrity of the node attribute matrix, which has a great impact on model performance. Here we investigate the performance of our FairAC method and baselines on graphs with varying degrees of missing attributes. In particular, we set the attribute missing rate to 0.1, 0.3, 0.5, and 0.8, and evaluate FairAC and baselines on the Pokec-z dataset. The detailed results are presented in Table 2. From the table, we observe that with varying values of α, FairAC is able to maintain its high fairness performance. Especially when α reaches 0.8, FairAC greatly outperforms the other methods. This shows that FairAC is effective even if the attributes are largely missing.
The effectiveness of adversarial learning. A key module in FairAC is adversarial learning, which is used to mitigate feature unfairness and topological unfairness. To investigate the contribution of adversarial learning in FairAC, we implement a BaseAC model, which only has the attention-based attribute completion module but does not contain the adversarial learning loss terms. Comparing BaseAC with FairAC in Table 2, we find that the fairness performance drops drastically when the adversarial training loss is removed. Since BaseAC does not have an adversarial discriminator to regulate the feature encoder and the attribute completion parameters, it is unable to mitigate unfairness. Overall, the results confirm the effectiveness of the adversarial learning module.
Parameter analysis. We investigate how the hyperparameters affect the performance of FairAC. The most important hyperparameter in FairAC is β, which adjusts the trade-off between fairness and attribute completion. We report the results with different hyperparameter values: we set β to 0.2, 0.4, 0.7, 0.8, and 0 (the latter is equivalent to BaseAC), and fix the other hyperparameters by setting α to 0.3. As shown in Figure 2, as β increases, the fairness performance improves while the accuracy of node classification declines slightly. This validates our assumption that there is a trade-off between fairness and attribute completion, and that FairAC is able to enhance fairness without compromising too much on accuracy.
5 CONCLUSIONS
In this paper, we presented a novel problem, i.e., fair attribute completion on graphs with missing attributes. To address this problem, we proposed the FairAC framework, which jointly completes the missing features and mitigates unfairness. FairAC leverages the attention mechanism to complete missing attributes and adopts a sensitive classifier to mitigate implicit feature unfairness as well as topological unfairness on graphs. Experimental results on three real-world datasets demonstrate the superiority of the proposed FairAC framework over baselines in terms of both node classification performance and fairness performance. As a generic fair graph attributes completion approach, FairAC can also be used in other graph-based downstream tasks, such as link prediction, graph regression, pagerank, and clustering.
ACKNOWLEDGEMENT
This research is supported by the Cisco Faculty Award and Adobe Data Science Research Award.
A APPENDIX
A.1 DATASETS AND SETTINGS
Datasets. In the experiments, we use three public graph datasets: NBA, Pokec-z, and Pokec-n. The NBA dataset (Dai & Wang, 2021) is extended from a Kaggle dataset containing around 400 NBA basketball players. It provides the performance statistics of those players in the 2016-2017 season and their personal profiles, e.g., nationality, age, and salary. Their relationships are obtained from Twitter. We use nationality, i.e., whether a player is a U.S. or overseas player, as the sensitive attribute. The node label is binary, indicating whether the salary of the player is over the median or not. Pokec (Takac & Zabovsky, 2012) is an online social network in Slovakia, which contains millions of anonymized user records. It has a variety of attributes, such as gender, age, education, and region. Based on the regions users belong to, Dai & Wang (2021) sampled two datasets named Pokec-z and Pokec-n. In our experiments, we consider the region or gender as the sensitive attribute, and the working field as the label for node classification. The statistics of the three datasets are summarized in Table 3.
Baselines. We compare our FairAC method with the following baseline methods:
• GCN (Kipf & Welling, 2016) with average attribute completion. GCN is a classical graph neural network model, which has obtained very promising performance in numerous applications. The standard GCN cannot handle graphs with missing attributes. In the experiments, we use the average attribute completion strategy to preprocess the feature matrix, by using the averaged attributes of one’s neighbors to approximate the missing attributes. After average attribute completion, GCN takes the graph with completed feature matrix as inputs to learn node embeddings and predict node labels.
• ALFR (Edwards & Storkey, 2015) with full attributes. This is a pre-processing method. It utilizes a discriminator to remove sensitive information from the feature embeddings produced by an autoencoder. Since this method needs full sensitive attributes and full features, we provide it with complete information; in other words, the missing rate α is set to 0.
• ALFR-e with full attributes. Building on ALFR, ALFR-e utilizes topological information: it concatenates the feature embeddings produced by ALFR with topological embeddings learned by DeepWalk (Perozzi et al., 2014). It also relies on complete information.
• Debias (Zhang et al., 2018) with full attributes. This is an in-processing method. It applies a discriminator on node classifier in order to make the probability distribution be the same w.r.t. sensitive attribute. Since the discriminator needs the full sensitive attributes, we provide full node features.
• Debias-e with full attributes. Similar to ALFR-e, it concatenates the topological embeddings learned by DeepWalk (Perozzi et al., 2014) with the feature embeddings learned by Debias.
• FCGE (Bose & Hamilton, 2019) with full attributes. It learns fair node embeddings in graphs without node features through edge prediction only. A discriminator is also applied to mitigate sensitive information from the topological perspective.
• FairGNN (Dai & Wang, 2021) with average attribute completion. FairGNN trains a sensitive attribute discriminator as an adversarial regularizer to enhance fairness; since it cannot handle nodes whose attributes are entirely missing, we apply average attribute completion to the feature matrix before training.
Implementation Details. Each dataset is randomly split into 75%/25% training/test sets, as in (Dai & Wang, 2021). Besides, we randomly drop node attributes based on the attribute missing rate α, which means the attributes of α × |V| nodes are unavailable. For each dataset, we choose a specific attribute as the sensitive attribute; in particular, region and nationality are selected as the sensitive attributes for the Pokec and NBA datasets, respectively. Unless otherwise specified, we generate 128-dimensional node embeddings, set the attribute missing rate α to 0.3, and set the hyperparameters of FairAC as β = 1 for the Pokec-z and NBA datasets and β = 0.5 for the Pokec-n dataset. We adopt Adam (Kingma & Ba, 2014) with a learning rate of 0.001 and weight decay of 1e−5. We adopt the DeepWalk (Perozzi et al., 2014) method to generate a topological embedding for each node. Specifically, we use the DeepWalk implementation provided by the Karate Club library (Rozemberczki et al., 2020), with walk length 100, embedding dimension 64, window size 5, and 10 epochs. To evaluate the fairness of the compared methods, we follow the widely used evaluation protocol in fair graph learning and set a threshold for accuracy, because there is a trade-off between accuracy and fairness. Since we mainly focus on the fairness metrics, we set an accuracy threshold that all methods can satisfy. We evaluated our models three times and calculated the mean and standard deviation (std). We estimate the std of ∆SP + ∆EO by adding the stds of ∆SP and ∆EO, because for some methods we use the reported results from (Dai & Wang, 2021), which do not provide this metric.
A.2 ADDITIONAL EXPERIMENTS
Evaluations on GAT (Veličković et al., 2018) model. As discussed in the main paper, the proposed FairAC method can be easily integrated with existing graph neural networks. Extensive results in Section 4 of the main paper demonstrate that the combination of FairAC and GCN performs very well. In this section, we integrate FairAC with another representative graph neural network model, GAT (Veličković et al., 2018). The results of our method and two main baselines in terms of the node classification accuracy and fairness metrics are shown in Table 4. In these experiments, FairAC generates fair and complete node features, and then GAT is trained for node classification. We also investigate the performance of our FairAC method and baselines on dealing with graphs with varying degrees of missing attributes. We set the attribute missing rate to 0.1, 0.3, 0.5 and 0.7, and evaluate FairAC and baselines on the Pokec-n dataset. In addition, we set β to 1.0. The best results are shown in bold. Generally speaking, we have the following observations. (1). The proposed method FairAC shows comparable classification performance with two baselines, GAT and FairGNN. This suggests that our attribute completion method is able to work well under different downstream models. It further demonstrates that FairAC can preserve useful information implied in the original attributes. (2). FairAC has comparable results with two baselines regarding fairness
metrics. Especially when α is greater than 0.3, FairAC can greatly outperform other methods, which proves that FairAC is effective even if the attributes are largely missing. Overall, the results in Table 4 validate the effectiveness of FairAC in mitigating unfairness issues and show the compatibility with varying downstream models. | 1. What is the focus and contribution of the paper on fair graph learning?
2. What are the strengths of the proposed approach, particularly in addressing the challenge of missing attributes?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or weaknesses in the paper, especially regarding its claims and comparisons with other works?
5. Do you have any questions regarding the paper's methodology, experiments, or conclusions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In practice, due to missing data or privacy issues, the attributes of some nodes may not be accessible, which makes fair graph learning more challenging. This paper proposes a fair attribute completion method, FairAC, to supplement the missing information and learn fair node embeddings for graphs with missing attributes, and adopts an attention mechanism to deal with the attribute missing problem and improve performance. The authors claim that FairAC is the first method to jointly solve the problems of graph attribute completion and graph unfairness. The experimental results show that this method achieves better fairness with less sacrifice in accuracy.
Strengths And Weaknesses
The authors claim that FairAC is the first method to jointly solve the problem of graph attribute completion and graph unfairness.
Clarity, Quality, Novelty And Reproducibility
This paper focuses on tackling unfairness in graph learning models with fair novelty and quality. |
ICLR | Title
Fair Attribute Completion on Graph with Missing Attributes
Abstract
Tackling unfairness in graph learning models is a challenging task, as the unfairness issues on graphs involve both attributes and topological structures. Existing work on fair graph learning simply assumes that attributes of all nodes are available for model training and then makes fair predictions. In practice, however, the attributes of some nodes might not be accessible due to missing data or privacy concerns, which makes fair graph learning even more challenging. In this paper, we propose FairAC, a fair attribute completion method, to complement missing information and learn fair node embeddings for graphs with missing attributes. FairAC adopts an attention mechanism to deal with the attribute missing problem and meanwhile, it mitigates two types of unfairness, i.e., feature unfairness from attributes and topological unfairness due to attribute completion. FairAC can work on various types of homogeneous graphs and generate fair embeddings for them and thus can be applied to most downstream tasks to improve their fairness performance. To our best knowledge, FairAC is the first method that jointly addresses the graph attribution completion and graph unfairness problems. Experimental results on benchmark datasets show that our method achieves better fairness performance with less sacrifice in accuracy, compared with the state-of-the-art methods of fair graph learning. Code is available at: https://github.com/donglgcn/FairAC.
1 INTRODUCTION
Graphs, such as social networks, biomedical networks, and traffic networks, are commonly observed in many real-world applications. A lot of graph-based machine learning methods have been proposed in the past decades, and they have shown promising performance in tasks like node similarity measurement, node classification, graph regression, and community detection. In recent years, graph neural networks (GNNs) have been actively studied (Scarselli et al., 2008; Wu et al., 2020; Jiang et al., 2019; 2020; Zhu et al., 2021c;b;a; Hua et al., 2020; Chu et al., 2021), which can model graphs with high-dimensional attributes in the non-Euclidean space and have achieved great success in many areas such as recommender systems (Sheu et al., 2021). However, it has been observed that many graphs are biased, and thus GNNs trained on the biased graphs may be unfair with respect to certain sensitive attributes such as demographic groups. For example, in a social network, if the users with the same gender have more active connections, the GNNs tend to pay more attention to such gender information and lead to gender bias by recommending more friends to a user with the same gender identity while ignoring other attributes like interests. And from the data privacy perspective, it is possible to infer one’s sensitive information from the results given by GNNs (Sun et al., 2018). In a time when GNNs are widely deployed in the real world, this severe unfairness is unacceptable. Thus, fairness in graph learning emerges and becomes notable very recently.
Existing work on fair graph learning mainly focuses on the pre-processing, in-processing, and post-processing steps of the graph learning pipeline in order to mitigate unfairness issues. The pre-processing approaches modify the original data to conceal sensitive attributes. Fairwalk (Rahman et al., 2019) is a representative pre-processing method, which gives each group of neighboring nodes an equal chance of being chosen in the sampling process. In many in-processing methods, the most popular strategy is to add a sensitive discriminator as a constraint, in order to filter out sensitive information from the original data. For example, FairGNN (Dai & Wang, 2021) adopts a sensitive
classifier to filter node embeddings. CFC (Bose & Hamilton, 2019) directly adds a filter layer to deal with unfairness issues. The post-processing methods directly force the final prediction to satisfy fairness constraints, such as (Hardt et al., 2016).
When the graphs have complete node attributes, existing fair graph learning methods could obtain promising performance on both fairness and accuracy. However, in practice, graphs may contain nodes whose attributes are entirely missing due to various reasons (e.g., newly added nodes, and data privacy concerns). Taking social networks as an example, a newly registered user may have incomplete profiles. Given such incomplete graphs, existing fair graph learning methods would fail, as they assume all the nodes have attributes for model training. Although FairGNN (Dai & Wang, 2021) also involves the missing attribute problem, it only assumes that a part of the sensitive attributes are missing. To the best of our knowledge, addressing the unfairness issue on graphs with some nodes whose attributes are entirely missing has not been investigated before. Another relevant topic is graph attribute completion (Jin et al., 2021; Chen et al., 2020). It mainly focuses on completing a precise graph but ignores the unfairness issues. In this work, we aim to jointly complete a graph with missing attributes and mitigate unfairness at both feature and topology levels.
In this paper, we study the new problem of learning fair embeddings for graphs with missing attributes. Specifically, we aim to address two major challenges: (1) how to obtain meaningful node embeddings for graphs with missing attributes, and (2) how to enhance fairness of node embeddings with respect to sensitive attributes. To address these two challenges, we propose a Fair Attribute Completion (FairAC) framework. For the first challenge, we adopt an autoencoder to obtain feature embeddings for nodes with attributes and meanwhile we adopt an attention mechanism to aggregate feature information of nodes with missing attributes from their direct neighbors. Then, we address the second challenge by mitigating two types of unfairness, i.e., feature unfairness and topological unfairness. We adopt a sensitive discriminator to regulate embeddings and create a bias-free graph.
The main contributions of this paper are as follows: (1) We present a new problem of achieving fairness on a graph with missing attributes. Different from the existing work, we assume that the attributes of some nodes are entirely missing. (2) We propose a new framework, FairAC, for fair graph attribute completion, which jointly addresses unfairness issues from the feature and topology perspectives. (3) FairAC is a generic approach to complete fair graph attributes, and thus can be used in many graph-based downstream tasks. (4) Extensive experiments on benchmark datasets demonstrate the effectiveness of FairAC in eliminating unfairness and maintaining comparable accuracy.
2 RELATED WORK
2.1 FAIRNESS IN GRAPH LEARNING
Recent work promotes fairness in graph-based machine learning (Bose & Hamilton, 2019; Rahman et al., 2019; Dai & Wang, 2021; Wang et al., 2022). These methods can be roughly divided into three categories, i.e., pre-processing methods, in-processing methods, and post-processing methods.
The pre-processing methods are applied before training downstream tasks by modifying training data. For instance, Fairwalk (Rahman et al., 2019) improves the sampling procedure of node2vec (Grover & Leskovec, 2016). Our FairAC framework can be viewed as a pre-processing method, as it seeks to complete node attributes and use them as input of graph neural networks. However, our problem is much harder than existing problems, because the attributes of some nodes in the graph are entirely missing, including both the sensitive ones and non-sensitive ones. Given an input graph with missing attributes, FairAC generates fair and complete feature embeddings and thus can be applied to many downstream tasks, such as node classification, link prediction (LibenNowell & Kleinberg, 2007; Taskar et al., 2003), PageRank (Haveliwala, 2003), etc. Graph learning models trained on the refined feature embeddings would make fair predictions in downstream tasks.
There are plenty of fair graph learning methods that act as in-processing solutions. Some works focus on unfairness issues on graphs with complete features. For example, GEAR (Ma et al., 2022) mitigates graph unfairness by counterfactual graph augmentation and adversarial learning to obtain sensitive-invariant embeddings. However, generating counterfactual subgraphs requires precise and complete features for every node; GEAR therefore cannot handle a graph in which some nodes have all of their attributes missing, since no counterfactual subgraph can be generated from a blank node, whereas our method can deal with this situation. The most related work is FairGNN (Dai & Wang, 2021). Different from the majority of problem settings on graph fairness, it learns fair GNNs for node classification in a graph where only a limited number of nodes are provided with sensitive attributes. FairGNN adopts a sensitive classifier to predict the missing sensitive labels and then employs a classic adversarial model to mitigate unfairness. Specifically, a sensitive discriminator aims to predict the known or estimated sensitive attributes, while a GNN model tries to fool the sensitive discriminator and meanwhile predicts node labels. However, FairGNN cannot predict sensitive information if a node misses all of its features in the first place and thus fails to achieve its final goal. Our FairAC avoids this problem because it recovers node embeddings from neighbors: FairAC learns attention weights between neighbors from the nodes with full attributes, so the embeddings of attribute-missing nodes can be recovered by aggregating the embeddings of their neighbors. With the help of adversarial learning, it also removes sensitive information. In addition to attribute completion, we design novel de-biasing strategies to mitigate feature unfairness and topological unfairness.
2.2 ATTRIBUTE COMPLETION ON GRAPHS
The problem of missing attributes is ubiquitous in reality. Several methods (Liao et al., 2016; You et al., 2020; Chen et al., 2020; He et al., 2022; Jin et al., 2021; 2022; Tu et al., 2022; Taguchi et al., 2021) have been proposed to address this problem. GRAPE (You et al., 2020) tackles the problem of missing attributes in tabular data using a graph-based approach. SAT (Chen et al., 2020) assumes that the topology representation and attributes share a common latent space, and thus the missing attributes can be recovered by aligning the paired latent space. He et al. (2022) and Jin et al. (2021) extend such problem settings to heterogeneous graphs. HGNN-AC (Jin et al., 2021) is an end-to-end model, which does not recover the original attributes but generates attribute representations that have sufficient information for the final prediction task. It is worth noting that existing methods on graph attribute completion only focus on the attribute completion accuracy or performance of downstream tasks, but none of them takes fairness into consideration. Instead, our work pays attention to the unfairness issue in graph learning, and we aim to generate fair feature embeddings for each node by attribute completion, which contain the majority of information inherited from original attributes but disentangle the sensitive information.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
Algorithm 1: FairAC framework.
Input: G = (V, E, X), S
Output: autoencoder fAE, sensitive classifier Cs, attribute completion module fAC
1: Obtain topological embeddings T with DeepWalk
2: repeat
3:   Obtain the feature embeddings H with fAE
4:   Optimize Cs by Equation 6
5:   Optimize fAE to mitigate feature unfairness by loss LF
6:   Divide V+ into Vkeep and Vdrop based on α
7:   Obtain the feature embeddings of the nodes with missing attributes (Vdrop) by fAC
8:   Optimize fAC to achieve attribute completion by loss LC
9:   Optimize fAC to mitigate topological unfairness by loss LT
10: until convergence
11: return fAE, Cs, fAC

Let G = (V, E, X) denote an undirected graph, where V = {v1, v2, ..., vN} is the set of N nodes, E ⊆ V × V is the set of undirected edges in the graph, X ∈ R^{N×D} is the node attribute matrix, and D is the dimension of attributes. A ∈ R^{N×N} is the adjacency matrix of the graph G, where Aij = 1 if nodes vi and vj are connected; otherwise, Aij = 0. In addition, S = {s1, s2, ..., sN} denotes the set of sensitive attributes (e.g., age or gender) of the N nodes, and Y = {y1, y2, ..., yN} denotes the node labels. The goal of fair graph learning is to make fair predictions of node labels with respect to the sensitive attribute, which is usually measured by fairness notions such as statistical parity (Dwork et al., 2012) and equal opportunity (Hardt et al., 2016). Statistical parity and equal opportunity are two group fairness definitions; their detailed formulations are presented below. The label y denotes the ground-truth node label, and the sensitive attribute s indicates one’s sensitive group. For example, for a binary node classification task, y has only two labels. Here we consider two sensitive groups, i.e., s ∈ {0, 1}.
• Statistical Parity (Dwork et al., 2012). It refers to the equal acceptance rate, which can be formulated as:
P (ŷ|s = 0) = P (ŷ|s = 1), (1) where P (·) denotes the probability that · occurs.
• Equal Opportunity (Hardt et al., 2016). It means the probability of a node in a positive class being classified as a positive outcome should be equal for both sensitive group nodes. It mathematically requires an equal true positive rate for each subgroup.
P (ŷ = 1|y = 1, s = 0) = P (ŷ = 1|y = 1, s = 1). (2)
In this work, we mainly focus on addressing unfairness issues on graphs with missing attributes, i.e., the attributes of some nodes are totally missing. Let V+ denote the set of nodes whose attributes are available, and V− denote the set of nodes whose attributes are missing, V = {V+, V−}. If vi ∈ V−, both Xi and si are unavailable during model training. With the notation given above, the fair attribute completion problem is formally defined as:
Problem 1. Given a graph G = (V, E, X), where the node set V+ ⊆ V has its attributes available together with the corresponding sensitive attributes in S, learn a fair attribute completion model that generates a fair feature embedding H for each node in V, i.e.,
f(G, S) → H, (3)
where f is the function we aim to learn. H should exclude any sensitive information while preserving non-sensitive information.
3.2 FAIR ATTRIBUTE COMPLETION (FAIRAC) FRAMEWORK
We propose a fair attribute completion (FairAC) framework to address Problem 1. Existing fair graph learning methods tackle unfairness issues by training fair graph neural networks in an end-to-end fashion, but they cannot effectively handle graphs that are severely biased due to missing attributes. Our FairAC framework, as a data-centric approach, deals with the unfairness issue from a new perspective, by explicitly debiasing the graph with feature unfairness mitigation and fairness-aware attribute completion. Eventually, FairAC generates fair embeddings for all nodes, including the ones without any attributes. The training procedure is shown in Algorithm 1.
To train the graph attribute completion model, we follow the setting in (Jin et al., 2021) and divide the nodes with attributes (i.e., V+) into two sets: Vkeep and Vdrop. For nodes in Vkeep, we keep their attributes, while for nodes in Vdrop, we temporarily drop their attributes and try to recover them using our attribute completion model. Although the nodes are randomly assigned to Vkeep and Vdrop, the proportion of Vdrop is consistent with the attribute missing rate α of graph G, i.e., α = |V−| / |V| = |Vdrop| / |V+|.
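For illustration, the random split of V+ could be implemented as follows; this is a minimal sketch under our own naming, not code from the released implementation:

```python
import numpy as np

def split_attributed_nodes(v_plus, alpha, seed=0):
    """Randomly divide the attributed nodes V+ into V_keep and V_drop so that
    the fraction assigned to V_drop matches the attribute missing rate alpha."""
    rng = np.random.default_rng(seed)
    v_plus = np.asarray(v_plus)
    perm = rng.permutation(len(v_plus))
    n_drop = int(round(alpha * len(v_plus)))
    v_drop = v_plus[perm[:n_drop]]   # attributes temporarily dropped, to be recovered
    v_keep = v_plus[perm[n_drop:]]   # attributes kept and used as reconstruction targets
    return v_keep, v_drop
```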
Different from existing work on fair graph learning, we consider unfairness from two sources. The first one is from node features. For example, we can roughly infer one’s sensitive information, like gender, from some non-sensitive attributes like hobbies. It means that non-sensitive attributes may imply sensitive attributes and thus lead to unfairness in model prediction. We adopt a sensitive discriminator to mitigate feature unfairness. The other source is topological unfairness introduced by graph topological embeddings and node attribute completion. To deal with the topological unfairness, we force the estimated feature embeddings to fool the sensitive discriminator, by updating attention parameters during the attribute completion process.
As illustrated in Figure 1, our FairAC framework first mitigates feature unfairness for nodes with attributes (i.e., Vkeep) by removing sensitive information implicitly contained in non-sensitive attributes with an auto-encoder and sensitive classifier (Section 3.2.1). For nodes without features (i.e., Vdrop), FairAC performs attribute completion with an attention mechanism (Section 3.2.2) and meanwhile mitigates the topological unfairness (Section 3.2.3). Finally, the FairAC model trained on Vkeep and Vdrop can be used to infer fair embeddings for nodes in V−. The overall loss function of FairAC is formulated as:
L = LF + LC + βLT , (4)
where LF represents the loss for mitigating feature unfairness, LC is the loss for attribute completion, and LT is the loss for mitigating topological unfairness. β is a trade-off hyperparameter.
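To make the interplay of the loss terms concrete, the sketch below shows how they could be assembled in one iteration. The tensor arguments stand for outputs of the (hypothetical) autoencoder, sensitive classifier Cs, and completion module; MSE is used as a stand-in for the root-squared-error in Eqs. (5) and (9), and in practice the classifier and the remaining modules are updated alternately as in Algorithm 1. This is an illustrative assumption, not the authors' released code:

```python
import torch.nn.functional as F

def fairac_losses(X_keep, X_hat_keep, s_keep_pred, s_keep,
                  H_hat_drop, H_drop, s_drop_pred, s_drop, beta):
    """Assemble the FairAC loss terms; sensitive predictions are probabilities in (0, 1)."""
    L_ae = F.mse_loss(X_hat_keep, X_keep)                # reconstruction, stand-in for Eq. (5)
    L_Cs = F.binary_cross_entropy(s_keep_pred, s_keep)   # sensitive classifier, Eq. (6)
    L_F = L_ae - beta * L_Cs                              # feature-unfairness loss (Sec. 3.2.1)
    L_C = F.mse_loss(H_hat_drop, H_drop)                  # completion loss, stand-in for Eq. (9)
    L_T = -F.binary_cross_entropy(s_drop_pred, s_drop)    # topological-fairness term, Eq. (10)
    total = L_F + L_C + beta * L_T                        # overall objective, Eq. (4)
    return total, (L_F, L_C, L_T)
```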
3.2.1 MITIGATING FEATURE UNFAIRNESS
The nodes in Vkeep have full attributes X, while some attributes may implicitly encode information about sensitive attributes S and thus lead to unfair predictions. To address this issue, FairAC aims to encode the attributes Xi of node i into a fair feature embedding Hi. Specifically, we use a simple autoencoder framework together with a sensitive classifier. The autoencoder maps Xi into embedding Hi, and meanwhile the sensitive classifier Cs is trained in an adversarial way, such that the embeddings are invariant to sensitive attributes.
Autoencoder. The autoencoder contains an encoder fE and a decoder fD. fE encodes the original attributes Xi to feature embeddings Hi, i.e., Hi = fE(Xi), and fD reconstructs attributes from the latent embeddings, i.e., X̂i = fD(Hi), where the reconstructed attributes X̂i should be as close to Xi as possible. The loss function of the autoencoder is written as:

\mathcal{L}_{ae} = \frac{1}{|V_{keep}|} \sum_{i \in V_{keep}} \sqrt{(\hat{X}_i - X_i)^2}. \quad (5)
Sensitive classifier. The sensitive classifier Cs is a simple multilayer perceptron (MLP) model. It takes the feature embedding Hi as input and predicts the sensitive attribute ŝi, i.e., ŝi = Cs(Hi). When the sensitive attributes are binary, we can use the binary cross-entropy loss to optimize Cs:

\mathcal{L}_{C_s} = -\frac{1}{|V_{keep}|} \sum_{i \in V_{keep}} \left[ s_i \log \hat{s}_i + (1 - s_i) \log (1 - \hat{s}_i) \right]. \quad (6)

With the sensitive classifier Cs, we can train the autoencoder adversarially, such that fE is able to generate fair feature embeddings that fool Cs. The loss LF is written as: \mathcal{L}_F = \mathcal{L}_{ae} - \beta \mathcal{L}_{C_s}.
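A minimal PyTorch sketch of the autoencoder and the sensitive classifier described above is given below; the layer sizes and module names are illustrative assumptions, not the authors' exact architecture:

```python
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    """Maps raw attributes X_i to embeddings H_i and reconstructs them (Eq. 5)."""
    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU(),
                                     nn.Linear(emb_dim, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                                     nn.Linear(emb_dim, in_dim))

    def encode(self, x):
        return self.encoder(x)

    def decode(self, h):
        return self.decoder(h)


class SensitiveClassifier(nn.Module):
    """MLP C_s predicting the sensitive attribute from H_i (Eq. 6); sigmoid output."""
    def __init__(self, emb_dim=128, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, h):
        return self.mlp(h).squeeze(-1)
```

Combining the reconstruction loss of the autoencoder with the negated classification loss of Cs then yields LF = Lae − βLCs.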
3.2.2 COMPLETING NODE EMBEDDINGS VIA ATTENTION MECHANISM
For nodes without attributes (Vdrop), FairAC makes use of topological embeddings and completes the node embeddings Hdrop with an attention mechanism.
Topological embeddings. Recent studies reveal that the topology of a graph carries semantic information similar to that of the attributes (Chen et al., 2020; McPherson et al., 2001; Pei et al., 2020; Zhu et al., 2020). Inspired by this observation, we assume that the nodes’ topological information can reflect the relationship between nodes’ attributes and the attributes of their neighbors. There are many off-the-shelf node topological embedding methods, such as DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016). For simplicity, we adopt the DeepWalk method to extract topological embeddings for nodes in V.
Attention mechanism. For graphs with missing attributes, a commonly used strategy is to average the attributes of the one-hop neighbors. This strategy works in some cases; however, simply averaging information from neighbors might be biased, as the result can be dominated by a few high-degree nodes. In fact, different neighbors should have varying contributions to the aggregation process in the context of fairness. To this end, FairAC adopts an attention mechanism (Vaswani et al., 2017) to learn the influence of different neighbors or edges with the awareness of fairness, and then aggregates attribute information for nodes in Vdrop. Given a pair of neighboring nodes (u, v), the contribution of node v is the attention att_{u,v}, defined as att_{u,v} = Attention(T_u, T_v), where T_u and T_v are the topological embeddings of nodes u and v, respectively. We only consider neighbor pairs and ignore node pairs that are not directly connected. Attention(·, ·) denotes the attention between two topological embeddings, i.e., Attention(T_u, T_v) = \sigma(T_u^{\top} W T_v), where W is a learnable parameter matrix and σ is an activation function. After obtaining all attention scores between a node and its neighbors, we compute the coefficient of each pair by applying the softmax function:

c_{u,v} = \mathrm{softmax}(att_{u,v}) = \frac{\exp(att_{u,v})}{\sum_{s \in N_u} \exp(att_{u,s})}, \quad (7)
where c_{u,v} is the coefficient of the node pair (u, v), and N_u is the set of neighbors of node u. For node u, FairAC calculates its feature embedding Ĥ_u by weighted aggregation with multi-head attention:

\hat{H}_u = \frac{1}{K} \sum_{k=1}^{K} \sum_{s \in N_u} c_{u,s} H_s, \quad (8)
where K is the number of attention heads. The loss for attribute completion with topological embeddings and the attention mechanism is formulated as:

\mathcal{L}_C = \frac{1}{|V_{drop}|} \sum_{i \in V_{drop}} \sqrt{(\hat{H}_i - H_i)^2}. \quad (9)
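A single-head sketch of this aggregation, written with dense tensors for readability and under our own naming, is shown below; the multi-head version of Eq. (8) averages K such aggregations, each with its own W:

```python
import torch
import torch.nn as nn

class AttentionCompletion(nn.Module):
    """Eqs. (7)-(8) with a single attention head: attention over topological
    embeddings T, then weighted aggregation of neighbors' feature embeddings H."""
    def __init__(self, topo_dim):
        super().__init__()
        self.W = nn.Parameter(torch.empty(topo_dim, topo_dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, T, H, adj):
        # att_{u,v} = sigma(T_u^T W T_v) for every pair, then restricted to direct neighbors
        att = torch.sigmoid(T @ self.W @ T.t())
        att = att.masked_fill(adj == 0, float('-inf'))
        coef = torch.softmax(att, dim=1)        # Eq. (7), row-wise over neighbors
        coef = torch.nan_to_num(coef)           # guard nodes that have no neighbors
        return coef @ H                         # Eq. (8) with K = 1
```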
3.2.3 MITIGATING TOPOLOGICAL UNFAIRNESS
The attribute completion procedure may introduce topological unfairness, since we assume that the topological information mirrors the relations among attributes. It is possible that the completed feature embeddings of Vdrop would be unfair with respect to the sensitive attributes S. To address this issue, FairAC leverages the sensitive classifier Cs to help mitigate topological unfairness by further updating the attention parameter matrix W and thus obtaining fair feature embeddings H. Inspired by (Gong et al., 2020), we expect the feature embeddings to fool the sensitive classifier Cs so that its predicted probability distribution is close to the uniform distribution over the sensitive categories, by minimizing the loss:

\mathcal{L}_T = -\frac{1}{|V_{drop}|} \sum_{i \in V_{drop}} \left[ s_i \log \hat{s}_i + (1 - s_i) \log (1 - \hat{s}_i) \right]. \quad (10)
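Continuing the sketch above, and assuming Cs outputs probabilities through a final sigmoid, the adversarial term of Eq. (10) could be computed as:

```python
import torch.nn.functional as F

def topological_fairness_loss(C_s, H_hat_drop, s_drop):
    """L_T from Eq. (10): negated binary cross-entropy of the sensitive classifier on
    the completed embeddings of V_drop; minimizing it makes these embeddings less
    informative about the sensitive attribute."""
    return -F.binary_cross_entropy(C_s(H_hat_drop), s_drop.float())
```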
3.3 FAIRAC FOR NODE CLASSIFICATION
The proposed FairAC framework could be viewed as a generic data debiasing approach, which achieves fairness-aware attribute completion and node embedding for graphs with missing attributes. It can be easily integrated with many existing graph neural networks (e.g., GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017)) for tasks like node classification. In this work, we choose the basic GCN model for node classification and assess how FairAC enhances model performance in terms of accuracy and fairness.
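As an illustration of this downstream step, the completed fair embeddings can be fed to a standard two-layer GCN, e.g. with PyTorch Geometric; this is a sketch only, and the layer sizes and exact classifier configuration used in the experiments may differ:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class DownstreamGCN(torch.nn.Module):
    """Two-layer GCN classifier that consumes the fair embeddings produced by FairAC."""
    def __init__(self, emb_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(emb_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, h, edge_index):
        # h: completed fair node embeddings for all nodes, including those in V_-
        x = F.relu(self.conv1(h, edge_index))
        return self.conv2(x, edge_index)
```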
4 EXPERIMENTS
In this section, we evaluate the performance of the proposed FairAC framework on three benchmark datasets in terms of node classification accuracy and fairness w.r.t. sensitive attributes. We compare FairAC with other baseline methods in settings with various sensitive attributes or different attribute missing rates. Ablation studies are also provided and discussed.
4.1 DATASETS AND SETTINGS
Datasets. In the experiments, we use three public graph datasets, NBA, Pokec-z, and Pokec-n. A detailed description is shown in supplementary materials.
Baselines. We compare our FairAC method with the following baseline methods: GCN (Kipf & Welling, 2016), ALFR (Edwards & Storkey, 2015), ALFR-e, Debias (Zhang et al., 2018), Debias-e, FCGE (Bose & Hamilton, 2019), and FairGNN (Dai & Wang, 2021). ALFR-e concatenates the feature embeddings produced by ALFR with topological embeddings learned by DeepWalk (Perozzi et al., 2014). Debias-e likewise concatenates the topological embeddings learned by DeepWalk with the feature embeddings learned by Debias. FairGNN is an end-to-end debiasing method that aims to mitigate unfairness in the label prediction task. GCN and FairGNN use the average attribute completion method, while the other baselines use the original complete attributes.
Evaluation Metrics. We evaluate the proposed framework with respect to two aspects: classification performance and fairness performance. For classification, we use accuracy and AUC scores. As for fairness, we adopt ∆SP and ∆EO as evaluation metrics, which can be defined as:
∆SP = P (ŷ|s = 0)− P (ŷ|s = 1), (11)
∆EO = P (ŷ = 1|y = 1, s = 0)− P (ŷ = 1|y = 1, s = 1). (12) The smaller ∆SP and ∆EO are, the more fair the model is. In addition, we use ∆SP+∆EO as an overall indicator of a model’s performance on fairness.
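Both metrics can be computed directly from the binary predictions, labels, and sensitive groups, for example as below (a sketch assuming 0/1 arrays; absolute values are reported since smaller magnitudes indicate a fairer model):

```python
import numpy as np

def fairness_metrics(y_pred, y_true, s):
    """Delta_SP (Eq. 11) and Delta_EO (Eq. 12) for binary predictions, labels,
    and a binary sensitive attribute."""
    y_pred, y_true, s = map(np.asarray, (y_pred, y_true, s))
    d_sp = y_pred[s == 0].mean() - y_pred[s == 1].mean()
    d_eo = (y_pred[(y_true == 1) & (s == 0)].mean()
            - y_pred[(y_true == 1) & (s == 1)].mean())
    return abs(d_sp), abs(d_eo)
```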
4.2 RESULTS AND ANALYSIS
4.2.1 UNFAIRNESS ISSUES IN GRAPH NEURAL NETWORKS
The results shown in Table 1 reveal several unfairness issues in graph neural networks, which we divide into two categories.
• Feature unfairness. Feature unfairness arises when some non-sensitive attributes can be used to infer sensitive information. Hence, a graph neural network may learn this relation and make unfair predictions. In most cases, ALFR, Debias, and FCGE show better fairness performance than the GCN method. This is as expected, because the non-sensitive features may contain proxy variables of sensitive attributes, which leads to biased predictions. Thus, ALFR and Debias, which try to break these connections, are able to mitigate feature unfairness and obtain better fairness performance. These results further confirm the existence of feature unfairness.
• Topological unfairness. Topological unfairness originates from the graph structure, i.e., the edges of the graph; misrepresentation due to connections (Mehrabi et al., 2021) can introduce topological unfairness. In the experiments, ALFR-e and Debias-e show worse fairness performance than ALFR and Debias, respectively. This shows that although the graph structure improves classification performance, it also introduces topological unfairness. The worse fairness performance verifies that topological unfairness exists in GNNs and that graph topological information can magnify discrimination.
4.2.2 EFFECTIVENESS OF FAIRAC ON MITIGATING FEATURE AND TOPOLOGICAL UNFAIRNESS
The results of our FairAC method and the baselines in terms of node classification accuracy and fairness metrics on the three datasets are shown in Table 1. The best results are shown in bold. Generally speaking, we have the following observations. (1) The proposed FairAC shows classification performance comparable to the baselines GCN and FairGNN, which suggests that our attribute completion method preserves the useful information contained in the original attributes. (2) FairAC outperforms all baselines on the fairness metrics, especially on ∆SP+∆EO. FairAC outperforms baselines that focus on mitigating feature unfairness, such as ALFR, which indicates that FairAC also mitigates topological unfairness. Moreover, it is better than methods that take topological fairness into consideration, such as FCGE, which further validates its effectiveness. FairGNN also performs well on fairness because it adopts a discriminator to deal with the unfairness issue, but our method performs better than FairGNN in most cases. For example, FairAC significantly improves the fairness metric ∆SP+∆EO, i.e., by 65%, 87%, and 67% over FairGNN on the NBA, Pokec-z, and Pokec-n datasets, respectively. Overall, the results in Table 1 validate the effectiveness of FairAC in mitigating unfairness issues.
4.3 ABLATION STUDIES
Attribute missing rate. In our proposed framework, the attribute missing rate indicates the integrity of the node attribute matrix, which has a great impact on model performance. Here we investigate how our FairAC method and the baselines deal with graphs with varying degrees of missing attributes. In particular, we set the attribute missing rate to 0.1, 0.3, 0.5, and 0.8, and evaluate FairAC and the baselines on the Pokec-z dataset. The detailed results are presented in Table 2. From the table, we observe that FairAC maintains high fairness performance across varying values of α. In particular, when α reaches 0.8, FairAC greatly outperforms the other methods, which shows that FairAC is effective even when the attributes are largely missing.
The effectiveness of adversarial learning. A key module in FairAC is adversarial learning, which is used to mitigate feature unfairness and topological unfairness. To investigate its contribution, we implement a BaseAC model, which only has the attention-based attribute completion module and does not contain the adversarial learning loss terms. Comparing BaseAC with FairAC in Table 2, we find that the fairness performance drops drastically when the adversarial training loss is removed. Since BaseAC does not have an adversarial discriminator to regulate the feature encoder and the attribute completion parameters, it is unable to mitigate unfairness. Overall, the results confirm the effectiveness of the adversarial learning module.
Parameter analysis. We investigate how the hyperparameters affect the performance of FairAC. The most important hyperparameter in FairAC is β, which adjusts the trade-off between fairness and attribute completion. We report results with different hyperparameter values, setting β to 0.2, 0.4, 0.7, 0.8, and 0 (the latter being equivalent to BaseAC), while fixing the other hyperparameters and setting α to 0.3. As shown in Figure 2, as β increases, the fairness performance improves while the node classification accuracy declines slightly. This validates our assumption that there is a trade-off between fairness and attribute completion, and shows that FairAC enhances fairness without compromising too much accuracy.
5 CONCLUSIONS
In this paper, we presented a novel problem, i.e., fair attribute completion on graphs with missing attributes. To address this problem, we proposed the FairAC framework, which jointly completes the missing features and mitigates unfairness. FairAC leverages the attention mechanism to complete missing attributes and adopts a sensitive classifier to mitigate implicit feature unfairness as well as topological unfairness on graphs. Experimental results on three real-world datasets demonstrate the superiority of the proposed FairAC framework over baselines in terms of both node classification performance and fairness performance. As a generic fair graph attribute completion approach, FairAC can also be used in other graph-based downstream tasks, such as link prediction, graph regression, PageRank, and clustering.
ACKNOWLEDGEMENT
This research is supported by the Cisco Faculty Award and Adobe Data Science Research Award.
A APPENDIX
A.1 DATASETS AND SETTINGS
Datasets. In the experiments, we use three public graph datasets: NBA, Pokec-z, and Pokec-n. The NBA dataset (Dai & Wang, 2021) is extended from a Kaggle dataset containing around 400 NBA basketball players. It provides the performance statistics of those players in the 2016-2017 season and their personal profiles, e.g., nationality, age, and salary. Their relationships are obtained from Twitter. We use nationality, i.e., whether a player is a U.S. player or an overseas player, as the sensitive attribute. The node label is binary, indicating whether the player's salary is over the median or not. Pokec (Takac & Zabovsky, 2012) is an online social network in Slovakia, which contains millions of anonymized user records. It has a variety of attributes, such as gender, age, education, and region. Based on the regions users belong to, (Dai & Wang, 2021) sampled two datasets, named Pokec-z and Pokec-n. In our experiments, we consider the region or gender as the sensitive attribute, and the working field as the label for node classification. The statistics of the three datasets are summarized in Table 3.
Baselines. We compare our FairAC method with the following baseline methods:
• GCN (Kipf & Welling, 2016) with average attribute completion. GCN is a classical graph neural network model, which has obtained very promising performance in numerous applications. The standard GCN cannot handle graphs with missing attributes. In the experiments, we use the average attribute completion strategy to preprocess the feature matrix, by using the averaged attributes of one’s neighbors to approximate the missing attributes. After average attribute completion, GCN takes the graph with completed feature matrix as inputs to learn node embeddings and predict node labels.
• ALFR (Edwards & Storkey, 2015) with full attributes. This is a pre-processing method. It utilizes a discriminator to remove sensitive information from the feature embeddings produced by an autoencoder. Since this method needs full sensitive attributes and full features, we provide it with complete information; in other words, the missing rate α is set to 0.
• ALFR-e with full attributes. Based on ALFR, ALFR-e utilizes topological information: it concatenates the feature embeddings produced by ALFR with the topological embeddings learned by DeepWalk (Perozzi et al., 2014). It also relies on complete information.
• Debias (Zhang et al., 2018) with full attributes. This is an in-processing method. It applies a discriminator to the node classifier in order to make the prediction probability distribution the same w.r.t. the sensitive attribute. Since the discriminator needs the full sensitive attributes, we provide full node features.
• Debias-e with full attributes. Similar to ALFR-e, it concatenates the topological embeddings learned by DeepWalk (Perozzi et al., 2014) with the feature embeddings learned by Debias.
• FCGE (Bose & Hamilton, 2019) with full attributes. It learns fair node embeddings in graphs without node features through edge prediction only. A discriminator is also applied to mitigate sensitive information from the topological perspective.
• FairGNN (Dai & Wang, 2021) with average attribute completion. Although FairGNN trains a sensitive attribute discriminator as an adversarial regularizer to enhance fairness, it cannot handle nodes whose attributes are entirely missing; therefore, as for GCN, we use the average attribute completion strategy to preprocess the feature matrix before training.
Implementation Details. Each dataset is randomly split into a 75%/25% training/test set as in (Dai & Wang, 2021). Besides, we randomly drop node attributes based on the attribute missing rate α, which means the attributes of α × |V| nodes are unavailable. For each dataset, we choose a specific attribute as the sensitive attribute: region and nationality are selected for the Pokec and NBA datasets, respectively. Unless otherwise specified, we generate 128-dimensional node embeddings, set the attribute missing rate α to 0.3, and set the hyperparameter of FairAC as β = 1 for the Pokec-z and NBA datasets and β = 0.5 for the Pokec-n dataset. We adopt Adam (Kingma & Ba, 2014) with a learning rate of 0.001 and weight decay of 1e−5. We adopt the DeepWalk (Perozzi et al., 2014) method to generate the topological embedding for each node; specifically, we use the DeepWalk implementation provided by the Karate Club library (Rozemberczki et al., 2020), with walk length 100, embedding dimension 64, window size 5, and 10 epochs. To evaluate the fairness of the compared methods, we follow the widely used evaluation protocol in fair graph learning and set a threshold for accuracy, because there is a trade-off between accuracy and fairness. Since we mainly focus on the fairness metrics, we set an accuracy threshold that all methods can satisfy. We evaluated our models three times and calculated the mean and standard deviation (std). We estimate the std of ∆SP+∆EO by adding the std of ∆SP and the std of ∆EO, because for some methods we use the reported results from (Dai & Wang, 2021), which do not provide this quantity.
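For instance, these topological embeddings could be produced with the Karate Club implementation of DeepWalk and the hyperparameters stated above; the sketch assumes the karateclub DeepWalk(walk_length, dimensions, window_size, epochs) / fit / get_embedding interface and a NetworkX graph with consecutive integer node ids:

```python
import networkx as nx
from karateclub import DeepWalk

def topological_embeddings(edge_list, num_nodes):
    """DeepWalk topological embeddings with the hyperparameters reported above."""
    graph = nx.Graph()
    graph.add_nodes_from(range(num_nodes))   # karateclub expects consecutive integer ids
    graph.add_edges_from(edge_list)
    model = DeepWalk(walk_length=100, dimensions=64, window_size=5, epochs=10)
    model.fit(graph)
    return model.get_embedding()             # numpy array of shape (num_nodes, 64)
```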
A.2 ADDITIONAL EXPERIMENTS
Evaluations on the GAT (Veličković et al., 2018) model. As discussed in the main paper, the proposed FairAC method can be easily integrated with existing graph neural networks. Extensive results in Section 4 of the main paper demonstrate that the combination of FairAC and GCN performs very well. In this section, we integrate FairAC with another representative graph neural network model, GAT (Veličković et al., 2018). The results of our method and two main baselines in terms of node classification accuracy and fairness metrics are shown in Table 4. In these experiments, FairAC generates fair and complete node features, and then GAT is trained for node classification. We also investigate how our FairAC method and the baselines deal with graphs with varying degrees of missing attributes: we set the attribute missing rate to 0.1, 0.3, 0.5, and 0.7, evaluate FairAC and the baselines on the Pokec-n dataset, and set β to 1.0. The best results are shown in bold. Generally speaking, we have the following observations. (1) The proposed FairAC shows classification performance comparable to the two baselines, GAT and FairGNN. This suggests that our attribute completion method works well under different downstream models and further demonstrates that FairAC preserves the useful information implied in the original attributes. (2) FairAC achieves fairness results comparable to the two baselines; in particular, when α is greater than 0.3, FairAC greatly outperforms the other methods, which shows that FairAC is effective even if the attributes are largely missing. Overall, the results in Table 4 validate the effectiveness of FairAC in mitigating unfairness issues and show its compatibility with different downstream models. | 1. What is the focus of the paper regarding graph attribute completion and unfairness problems?
2. What are the strengths and weaknesses of the proposed method FairAC?
3. Do you have any concerns or questions about the definitions and measurements used in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a method (FairAC) to jointly address the graph attribute completion and graph unfairness problems, covering both feature unfairness and topological unfairness due to missing attributes. The model uses a split into V_keep and V_drop nodes as in Jin et al. (2021), applies attribute completion with an attention mechanism for nodes without features (which may introduce topological unfairness, handled by a sensitive classifier), and mitigates feature unfairness for nodes with attributes by removing implicit sensitive information present in non-sensitive attributes via an autoencoder. Fairness is measured through Statistical Parity and Equal Opportunity. The experiments compare FairAC with several baselines on three datasets, testing node classification accuracy, fairness w.r.t. sensitive attributes, and the effects of sensitive attributes, missing rates, and ablations (attribute missing rate, relevance of the adversarial training loss, and the trade-off hyperparameter β).
Strengths And Weaknesses
With respect to the strengths, the paper is well organized, and the problem is relevant to the ML community, particularly the graph-learning, GNN, and network science communities. The goals are clear despite some imprecisions in the definitions, and the model is relatively parsimonious, dealing with specific sub-goals to achieve the final aim of fairness in completion. There are some areas where the paper could be improved. For instance, there are several imprecisions, such as in the abstract where it is stated that "FairAC can be applied to any graph and generate fair embeddings". The evidence provided is solid, but stating that it will generate fair embeddings for any graph is too optimistic. The matrix X is defined as having attribute dimension D, yet each of the labels (and their predictions) is defined as a unidimensional y ∈ {0, 1}. The problem definition is a little hand-wavy. What is a fair feature embedding? The concept of a "fair attribute completion model" relies on fair feature embeddings and, although it is discussed in another paragraph that fairness is measured via Statistical Parity and Equal Opportunity, the concept is not formally defined. The feature unfairness description regarding how "some non-sensitive attributes could infer sensitive information" seems not to be measured by Statistical Parity (SP) and Equal Opportunity (EO): the former also implies a privacy concern, while SP and EO deal with fairness of representation. Finally, the statement "removing sensitive information implicitly contained in non-sensitive attributes" also has the implication that the removed sensitive information could affect performance when the number of data points from subjects belonging to a minority group is not well represented. How is that scenario handled?
Clarity, Quality, Novelty And Reproducibility
The paper is relatively clear, although some areas need polishing and some concepts (most notably, the risk of inference of sensitive information) require further evaluation. The ideas are interesting and the model is relatively parsimonious. The novelty lies more in the application of existing tools to solve a problem in attribute prediction.
ICLR | Title
Fair Attribute Completion on Graph with Missing Attributes
Abstract
Tackling unfairness in graph learning models is a challenging task, as the unfairness issues on graphs involve both attributes and topological structures. Existing work on fair graph learning simply assumes that attributes of all nodes are available for model training and then makes fair predictions. In practice, however, the attributes of some nodes might not be accessible due to missing data or privacy concerns, which makes fair graph learning even more challenging. In this paper, we propose FairAC, a fair attribute completion method, to complement missing information and learn fair node embeddings for graphs with missing attributes. FairAC adopts an attention mechanism to deal with the attribute missing problem and meanwhile, it mitigates two types of unfairness, i.e., feature unfairness from attributes and topological unfairness due to attribute completion. FairAC can work on various types of homogeneous graphs and generate fair embeddings for them and thus can be applied to most downstream tasks to improve their fairness performance. To our best knowledge, FairAC is the first method that jointly addresses the graph attribution completion and graph unfairness problems. Experimental results on benchmark datasets show that our method achieves better fairness performance with less sacrifice in accuracy, compared with the state-of-the-art methods of fair graph learning. Code is available at: https://github.com/donglgcn/FairAC.
1 INTRODUCTION
Graphs, such as social networks, biomedical networks, and traffic networks, are commonly observed in many real-world applications. A lot of graph-based machine learning methods have been proposed in the past decades, and they have shown promising performance in tasks like node similarity measurement, node classification, graph regression, and community detection. In recent years, graph neural networks (GNNs) have been actively studied (Scarselli et al., 2008; Wu et al., 2020; Jiang et al., 2019; 2020; Zhu et al., 2021c;b;a; Hua et al., 2020; Chu et al., 2021), which can model graphs with high-dimensional attributes in the non-Euclidean space and have achieved great success in many areas such as recommender systems (Sheu et al., 2021). However, it has been observed that many graphs are biased, and thus GNNs trained on the biased graphs may be unfair with respect to certain sensitive attributes such as demographic groups. For example, in a social network, if the users with the same gender have more active connections, the GNNs tend to pay more attention to such gender information and lead to gender bias by recommending more friends to a user with the same gender identity while ignoring other attributes like interests. And from the data privacy perspective, it is possible to infer one’s sensitive information from the results given by GNNs (Sun et al., 2018). In a time when GNNs are widely deployed in the real world, this severe unfairness is unacceptable. Thus, fairness in graph learning emerges and becomes notable very recently.
Existing work on fair graph learning mainly focuses on the pre-processing, in-processing, and postprocessing steps in the graph learning pipeline in order to mitigate the unfairness issues. The preprocessing approaches modify the original data to conceal sensitive attributes. Fairwalk (Rahman et al., 2019) is a representative pre-processing method, which enforces each group of neighboring nodes an equal chance to be chosen in the sampling process. In many in-processing methods, the most popular way is to add a sensitive discriminator as a constraint, in order to filter out sensitive information from original data. For example, FairGNN (Dai & Wang, 2021) adopts a sensitive
classifier to filter node embeddings. CFC (Bose & Hamilton, 2019) directly adds a filter layer to deal with unfairness issues. The post-processing methods directly force the final prediction to satisfy fairness constraints, such as (Hardt et al., 2016).
When the graphs have complete node attributes, existing fair graph learning methods could obtain promising performance on both fairness and accuracy. However, in practice, graphs may contain nodes whose attributes are entirely missing due to various reasons (e.g., newly added nodes, and data privacy concerns). Taking social networks as an example, a newly registered user may have incomplete profiles. Given such incomplete graphs, existing fair graph learning methods would fail, as they assume all the nodes have attributes for model training. Although FairGNN (Dai & Wang, 2021) also involves the missing attribute problem, it only assumes that a part of the sensitive attributes are missing. To the best of our knowledge, addressing the unfairness issue on graphs with some nodes whose attributes are entirely missing has not been investigated before. Another relevant topic is graph attribute completion (Jin et al., 2021; Chen et al., 2020). It mainly focuses on completing a precise graph but ignores the unfairness issues. In this work, we aim to jointly complete a graph with missing attributes and mitigate unfairness at both feature and topology levels.
In this paper, we study the new problem of learning fair embeddings for graphs with missing attributes. Specifically, we aim to address two major challenges: (1) how to obtain meaningful node embeddings for graphs with missing attributes, and (2) how to enhance fairness of node embeddings with respect to sensitive attributes. To address these two challenges, we propose a Fair Attribute Completion (FairAC) framework. For the first challenge, we adopt an autoencoder to obtain feature embeddings for nodes with attributes and meanwhile we adopt an attention mechanism to aggregate feature information of nodes with missing attributes from their direct neighbors. Then, we address the second challenge by mitigating two types of unfairness, i.e., feature unfairness and topological unfairness. We adopt a sensitive discriminator to regulate embeddings and create a bias-free graph.
The main contributions of this paper are as follows: (1) We present a new problem of achieving fairness on a graph with missing attributes. Different from the existing work, we assume that the attributes of some nodes are entirely missing. (2) We propose a new framework, FairAC, for fair graph attribute completion, which jointly addresses unfairness issues from the feature and topology perspectives. (3) FairAC is a generic approach to complete fair graph attributes, and thus can be used in many graph-based downstream tasks. (4) Extensive experiments on benchmark datasets demonstrate the effectiveness of FairAC in eliminating unfairness and maintaining comparable accuracy.
2 RELATED WORK
2.1 FAIRNESS IN GRAPH LEARNING
Recent work promotes fairness in graph-based machine learning (Bose & Hamilton, 2019; Rahman et al., 2019; Dai & Wang, 2021; Wang et al., 2022). They can be roughly divided into three categories, i.e., the pre-processing methods, in-processing methods, and post-processing methods.
The pre-processing methods are applied before training downstream tasks by modifying training data. For instance, Fairwalk (Rahman et al., 2019) improves the sampling procedure of node2vec (Grover & Leskovec, 2016). Our FairAC framework can be viewed as a pre-processing method, as it seeks to complete node attributes and use them as input of graph neural networks. However, our problem is much harder than existing problems, because the attributes of some nodes in the graph are entirely missing, including both the sensitive ones and non-sensitive ones. Given an input graph with missing attributes, FairAC generates fair and complete feature embeddings and thus can be applied to many downstream tasks, such as node classification, link prediction (LibenNowell & Kleinberg, 2007; Taskar et al., 2003), PageRank (Haveliwala, 2003), etc. Graph learning models trained on the refined feature embeddings would make fair predictions in downstream tasks.
There are plenty of fair graph learning methods as in-processing solutions. Some work focus on dealing with unfairness issues on graphs with complete features. For example, GEAR (Ma et al., 2022) mitigates graph unfairness by counterfactual graph augmentation and an adversarial learning method to learn sensitive-invariant embeddings. However, in order to generate counterfactual subgraphs, they need precise and entire features for every node. In other words, it cannot work well if it encounters a graph with full missing nodes since it cannot generate counterfactual subgraph based
on a blank node. But we can deal with the situation. The most related work is FairGNN (Dai & Wang, 2021). Different from the majority of problem settings on graph fairness. It learns fair GNNs for node classification in a graph where only a limited number of nodes are provided with sensitive attributes. FairGNN adopts a sensitive classifier to predict the missing sensitive labels. After that, it employs a classic adversarial model to mitigate unfairness.Specifically, a sensitive discriminator aims to predict the known or estimated sensitive attributes, while a GNN model tries to fool the sensitive discriminator and meanwhile predicts node labels. However, it cannot predict sensitive information if a node misses all features in the first place and thus will fail to achieve its final goal. Our FairAC can get rid of the problem because we recover the node embeddings from their neighbors. FairAC learns attention between neighbors according to existing full attribute nodes, so we can recover the node embeddings for missing nodes from their neighbors by aggregating the embeddings of neighbors. With the help of the adversarial learning method, it can also remove sensitive information. In addition to attribute completion, we have also designed novel de-biasing strategies to mitigate feature unfairness and topological unfairness.
2.2 ATTRIBUTION COMPLETION ON GRAPHS
The problem of missing attributes is ubiquitous in reality. Several methods (Liao et al., 2016; You et al., 2020; Chen et al., 2020; He et al., 2022; Jin et al., 2021; 2022; Tu et al., 2022; Taguchi et al., 2021) have been proposed to address this problem. GRAPE (You et al., 2020) tackles the problem of missing attributes in tabular data using a graph-based approach. SAT (Chen et al., 2020) assumes that the topology representation and attributes share a common latent space, and thus the missing attributes can be recovered by aligning the paired latent space. He et al. (2022) and Jin et al. (2021) extend such problem settings to heterogeneous graphs. HGNN-AC (Jin et al., 2021) is an end-to-end model, which does not recover the original attributes but generates attribute representations that have sufficient information for the final prediction task. It is worth noting that existing methods on graph attribute completion only focus on the attribute completion accuracy or performance of downstream tasks, but none of them takes fairness into consideration. Instead, our work pays attention to the unfairness issue in graph learning, and we aim to generate fair feature embeddings for each node by attribute completion, which contain the majority of information inherited from original attributes but disentangle the sensitive information.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
Let G = (V, E ,X ) denote an undirected graph, where V = {v1, v2, ..., vN} is the set of N nodes, E ⊆ V × V is the set of undirected edges in the graph, X ∈ RN×D is the node attribute matrix, and D is the dimension of attributes. A ∈ RN×N is the adjacency matrix of the graph G, where Aij = 1 if nodes vi and vj are connected; otherwise, Aij = 0. In addition, S = {s1, s2, ..., sN}
Algorithm 1 FairAC framework algorithm Input: G = (V, E ,X ), S Output: Autoencoder fAE , Sensitive classifier Cs, Attribute completion fAC
1: Obtain topological embedding T with DeepWalk 2: repeat 3: Obtain the feature embeddings H with fAE 4: Optimize the Cs by Equation 6 5: Optimize fAE to mitigate feature unfairness by loss LF 6: Divide V+ into Vkeep and Vdrop based on α 7: Obtain the feature embeddings of nodes with missing attributes Vdrop by fAC 8: Optimize fAC to achieve attribute completion by loss LC 9: Optimize fAC to mitigate topological unfairness by loss LT
10: until convergence 11: return fAE , Cs, fAC
denotes a set of sensitive attributes (e.g., age or gender) of N nodes, and Y = {y1, y2, ..., yN} denotes the node labels. The goal of fair graph learning is to make fair predictions of node labels with respect to the sensitive attribute, which is usually measured by certain fairness notations like statistical parity (Dwork et al., 2012) and equal opportunity (Hardt et al., 2016). Statistical Parity and Equal Opportunity are two group fairness definitions. Their detailed formulations are presented below. The label y denotes the ground-truth node label, and the sensitive attribute s indicates one’s sensitive group. For example, for binary node classification task, y only has two labels. Here we consider two sensitive groups, i.e. s ∈ {0, 1}.
• Statistical Parity (Dwork et al., 2012). It refers to the equal acceptance rate, which can be formulated as:
P (ŷ|s = 0) = P (ŷ|s = 1), (1) where P (·) denotes the probability that · occurs.
• Equal Opportunity (Hardt et al., 2016). It means the probability of a node in a positive class being classified as a positive outcome should be equal for both sensitive group nodes. It mathematically requires an equal true positive rate for each subgroup.
P (ŷ = 1|y = 1, s = 0) = P (ŷ = 1|y = 1, s = 1). (2)
In this work, we mainly focus on addressing unfairness issues on graphs with missing attributes, i.e., attributes of some nodes are totally missing. Let V+ denote the set of nodes whose attributes are available, and V− denote the set of nodes whose attributes are missing, V = {V+,V−}. If vi ∈ V−, both Xi and si are unavailable during model training. With the notations given below, the fair attribute completion problem is formally defined as:
Problem 1. Given a graph G = (V, E ,X ), where node set V+ ∈ V with the corresponding attributes available and the corresponding sensitive attributes in S, learn a fair attribute completion model to generate fair feature embeddings H for each node in V , i.e.,
f(G, S) → H, (3)
where f is the function we aim to learn. H should exclude any sensitive information while preserve non-sensitive information.
3.2 FAIR ATTRIBUTE COMPLETION (FAIRAC) FRAMEWORK
We propose a fair attribute completion (FairAC) framework to address Problem 1. Existing fair graph learning methods tackle unfairness issues by training fair graph neural networks in an endto-end fashion, but they cannot effectively handle graphs that are severely biased due to missing attributes. Our FairAC framework, as a data-centric approach, deals with the unfairness issue from a new perspective, by explicitly debiasing the graph with feature unfairness mitigation and fairnessaware attribute completion. Eventually, FairAC generates fair embeddings for all nodes including the ones without any attributes. The training algorithms are shown in Algorithm 1.
To train the graph attribute completion model, we follow the setting in (Jin et al., 2021) and divide the nodes with attributes (i.e., V+) into two sets: Vkeep and Vdrop. For nodes in Vkeep, we keep their attributes, while for nodes in Vdrop, we temporally drop their attributes and try to recover them using our attribute completion model. Although the nodes are randomly assigned to Vkeep and Vdrop, the proportion of Vdrop is consistent with the attribute missing rate α of graph G, i.e., α = |V
−| |V| = |Vdrop| |V+| .
Different from existing work on fair graph learning, we consider unfairness from two sources. The first one is from node features. For example, we can roughly infer one’s sensitive information, like gender, from some non-sensitive attributes like hobbies. It means that non-sensitive attributes may imply sensitive attributes and thus lead to unfairness in model prediction. We adopt a sensitive discriminator to mitigate feature unfairness. The other source is topological unfairness introduced by graph topological embeddings and node attribute completion. To deal with the topological unfairness, we force the estimated feature embeddings to fool the sensitive discriminator, by updating attention parameters during the attribute completion process.
As illustrated in Figure 1, our FairAC framework first mitigates feature unfairness for nodes with attributes (i.e., Vkeep) by removing sensitive information implicitly contained in non-sensitive attributes with an auto-encoder and sensitive classifier (Section 3.2.1). For nodes without features (i.e., Vdrop), FairAC performs attribute completion with an attention mechanism (Section 3.2.2) and meanwhile mitigates the topological unfairness (Section 3.2.3). Finally, the FairAC model trained on Vkeep and Vdrop can be used to infer fair embeddings for nodes in V−. The overall loss function of FairAC is formulated as:
L = LF + LC + βLT , (4)
where LF represents the loss for mitigating feature unfairness, LC is the loss for attribute completion, and LT is the loss for mitigating topological unfairness. β is a trade-off hyperparameter.
3.2.1 MITIGATING FEATURE UNFAIRNESS
The nodes in Vkeep have full attributes X , while some attributes may implicitly encode information about sensitive attributes S and thus lead to unfair predictions. To address this issue, FairAC aims to encode the attributes X⟩ of node i into a fair feature embedding Hi. Specifically, we use a simple autoencoder framework together with a sensitive classifier. The autoencoder maps Xi into embedding Hi, and meanwhile the sensitive classifier Cs is trained in an adversarial way, such that the embeddings are invariant to sensitive attributes.
Autoencoder. The autoencoder contains an encoder fE and a decoder fD. fE encodes the original attributes Xi to feature embeddings Hi, i.e., Hi = fE(Xi), and fD reconstructs attributes from the latent embeddings, i.e., X̂i = fD(Hi), where the reconstructed attributes X̂ should be close to Xi as possible. The loss function of the autoencoder is written as:
Lae = 1 |Vkeep| ∑
i∈Vkeep|
√ (X̂i −Xi)2. (5)
Sensitive classifier. The sensitive classifier Cs is a simple multilayer perceptron (MLP) model. It takes the feature embedding Hi as input and predicts the sensitive attribute ŝi, i.e., ŝi = Cs(Hi). When the sensitive attributes are binary, we can use the binary cross entropy loss to optimize Cs:
LCs = −(1/|Vkeep|) Σ_{i∈Vkeep} [si log ŝi + (1− si) log (1− ŝi)]. (6)
We leverage the sensitive classifier Cs to adversarially train the autoencoder, such that fE is able to generate fair feature embeddings that can fool Cs. The loss LF is written as: LF = Lae − βLCs.
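For illustration, the feature-unfairness module can be sketched in PyTorch as follows. This is a minimal sketch under our own assumptions about layer sizes and the alternating update scheme; it is not the authors' implementation, and beta here plays the role of the adversarial weight in LF = Lae − βLCs.

```python
import torch
import torch.nn as nn

class FairEncoder(nn.Module):
    """Autoencoder (fE, fD) plus an adversarial sensitive classifier Cs."""
    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU(),
                                     nn.Linear(emb_dim, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                                     nn.Linear(emb_dim, in_dim))
        self.sens_clf = nn.Sequential(nn.Linear(emb_dim, emb_dim // 2), nn.ReLU(),
                                      nn.Linear(emb_dim // 2, 1))

    def forward(self, x):
        h = self.encoder(x)                      # H_i = f_E(X_i)
        x_hat = self.decoder(h)                  # X_hat_i = f_D(H_i)
        s_logit = self.sens_clf(h).squeeze(-1)   # s_hat_i = C_s(H_i)
        return h, x_hat, s_logit

def feature_unfairness_losses(model, x, s, beta=1.0):
    """x: attributes of V_keep; s: float 0/1 sensitive labels of V_keep."""
    h, x_hat, s_logit = model(x)
    # Eq. (5), read as a per-node L2 reconstruction error
    l_ae = torch.sqrt(((x_hat - x) ** 2).sum(dim=1) + 1e-8).mean()
    # Eq. (6): binary cross entropy of the sensitive classifier
    l_cs = nn.functional.binary_cross_entropy_with_logits(s_logit, s)
    # L_F for the autoencoder; C_s is updated separately to minimize l_cs
    return l_ae - beta * l_cs, l_cs
```

In the adversarial scheme, the autoencoder minimizes the first returned term while Cs minimizes the second, in alternation.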
3.2.2 COMPLETING NODE EMBEDDINGS VIA ATTENTION MECHANISM
For nodes without attributes (Vdrop), FairAC makes use of topological embeddings and completes the node embeddings Hdrop with an attention mechanism.
Topological embeddings. Recent studies reveal that the topology of graphs has similar semantic information as the attributes (Chen et al., 2020; McPherson et al., 2001; Pei et al., 2020; Zhu et al., 2020). Inspired by this observation, we assume that the nodes’ topological information can reflect the relationship between nodes’ attributes and the attributes of their neighbors. There are a lot of off-the-shelf node topological embedding methods, such as DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016). For simplicity, we adopt the DeepWalk method to extract topological embeddings for nodes in V .
Attention mechanism. For graphs with missing attributes, a commonly used strategy is to use the average attributes of the one-hop neighbors. This strategy works in some cases; however, simply averaging information from neighbors might be biased, as the results might be dominated by some high-degree nodes. In fact, different neighbors should have varying contributions to the aggregation process in the context of fairness. To this end, FairAC adopts an attention mechanism (Vaswani et al., 2017) to learn the influence of different neighbors or edges with the awareness of fairness, and then aggregates attribute information for nodes in Vdrop. Given a pair of nodes (u, v) which are neighbors, the contribution of node v is the attention attu,v, which is defined as: attu,v = Attention(Tu, Tv), where Tu, Tv are the topological embeddings of nodes u and v, respectively. Specifically, we only focus on the neighbor pairs and ignore those node pairs that are not directly connected. Attention(·, ·) denotes the attention between two topological embeddings, i.e., Attention(Tu, Tv) = σ(Tu⊤WTv), where W is the learnable parametric matrix, and σ is an activation function. After we get all the attention scores between one node and its neighbors, we can get the coefficient of each pair by applying the softmax function:
cu,v = softmax(attu,v) = exp(attu,v) / Σ_{s∈Nu} exp(attu,s), (7)
where cu,v is the coefficient of node pair (u, v), and Nu is the set of neighbors of node u. For node u, FairAC calculates its feature embedding Ĥu by the weighted aggregation with multi-head attention:
Ĥu = (1/K) Σ_{k=1}^{K} Σ_{s∈Nu} cu,s Hs, (8)
where K is the number of attention heads. The loss for attribute completion with topological embedding and attention mechanism is formulated as:
LC = (1/|Vdrop|) Σ_{i∈Vdrop} √((Ĥi − Hi)²). (9)
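The completion step itself can be sketched as below, assuming the topological embeddings T and the fair feature embeddings H are already computed and each node's neighbor indices are available; we show a single attention head for brevity (this is our sketch, not the released code).

```python
import torch
import torch.nn as nn

class AttnCompleter(nn.Module):
    """Complete the embedding of an attribute-missing node from its neighbors."""
    def __init__(self, topo_dim):
        super().__init__()
        self.W = nn.Parameter(torch.empty(topo_dim, topo_dim))  # parametric matrix W
        nn.init.xavier_uniform_(self.W)

    def forward(self, u, neighbors, T, H):
        # att_{u,v} = sigma(T_u^T W T_v) for every neighbor v in N_u
        att = torch.tanh(T[neighbors] @ self.W @ T[u])           # shape (|N_u|,)
        coef = torch.softmax(att, dim=0)                         # Eq. (7)
        return (coef.unsqueeze(1) * H[neighbors]).sum(dim=0)     # Eq. (8) with K = 1

def completion_loss(completer, drop_nodes, neighbor_index, T, H):
    # Eq. (9): match completed embeddings to the held-out true embeddings
    errors = []
    for u in drop_nodes:
        h_hat = completer(u, neighbor_index[u], T, H)
        errors.append(torch.sqrt(((h_hat - H[u]) ** 2).sum() + 1e-8))
    return torch.stack(errors).mean()
```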
3.2.3 MITIGATING TOPOLOGICAL UNFAIRNESS
The attribute completion procedure may introduce topological unfairness, since it relies on the assumption that topological information mirrors the relations among attributes. It is possible that the completed feature embeddings of Vdrop would be unfair with respect to sensitive attributes S. To address this issue, FairAC leverages the sensitive classifier Cs to help mitigate topological unfairness by further updating the attention parameter matrix W and thus obtaining fair feature embeddings H. Inspired by (Gong et al., 2020), we expect that the feature embeddings can fool the sensitive classifier Cs into predicting a probability distribution close to the uniform distribution over the sensitive categories, by minimizing the loss:
LT = −(1/|Vdrop|) Σ_{i∈Vdrop} [si log ŝi + (1− si) log (1− ŝi)]. (10)
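One way to realize the "fool Cs towards a uniform prediction" objective during completion is to push the frozen classifier's output on the completed embeddings towards 0.5 (uniform over the two sensitive groups), back-propagating only into the attention parameters; the sketch below reflects this reading and is not necessarily the exact loss used by the authors.

```python
import torch
import torch.nn as nn

def topological_unfairness_step(sens_clf, h_completed):
    """Adversarial term on completed embeddings of V_drop.

    C_s is frozen; only the attention parameters that produced h_completed
    receive gradients, so the completion itself is pushed to hide the
    sensitive attribute rather than the classifier being retrained.
    """
    for p in sens_clf.parameters():
        p.requires_grad_(False)
    prob = torch.sigmoid(sens_clf(h_completed)).squeeze(-1)
    target = torch.full_like(prob, 0.5)   # uniform distribution over two groups
    return nn.functional.binary_cross_entropy(prob, target)
```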
3.3 FAIRAC FOR NODE CLASSIFICATION
The proposed FairAC framework could be viewed as a generic data debiasing approach, which achieves fairness-aware attribute completion and node embedding for graphs with missing attributes. It can be easily integrated with many existing graph neural networks (e.g., GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017)) for tasks like node classification. In this work, we choose the basic GCN model for node classification and assess how FairAC enhances model performance in terms of accuracy and fairness.
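As an illustration of this plug-and-play use, a downstream classifier can consume the completed embeddings in place of raw attributes; the sketch below uses PyTorch Geometric's GCNConv, with hidden sizes and dropout chosen by us rather than taken from the paper.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class DownstreamGCN(nn.Module):
    """Two-layer GCN trained on FairAC's completed, debiased embeddings."""
    def __init__(self, emb_dim=128, hidden=64, num_classes=2, dropout=0.5):
        super().__init__()
        self.conv1 = GCNConv(emb_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)
        self.dropout = nn.Dropout(dropout)

    def forward(self, h, edge_index):
        x = torch.relu(self.conv1(h, edge_index))
        x = self.dropout(x)
        return self.conv2(x, edge_index)   # per-node class logits

# usage: logits = DownstreamGCN()(h_fair, edge_index)
# where h_fair stacks the fair embeddings of all nodes (V+ and V-)
```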
4 EXPERIMENTS
In this section, we evaluate the performance of the proposed FairAC framework on three benchmark datasets in terms of node classification accuracy and fairness w.r.t. sensitive attributes. We compare FairAC with other baseline methods in settings with various sensitive attributes or different attribute missing rates. Ablation studies are also provided and discussed.
4.1 DATASETS AND SETTINGS
Datasets. In the experiments, we use three public graph datasets, NBA, Pokec-z, and Pokec-n. A detailed description is shown in supplementary materials.
Baselines. We compare our FairAC method with the following baseline methods: GCN (Kipf & Welling, 2016), ALFR (Edwards & Storkey, 2015), ALFR-e, Debias (Zhang et al., 2018), Debias-e, FCGE (Bose & Hamilton, 2019), and FairGNN (Dai & Wang, 2021). ALFR-e concatenates the feature embeddings produced by ALFR with topological embeddings learned by DeepWalk (Perozzi et al., 2014). Debias-e similarly concatenates the topological embeddings learned by DeepWalk with feature embeddings learned by Debias. FairGNN is an end-to-end debiasing method which aims to mitigate unfairness in the label prediction task. GCN and FairGNN use the average attribute completion method, while the other baselines use the original complete attributes.
Evaluation Metrics. We evaluate the proposed framework with respect to two aspects: classification performance and fairness performance. For classification, we use accuracy and AUC scores. As for fairness, we adopt ∆SP and ∆EO as evaluation metrics, which are defined as:
∆SP = P(ŷ|s = 0) − P(ŷ|s = 1), (11)
∆EO = P(ŷ = 1|y = 1, s = 0) − P(ŷ = 1|y = 1, s = 1). (12)
The smaller ∆SP and ∆EO are, the fairer the model is. In addition, we use ∆SP + ∆EO as an overall indicator of a model's fairness performance.
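Both metrics can be computed directly from test-set predictions; a small NumPy sketch is given below (binary labels and a binary sensitive attribute are assumed, and the gaps are reported as absolute differences):

```python
import numpy as np

def fairness_metrics(y_true, y_pred, s):
    """Return (delta_SP, delta_EO) for binary predictions y_pred and binary s."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, s))
    # Statistical parity: gap in positive prediction rate between the two groups
    d_sp = abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())
    # Equal opportunity: gap in true positive rate between the two groups
    tpr0 = y_pred[(s == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(s == 1) & (y_true == 1)].mean()
    d_eo = abs(tpr0 - tpr1)
    return d_sp, d_eo
```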
4.2 RESULTS AND ANALYSIS
4.2.1 UNFAIRNESS ISSUES IN GRAPH NEURAL NETWORKS
The results shown in Table 1 reveal several unfairness issues in graph neural networks, which we divide into two categories.
• Feature unfairness. Feature unfairness arises when some non-sensitive attributes can reveal sensitive information; graph neural networks may learn this relation and make unfair predictions. In most cases, ALFR, Debias, and FCGE have better fairness performance than the GCN method. This is expected because the non-sensitive features may contain proxy variables of sensitive attributes, which would lead to biased predictions. Thus, ALFR and Debias, which try to break these connections, are able to mitigate feature unfairness and obtain better fairness performance. These results further confirm the existence of feature unfairness.
• Topological unfairness. Topological unfairness stems from the graph structure; in other words, the edges in a graph, i.e., misrepresentation due to connections (Mehrabi et al., 2021), can introduce topological unfairness. In the experiments, ALFR-e and Debias-e show worse fairness performance than ALFR and Debias, respectively. This indicates that although the graph structure can improve classification performance, it brings topological unfairness as a consequence. The degraded fairness verifies that topological unfairness exists in GNNs and that graph topological information can magnify discrimination.
4.2.2 EFFECTIVENESS OF FAIRAC ON MITIGATING FEATURE AND TOPOLOGICAL UNFAIRNESS
The results of our FairAC method and baselines in terms of the node classification accuracy and fairness metrics on three datasets are shown in Table 1. The best results are shown in bold. Generally speaking, we have the following observations. (1) The proposed method FairAC shows comparable classification performance with the baselines GCN and FairGNN. This suggests that our attribute completion method is able to preserve useful information contained in the original attributes. (2) FairAC outperforms all baselines regarding fairness metrics, especially in ∆SP+∆EO. FairAC outperforms baselines that focus on mitigating feature unfairness, such as ALFR, which indicates that FairAC also mitigates topological unfairness. Besides, it is better than methods that take topological fairness into consideration, such as FCGE, which further validates the effectiveness of FairAC. FairGNN also has good performance on fairness, because it adopts a discriminator to deal with the unfairness issue. Our method performs better than FairGNN in most cases. For example, our FairAC method can significantly improve the performance in terms of the fairness metric ∆SP + ∆EO, i.e., 65%, 87%, and 67% improvement over FairGNN on the NBA, Pokec-z, and Pokec-n datasets, respectively. Overall, the results in Table 1 validate the effectiveness of FairAC in mitigating unfairness issues.
4.3 ABLATION STUDIES
Attribute missing rate. In our proposed framework, the attribute missing rate indicates the integrity of the node attribute matrix, which has a great impact on model performance. Here we investigate the performance of our FairAC method and baselines in dealing with graphs with varying degrees of missing attributes. In particular, we set the attribute missing rate to 0.1, 0.3, 0.5 and 0.8, and evaluate FairAC and baselines on the Pokec-z dataset. The detailed results are presented in Table 2. From the table, we observe that with varying values of α, FairAC is able to maintain high fairness performance. Especially when α reaches 0.8, FairAC can greatly outperform other methods. This shows that FairAC is effective even if the attributes are largely missing.
The effectiveness of adversarial learning. A key module in FairAC is adversarial learning, which is used to mitigate feature unfairness and topological unfairness. To investigate the contribution of adversarial learning in FairAC, we implement a BaseAC model, which only has the attention-based attribute completion module and does not contain the adversarial learning loss terms. Comparing BaseAC with FairAC in Table 2, we find that the fairness performance drops drastically when the adversarial training loss is removed. Since BaseAC does not have an adversarial discriminator to regulate the feature encoder as well as the attribute completion parameters, it is unable to mitigate unfairness. Overall, the results confirm the effectiveness of the adversarial learning module.
Parameter analysis. We investigate how the hyperparameters affect the performance of FairAC. The most important hyperparameter in FairAC is β, which adjusts the trade-off between fairness and attribute completion. We report the results with different hyperparameter values: we set β to 0.2, 0.4, 0.7, 0.8, and 0, where β = 0 is equivalent to BaseAC. We also fix other hyperparameters by setting α to 0.3. As shown in Figure 2, we find that, as β increases, the fairness performance improves while the accuracy of node classification slightly declines. Therefore, it validates our assumption that there is a trade-off between fairness and attribute completion, and that FairAC is able to enhance fairness without compromising too much on accuracy.
5 CONCLUSIONS
In this paper, we presented a novel problem, i.e., fair attribute completion on graphs with missing attributes. To address this problem, we proposed the FairAC framework, which jointly completes the missing features and mitigates unfairness. FairAC leverages the attention mechanism to complete missing attributes and adopts a sensitive classifier to mitigate implicit feature unfairness as well as topological unfairness on graphs. Experimental results on three real-world datasets demonstrate the superiority of the proposed FairAC framework over baselines in terms of both node classification performance and fairness performance. As a generic fair graph attribute completion approach, FairAC can also be used in other graph-based downstream tasks, such as link prediction, graph regression, PageRank, and clustering.
ACKNOWLEDGEMENT
This research is supported by the Cisco Faculty Award and Adobe Data Science Research Award.
A APPENDIX
A.1 DATASETS AND SETTINGS
Datasets. In the experiments, we use three public graph datasets, NBA, Pokec-z, and Pokec-n. The NBA dataset (Dai & Wang, 2021) is extended from a Kaggle dataset containing around 400 NBA basketball players. It provides the performance statistics of those players in the 2016-2017 season and their personal profiles, e.g., nationality, age, and salary. Their relationships are obtained from Twitter. We use their nationality, i.e., whether a player is a U.S. or overseas player, as the sensitive attribute. The node label is binary, indicating whether the salary of the player is over the median or not. Pokec (Takac & Zabovsky, 2012) is an online social network in Slovakia, which contains millions of anonymized user records. It has a variety of attributes, such as gender, age, education, region, etc. Based on the region that users belong to, (Dai & Wang, 2021) sampled two datasets named Pokec-z and Pokec-n. In our experiments, we consider the region or gender as the sensitive attribute, and the working field as the label for node classification. The statistics of the three datasets are summarized in Table 3.
Baselines. We compare our FairAC method with the following baseline methods:
• GCN (Kipf & Welling, 2016) with average attribute completion. GCN is a classical graph neural network model, which has obtained very promising performance in numerous applications. The standard GCN cannot handle graphs with missing attributes. In the experiments, we use the average attribute completion strategy to preprocess the feature matrix, by using the averaged attributes of one’s neighbors to approximate the missing attributes. After average attribute completion, GCN takes the graph with completed feature matrix as inputs to learn node embeddings and predict node labels.
• ALFR (Edwards & Storkey, 2015) with full attributes. This is a pre-processing method. It utilizes a discriminator to remove the sensitive information in the feature embeddings produced by an autoencoder. Since this method needs full sensitive attributes and full features, we provide it with complete information; in other words, the missing rate α is set to 0.
• ALFR-e with full attributes. Based on ALFR, ALFR-e utilizes the topological information. It concatenates the feature embeddings produced by ALFR with topological embeddings learned by DeepWalk (Perozzi et al., 2014). It also relies on complete information.
• Debias (Zhang et al., 2018) with full attributes. This is an in-processing method. It applies a discriminator to the node classifier in order to make the predicted probability distribution the same w.r.t. the sensitive attribute. Since the discriminator needs the full sensitive attributes, we provide full node features.
• Debias-e with full attributes. Similar to ALFR-e, it concatenates the topological embeddings learned by DeepWalk (Perozzi et al., 2014) with the feature embeddings learned by Debias.
• FCGE (Bose & Hamilton, 2019) with full attributes. It learns fair node embeddings in graphs without node features through edge prediction only. A discriminator is also applied to mitigate sensitive information from the topological perspective.
• FairGNN (Dai & Wang, 2021) with average attribute completion. Although FairGNN trains a sensitive attribute discriminator as an adversarial regularizer to enhance fairness, it assumes that node attributes are available, so we use the average attribute completion strategy to fill in the missing attributes before training it.
Implementation Details. Each dataset is randomly split into 75%/25% training/test sets as in (Dai & Wang, 2021). Besides, we randomly drop node attributes based on the attribute missing rate α, which means the attributes of α × |V| nodes will be unavailable. For each dataset, we choose a specific attribute as the sensitive attribute. In particular, region and nationality are selected as the sensitive attributes for the Pokec and NBA datasets, respectively. Unless otherwise specified, we generate 128-dimensional node embeddings, set the attribute missing rate α to 0.3, and set the hyperparameters of FairAC as: β = 1 for the Pokec-z and NBA datasets, and β = 0.5 for the Pokec-n dataset. We adopt Adam (Kingma & Ba, 2014) with a learning rate of 0.001 and weight decay of 1e−5. We adopt the DeepWalk (Perozzi et al., 2014) method to generate a topological embedding for each node. Specifically, we use the DeepWalk implementation provided by the Karate Club library (Rozemberczki et al., 2020), with walk length 100, embedding dimension 64, window size 5, and 10 epochs. To evaluate the fairness of the compared methods, we follow the widely used evaluation protocol in fair graph learning and set a threshold for accuracy, because there is a trade-off between accuracy and fairness. Since we mainly focus on the fairness metrics, we set an accuracy threshold that all methods can satisfy. We evaluated our models three times and report the mean and standard deviation (std). We estimate the std of ∆SP + ∆EO by adding the std of ∆SP and ∆EO, because for some methods we use the reported results from (Dai & Wang, 2021), which does not provide this metric.
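For reference, the topological embeddings can be obtained with a few lines of the Karate Club API; the call below mirrors the settings listed above, though the exact argument names depend on the library version and should be checked against its documentation.

```python
import networkx as nx
from karateclub import DeepWalk

# G_nx: the input graph as a NetworkX object with nodes indexed 0..N-1
G_nx = nx.karate_club_graph()   # placeholder graph for illustration only

model = DeepWalk(walk_length=100, dimensions=64, window_size=5, epochs=10)
model.fit(G_nx)
T = model.get_embedding()       # (N, 64) topological embeddings used by FairAC
```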
A.2 ADDITIONAL EXPERIMENTS
Evaluations on GAT (Veličković et al., 2018) model. As discussed in the main paper, the proposed FairAC method can be easily integrated with existing graph neural networks. Extensive results in Section 4 of the main paper demonstrate that the combination of FairAC and GCN performs very well. In this section, we integrate FairAC with another representative graph neural network model, GAT (Veličković et al., 2018). The results of our method and two main baselines in terms of the node classification accuracy and fairness metrics are shown in Table 4. In these experiments, FairAC generates fair and complete node features, and then GAT is trained for node classification. We also investigate the performance of our FairAC method and baselines on dealing with graphs with varying degrees of missing attributes. We set the attribute missing rate to 0.1, 0.3, 0.5 and 0.7, and evaluate FairAC and baselines on the Pokec-n dataset. In addition, we set β to 1.0. The best results are shown in bold. Generally speaking, we have the following observations. (1). The proposed method FairAC shows comparable classification performance with two baselines, GAT and FairGNN. This suggests that our attribute completion method is able to work well under different downstream models. It further demonstrates that FairAC can preserve useful information implied in the original attributes. (2). FairAC has comparable results with two baselines regarding fairness
metrics. Especially when α is greater than 0.3, FairAC can greatly outperform other methods, which proves that FairAC is effective even if the attributes are largely missing. Overall, the results in Table 4 validate the effectiveness of FairAC in mitigating unfairness issues and show the compatibility with varying downstream models. | 1. What is the focus and contribution of the paper regarding fair attribute completion on graphs?
2. What are the strengths of the proposed FairAC framework, particularly in addressing unfairness?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works and the missing setting?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper considers the problem of fair attribute completion on graphs with missing attributes. The authors proposed the FairAC framework, which jointly completes the missing features and mitigates unfairness. FairAC leverages the attention mechanism to complete missing attributes and adopts a sensitive classifier to mitigate implicit feature unfairness as well as topological unfairness on graphs. Experimental results on three real-world datasets demonstrate the superiority of the proposed FairAC framework over baselines in terms of both node classification performance and fairness performance.
Strengths And Weaknesses
Strength:
The topic of fair attribute completion on graphs with missing attributes is relevant and natural.
The pre-processing idea has the potential to be applied to multiple downstream tasks to improve their fairness performance.
Weaknesses:
The novelty of the problem has not been clarified clearly. The authors mentioned that FairGNN [Dai & Wang (2021)] also involves the missing attribute problem, and claimed that the paper only assumes that a part of the sensitive attributes is missing. However, I don't see many differences between partial missing and full missing for nodes. What makes the full missing setting more challenging? What is the technical difficulty of proposing methods under this setting compared to the partial missing setting? These problems should be discussed more.
The empirical results seem not that powerful to me. The performance of FairAC is quite close to FairGNN, sometimes better and sometimes worse with slight differences.
Clarity, Quality, Novelty And Reproducibility
Clarity: The writing is ok, but needs more comparison with FairGNN. Quality: The empirical results are not that powerful. |
ICLR | Title
Fair Attribute Completion on Graph with Missing Attributes
Abstract
Tackling unfairness in graph learning models is a challenging task, as the unfairness issues on graphs involve both attributes and topological structures. Existing work on fair graph learning simply assumes that attributes of all nodes are available for model training and then makes fair predictions. In practice, however, the attributes of some nodes might not be accessible due to missing data or privacy concerns, which makes fair graph learning even more challenging. In this paper, we propose FairAC, a fair attribute completion method, to complement missing information and learn fair node embeddings for graphs with missing attributes. FairAC adopts an attention mechanism to deal with the attribute missing problem and meanwhile, it mitigates two types of unfairness, i.e., feature unfairness from attributes and topological unfairness due to attribute completion. FairAC can work on various types of homogeneous graphs and generate fair embeddings for them and thus can be applied to most downstream tasks to improve their fairness performance. To our best knowledge, FairAC is the first method that jointly addresses the graph attribution completion and graph unfairness problems. Experimental results on benchmark datasets show that our method achieves better fairness performance with less sacrifice in accuracy, compared with the state-of-the-art methods of fair graph learning. Code is available at: https://github.com/donglgcn/FairAC.
1 INTRODUCTION
Graphs, such as social networks, biomedical networks, and traffic networks, are commonly observed in many real-world applications. A lot of graph-based machine learning methods have been proposed in the past decades, and they have shown promising performance in tasks like node similarity measurement, node classification, graph regression, and community detection. In recent years, graph neural networks (GNNs) have been actively studied (Scarselli et al., 2008; Wu et al., 2020; Jiang et al., 2019; 2020; Zhu et al., 2021c;b;a; Hua et al., 2020; Chu et al., 2021), which can model graphs with high-dimensional attributes in the non-Euclidean space and have achieved great success in many areas such as recommender systems (Sheu et al., 2021). However, it has been observed that many graphs are biased, and thus GNNs trained on the biased graphs may be unfair with respect to certain sensitive attributes such as demographic groups. For example, in a social network, if the users with the same gender have more active connections, the GNNs tend to pay more attention to such gender information and lead to gender bias by recommending more friends to a user with the same gender identity while ignoring other attributes like interests. And from the data privacy perspective, it is possible to infer one’s sensitive information from the results given by GNNs (Sun et al., 2018). In a time when GNNs are widely deployed in the real world, this severe unfairness is unacceptable. Thus, fairness in graph learning emerges and becomes notable very recently.
Existing work on fair graph learning mainly focuses on the pre-processing, in-processing, and post-processing steps in the graph learning pipeline in order to mitigate the unfairness issues. The pre-processing approaches modify the original data to conceal sensitive attributes. Fairwalk (Rahman et al., 2019) is a representative pre-processing method, which gives each group of neighboring nodes an equal chance to be chosen in the sampling process. In many in-processing methods, the most popular way is to add a sensitive discriminator as a constraint, in order to filter out sensitive information from the original data. For example, FairGNN (Dai & Wang, 2021) adopts a sensitive
classifier to filter node embeddings. CFC (Bose & Hamilton, 2019) directly adds a filter layer to deal with unfairness issues. The post-processing methods directly force the final prediction to satisfy fairness constraints, such as (Hardt et al., 2016).
When the graphs have complete node attributes, existing fair graph learning methods could obtain promising performance on both fairness and accuracy. However, in practice, graphs may contain nodes whose attributes are entirely missing due to various reasons (e.g., newly added nodes, and data privacy concerns). Taking social networks as an example, a newly registered user may have incomplete profiles. Given such incomplete graphs, existing fair graph learning methods would fail, as they assume all the nodes have attributes for model training. Although FairGNN (Dai & Wang, 2021) also involves the missing attribute problem, it only assumes that a part of the sensitive attributes are missing. To the best of our knowledge, addressing the unfairness issue on graphs with some nodes whose attributes are entirely missing has not been investigated before. Another relevant topic is graph attribute completion (Jin et al., 2021; Chen et al., 2020). It mainly focuses on completing a precise graph but ignores the unfairness issues. In this work, we aim to jointly complete a graph with missing attributes and mitigate unfairness at both feature and topology levels.
In this paper, we study the new problem of learning fair embeddings for graphs with missing attributes. Specifically, we aim to address two major challenges: (1) how to obtain meaningful node embeddings for graphs with missing attributes, and (2) how to enhance fairness of node embeddings with respect to sensitive attributes. To address these two challenges, we propose a Fair Attribute Completion (FairAC) framework. For the first challenge, we adopt an autoencoder to obtain feature embeddings for nodes with attributes and meanwhile we adopt an attention mechanism to aggregate feature information of nodes with missing attributes from their direct neighbors. Then, we address the second challenge by mitigating two types of unfairness, i.e., feature unfairness and topological unfairness. We adopt a sensitive discriminator to regulate embeddings and create a bias-free graph.
The main contributions of this paper are as follows: (1) We present a new problem of achieving fairness on a graph with missing attributes. Different from the existing work, we assume that the attributes of some nodes are entirely missing. (2) We propose a new framework, FairAC, for fair graph attribute completion, which jointly addresses unfairness issues from the feature and topology perspectives. (3) FairAC is a generic approach to complete fair graph attributes, and thus can be used in many graph-based downstream tasks. (4) Extensive experiments on benchmark datasets demonstrate the effectiveness of FairAC in eliminating unfairness and maintaining comparable accuracy.
2 RELATED WORK
2.1 FAIRNESS IN GRAPH LEARNING
Recent works promote fairness in graph-based machine learning (Bose & Hamilton, 2019; Rahman et al., 2019; Dai & Wang, 2021; Wang et al., 2022). They can be roughly divided into three categories, i.e., pre-processing methods, in-processing methods, and post-processing methods.
The pre-processing methods are applied before training downstream tasks by modifying training data. For instance, Fairwalk (Rahman et al., 2019) improves the sampling procedure of node2vec (Grover & Leskovec, 2016). Our FairAC framework can be viewed as a pre-processing method, as it seeks to complete node attributes and use them as input of graph neural networks. However, our problem is much harder than existing problems, because the attributes of some nodes in the graph are entirely missing, including both the sensitive ones and non-sensitive ones. Given an input graph with missing attributes, FairAC generates fair and complete feature embeddings and thus can be applied to many downstream tasks, such as node classification, link prediction (LibenNowell & Kleinberg, 2007; Taskar et al., 2003), PageRank (Haveliwala, 2003), etc. Graph learning models trained on the refined feature embeddings would make fair predictions in downstream tasks.
There are plenty of fair graph learning methods as in-processing solutions. Some works focus on dealing with unfairness issues on graphs with complete features. For example, GEAR (Ma et al., 2022) mitigates graph unfairness by counterfactual graph augmentation and an adversarial learning method that learns sensitive-invariant embeddings. However, in order to generate counterfactual subgraphs, it needs precise and complete features for every node. In other words, it cannot work well on a graph in which some nodes have entirely missing attributes, since it cannot generate a counterfactual subgraph based on a blank node, whereas our method can handle this situation. The most related work is FairGNN (Dai & Wang, 2021), whose problem setting differs from the majority of work on graph fairness: it learns fair GNNs for node classification in a graph where only a limited number of nodes are provided with sensitive attributes. FairGNN adopts a sensitive classifier to predict the missing sensitive labels and then employs a classic adversarial model to mitigate unfairness. Specifically, a sensitive discriminator aims to predict the known or estimated sensitive attributes, while a GNN model tries to fool the sensitive discriminator and meanwhile predicts node labels. However, FairGNN cannot predict sensitive information if a node misses all features in the first place and thus fails to achieve its final goal. Our FairAC avoids this problem because it recovers node embeddings from neighbors: FairAC learns attention weights between neighbors from the nodes with full attributes, so the embeddings of attribute-missing nodes can be recovered by aggregating the embeddings of their neighbors. With the help of adversarial learning, it can also remove sensitive information. In addition to attribute completion, we have also designed novel de-biasing strategies to mitigate feature unfairness and topological unfairness.
2.2 ATTRIBUTION COMPLETION ON GRAPHS
The problem of missing attributes is ubiquitous in reality. Several methods (Liao et al., 2016; You et al., 2020; Chen et al., 2020; He et al., 2022; Jin et al., 2021; 2022; Tu et al., 2022; Taguchi et al., 2021) have been proposed to address this problem. GRAPE (You et al., 2020) tackles the problem of missing attributes in tabular data using a graph-based approach. SAT (Chen et al., 2020) assumes that the topology representation and attributes share a common latent space, and thus the missing attributes can be recovered by aligning the paired latent space. He et al. (2022) and Jin et al. (2021) extend such problem settings to heterogeneous graphs. HGNN-AC (Jin et al., 2021) is an end-to-end model, which does not recover the original attributes but generates attribute representations that have sufficient information for the final prediction task. It is worth noting that existing methods on graph attribute completion only focus on the attribute completion accuracy or performance of downstream tasks, but none of them takes fairness into consideration. Instead, our work pays attention to the unfairness issue in graph learning, and we aim to generate fair feature embeddings for each node by attribute completion, which contain the majority of information inherited from original attributes but disentangle the sensitive information.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
Let G = (V, E, X) denote an undirected graph, where V = {v1, v2, ..., vN} is the set of N nodes, E ⊆ V × V is the set of undirected edges in the graph, X ∈ RN×D is the node attribute matrix, and D is the dimension of attributes. A ∈ RN×N is the adjacency matrix of the graph G, where Aij = 1 if nodes vi and vj are connected; otherwise, Aij = 0.
Algorithm 1 FairAC framework algorithm
Input: G = (V, E, X), S
Output: Autoencoder fAE, Sensitive classifier Cs, Attribute completion fAC
1: Obtain topological embeddings T with DeepWalk
2: repeat
3:   Obtain the feature embeddings H with fAE
4:   Optimize Cs by Equation 6
5:   Optimize fAE to mitigate feature unfairness by loss LF
6:   Divide V+ into Vkeep and Vdrop based on α
7:   Obtain the feature embeddings of nodes with missing attributes Vdrop by fAC
8:   Optimize fAC to achieve attribute completion by loss LC
9:   Optimize fAC to mitigate topological unfairness by loss LT
10: until convergence
11: return fAE, Cs, fAC
In addition, S = {s1, s2, ..., sN} denotes a set of sensitive attributes (e.g., age or gender) of the N nodes, and Y = {y1, y2, ..., yN} denotes the node labels. The goal of fair graph learning is to make fair predictions of node labels with respect to the sensitive attribute, which is usually measured by certain fairness notions like statistical parity (Dwork et al., 2012) and equal opportunity (Hardt et al., 2016). Statistical Parity and Equal Opportunity are two group fairness definitions; their detailed formulations are presented below. The label y denotes the ground-truth node label, and the sensitive attribute s indicates one's sensitive group. For example, for a binary node classification task, y only has two labels. Here we consider two sensitive groups, i.e., s ∈ {0, 1}.
• Statistical Parity (Dwork et al., 2012). It refers to the equal acceptance rate, which can be formulated as:
P (ŷ|s = 0) = P (ŷ|s = 1), (1) where P (·) denotes the probability that · occurs.
• Equal Opportunity (Hardt et al., 2016). It means that the probability of a node in the positive class being classified as positive should be equal for nodes in both sensitive groups. It mathematically requires an equal true positive rate for each subgroup.
P (ŷ = 1|y = 1, s = 0) = P (ŷ = 1|y = 1, s = 1). (2)
In this work, we mainly focus on addressing unfairness issues on graphs with missing attributes, i.e., the attributes of some nodes are totally missing. Let V+ denote the set of nodes whose attributes are available, and V− denote the set of nodes whose attributes are missing, with V = {V+, V−}. If vi ∈ V−, both Xi and si are unavailable during model training. With the notations given above, the fair attribute completion problem is formally defined as:
Problem 1. Given a graph G = (V, E, X), where the node subset V+ ⊆ V has its attributes available together with the corresponding sensitive attributes in S, learn a fair attribute completion model to generate fair feature embeddings H for each node in V, i.e.,
f(G, S) → H, (3)
where f is the function we aim to learn. H should exclude any sensitive information while preserving non-sensitive information.
3.2 FAIR ATTRIBUTE COMPLETION (FAIRAC) FRAMEWORK
We propose a fair attribute completion (FairAC) framework to address Problem 1. Existing fair graph learning methods tackle unfairness issues by training fair graph neural networks in an endto-end fashion, but they cannot effectively handle graphs that are severely biased due to missing attributes. Our FairAC framework, as a data-centric approach, deals with the unfairness issue from a new perspective, by explicitly debiasing the graph with feature unfairness mitigation and fairnessaware attribute completion. Eventually, FairAC generates fair embeddings for all nodes including the ones without any attributes. The training algorithms are shown in Algorithm 1.
To train the graph attribute completion model, we follow the setting in (Jin et al., 2021) and divide the nodes with attributes (i.e., V+) into two sets: Vkeep and Vdrop. For nodes in Vkeep, we keep their attributes, while for nodes in Vdrop, we temporally drop their attributes and try to recover them using our attribute completion model. Although the nodes are randomly assigned to Vkeep and Vdrop, the proportion of Vdrop is consistent with the attribute missing rate α of graph G, i.e., α = |V
−| |V| = |Vdrop| |V+| .
Different from existing work on fair graph learning, we consider unfairness from two sources. The first one is from node features. For example, we can roughly infer one’s sensitive information, like gender, from some non-sensitive attributes like hobbies. It means that non-sensitive attributes may imply sensitive attributes and thus lead to unfairness in model prediction. We adopt a sensitive discriminator to mitigate feature unfairness. The other source is topological unfairness introduced by graph topological embeddings and node attribute completion. To deal with the topological unfairness, we force the estimated feature embeddings to fool the sensitive discriminator, by updating attention parameters during the attribute completion process.
As illustrated in Figure 1, our FairAC framework first mitigates feature unfairness for nodes with attributes (i.e., Vkeep) by removing sensitive information implicitly contained in non-sensitive attributes with an auto-encoder and sensitive classifier (Section 3.2.1). For nodes without features (i.e., Vdrop), FairAC performs attribute completion with an attention mechanism (Section 3.2.2) and meanwhile mitigates the topological unfairness (Section 3.2.3). Finally, the FairAC model trained on Vkeep and Vdrop can be used to infer fair embeddings for nodes in V−. The overall loss function of FairAC is formulated as:
L = LF + LC + βLT , (4)
where LF represents the loss for mitigating feature unfairness, LC is the loss for attribute completion, and LT is the loss for mitigating topological unfairness. β is a trade-off hyperparameter.
3.2.1 MITIGATING FEATURE UNFAIRNESS
The nodes in Vkeep have full attributes X , while some attributes may implicitly encode information about sensitive attributes S and thus lead to unfair predictions. To address this issue, FairAC aims to encode the attributes X⟩ of node i into a fair feature embedding Hi. Specifically, we use a simple autoencoder framework together with a sensitive classifier. The autoencoder maps Xi into embedding Hi, and meanwhile the sensitive classifier Cs is trained in an adversarial way, such that the embeddings are invariant to sensitive attributes.
Autoencoder. The autoencoder contains an encoder fE and a decoder fD. fE encodes the original attributes Xi to feature embeddings Hi, i.e., Hi = fE(Xi), and fD reconstructs attributes from the latent embeddings, i.e., X̂i = fD(Hi), where the reconstructed attributes X̂ should be close to Xi as possible. The loss function of the autoencoder is written as:
Lae = 1 |Vkeep| ∑
i∈Vkeep|
√ (X̂i −Xi)2. (5)
Sensitive classifier The sensitive classifier Cs is a simple multilayer perceptron (MLP) model. It takes the feature embedding Hi as input and predicts the sensitive attribute ŝi, i.e., ŝi = Cs(Hi). When the sensitive attributes are binary, we can use the binary cross entropy loss to optimize Cs:
LCs = − 1 |Vkeep| ∑
i∈Vkeep
si log ŝi + (1− si) log (1− ŝi). (6)
With the sensitive classifier Cs, we could leverage it to adversarially train the autoencoder, such that fE is able to generate fair feature embeddings that can fool Cs. The loss LF is written as: LF = Lae − βLCs .
3.2.2 COMPLETING NODE EMBEDDINGS VIA ATTENTION MECHANISM
For nodes without attributes (Vdrop), FairAC makes use of topological embeddings and completes the node embeddings Hdrop with an attention mechanism.
Topological embeddings. Recent studies reveal that the topology of graphs has similar semantic information as the attributes (Chen et al., 2020; McPherson et al., 2001; Pei et al., 2020; Zhu et al., 2020). Inspired by this observation, we assume that the nodes’ topological information can reflect the relationship between nodes’ attributes and the attributes of their neighbors. There are a lot of off-the-shelf node topological embedding methods, such as DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016). For simplicity, we adopt the DeepWalk method to extract topological embeddings for nodes in V .
Attention mechanism. For graphs with missing attributes, a commonly used strategy is to use average attributes of the one-hop neighbors. This strategy works in some cases, however, simply averaging information from neighbors might be biased, as the results might be dominated by some high-degree nodes. In fact, different neighbors should have varying contributions to the aggregation process in the context of fairness. To this end, FairAC adopts an attention mechanism (Vaswani et al., 2017) to learn the influence of different neighbors or edges with the awareness of fairness, and then aggregates attributes information for nodes in Vdrop. Given a pair of nodes (u, v) which are neighbors, the contribution of node v is the attention attu,v , which is defined as: attu,v = Attention(Tu, Tv), where Tu, Tv are the topological embeddings of nodes u and v, respectively. Specifically, we only focus on the neighbor pairs and ignore those node pairs that are not directly connected. Attention(·, ·) denotes the attention between two topological embeddings, i.e., Attention(Tu, Tv) = σ(TTu WTv), where W is the learnable parametric matrix, and σ is an activation function. After we get all the attention scores between one node and its neighbors, we can get the coefficient of each pair by applying the softmax function:
cu,v = softmax(attu,v) = exp(attu,v)∑
s∈Nu exp(attu,s) , (7)
where cu,v is the coefficient of node pair (u, v), and Nu is the set of neighbors of node u. For node u, FairAC calculates its feature embedding Ĥu by the weighted aggregation with multi-head attention:
Ĥu = 1
K K∑ k=1 ∑ s∈Nu cu,sHs, (8)
where K is the number of attention heads. The loss for attribute completion with topological embedding and attention mechanism is formulated as:
LC = 1 |Vdrop| ∑
i∈Vdrop|
√ (Ĥi −Hi)2. (9)
3.2.3 MITIGATING TOPOLOGICAL UNFAIRNESS
The attribute completion procedure may introduce topological unfairness since we assume that topology information is similar to attributes relation. It is possible that the completed feature embeddings of Vdrop would be unfair with respect to sensitive attributes S. To address this issue, FairAC leverages sensitive classifier Cs to help mitigate topological unfairness by further updating the attention parameter matrix W and thus obtaining fair feature embeddings H. Inspired by (Gong et al., 2020), we expect that the feature embeddings can fool the sensitive classifier Cs to predict the probability distribution close to the uniform distribution over the sensitive category, by minimizing the loss:
LT = − 1 |Vdrop| ∑
i∈Vdrop
si log ŝi + (1− si) log (1− ŝi). (10)
3.3 FAIRAC FOR NODE CLASSIFICATION
The proposed FairAC framework could be viewed as a generic data debiasing approach, which achieves fairness-aware attribute completion and node embedding for graphs with missing attributes. It can be easily integrated with many existing graph neural networks (e.g., GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017)) for tasks like node classification. In this work, we choose the basic GCN model for node classification and assess how FairAC enhances model performance in terms of accuracy and fairness.
4 EXPERIMENTS
In this section, we evaluate the performance of the proposed FairAC framework on three benchmark datasets in terms of node classification accuracy and fairness w.r.t. sensitive attributes. We compare FairAC with other baseline methods in settings with various sensitive attributes or different attribute missing rates. Ablation studies are also provided and discussed.
4.1 DATASETS AND SETTINGS
Datasets. In the experiments, we use three public graph datasets, NBA, Pokec-z, and Pokec-n. A detailed description is shown in supplementary materials.
Baselines. We compare our FairAC method with the following baseline methods: GCN (Kipf & Welling, 2016), ALFR (Edwards & Storkey, 2015), ALFR-e, Debias (Zhang et al., 2018), Debiase, FCGE (Bose & Hamilton, 2019), and FairGNN (Dai & Wang, 2021). ALFR-e concatenates the feature embeddings produced by ALFR with topological embeddings learned by DeepWalk (Perozzi et al., 2014). Debias-e also concatenates the topological embeddings learned by DeepWalk with feature embeddings learned by Debias. FairGNN is an end-to-end debias method which aims to mitigate unfairness in label prediction task. GCN and FairGNN uses the average attribute completion method, while other baselines use original complete attributes.
Evaluation Metrics. We evaluate the proposed framework with respect to two aspects: classification performance and fairness performance. For classification, we use accuracy and AUC scores.As for fairness, we adopt ∆SP and ∆EO as evaluation metrics, which can be defined as:
∆SP = P (ŷ|s = 0)− P (ŷ|s = 1), (11)
∆EO = P (ŷ = 1|y = 1, s = 0)− P (ŷ = 1|y = 1, s = 1). (12) The smaller ∆SP and ∆EO are, the more fair the model is. In addition, we use ∆SP+∆EO as an overall indicator of a model’s performance on fairness.
4.2 RESULTS AND ANALYSIS
4.2.1 UNFAIRNESS ISSUES IN GRAPH NEURAL NETWORKS
According to the results showed in Table 1, they reveal several unfairness issues in Graph Neural Networks. We divided them into two categories.
• Feature unfairness Feature unfairness is that some non-sensitive attributes could infer sensitive information. Hence, some Graph Neural Networks may learn this relation and make unfair prediction. In most cases, ALFR and Debias and FCGE have better fairness performance than GCN method. It is as expected because the non-sensitive features may contain proxy variables of sensitive attributes which would lead to biased prediction. Thus, ALFR and Debias methods that try to break up these connections are able to mitigate feature unfairness and obtain better fairness performance. These results further prove the existence of feature unfairness.
• Topological unfairness Topological unfairness is sourced from graph structure. In other words, edges in graph, i.e. the misrepresentation due to the connection(Mehrabi et al., 2021) can bring topological unfairness. From the experiments, ALFR-e and Debias-e have worse fairness performance than ALFR and Debias, respectively. It shows that although graph structure can improve the classification performance, it will bring topological unfairness consequently. The worse performance on fairness verifies that topological unfairness exists in GNNs and graph topological information could magnify the discrimination.
4.2.2 EFFECTIVENESS OF FAIRAC ON MITIGATING FEATURE AND TOPOLOGICAL UNFAIRNESS
The results of our FairAC method and baselines in terms of the node classification accuracy and fairness metrics on three datasets are shown in Table 1. The best results are shown in bold. Generally speaking, we have the following observations. (1). The proposed method FairAC shows comparable classification performance with these baselines, GCN and FairGNN. This suggests that our attribute completion method is able to preserve useful information contained in the original attributes. (2).
FairAC outperforms all baselines regarding fairness metrics, especially in ∆SP+∆EO. FairAC outperform baselines that focus on mitigate feature fairness, like ALFR, which proves that FairAC also mitigate topological unfairness. Besides, it is better than those who take topological fairness into consideration, like FCGE, which also validates the effectiveness of FairAC. FairGNN also has good performance on fairness, because it adopts a discriminator to deal with the unfairness issue. Our method performs better than FairGNN in most cases. For example, our FairAC method can significantly improve the performance in terms of the fairness metric ∆SP +∆EO, i.e., 65%, 87%, and 67% improvement over FairGNN on the NBA, pokec-z, pokec-n datasets, respectively. Overall, the results in Table 1 validate the effectiveness of FairAC in mitigating unfairness issues.
4.3 ABLATION STUDIES
Attribute missing rate In our proposed framework, the attribute missing rate indicates the integrity of node attribute matrix, which has a great impact on model performance. Here we investigate the performance of our FairAC method and baselines on dealing with graphs with varying degrees of missing attributes. In particular, we set the attribute missing rate to 0.1, 0.3, 0.5 and 0.8, and evaluate FairAC and baselines on the pokec-z dataset. The detailed results are presented in Table 2. From the table, we have the following observation that with varying values of α, FairAC is able to maintain its high fairness performance. Especially when α reaches 0.8, FairAC can greatly outperform other methods. It proves that FairAC is effective even if the attributes are largely missing.
The effectiveness of adversarial learning A key module in FairAC is adversarial learning, which is used to mitigate feature unfairness and topological unfairness. To investigate the contribution of adversarial learning in FairAC, we implement a BaseAC model, which only has the attention-based attribute completion module, but does not contain the adversarial learning loss terms. Comparing BaseAC with FairAC in Table 2, we can find that the fairness performance drops desperately when the adversarial training loss is removed. Since BaseAC does not have an adversarial discriminator to regulate feature encoder as well as attribute completion parameters, it is unable to mitigate unfairness. Overall, the results confirm the effectiveness of the adversarial learning module.
Parameter analysis We investigate how the hyperparameters affect the performance of FairAC. The most important hyperparameter in FairAC is β, which adjusts the trade-off between fairness and attribute completion. We report the results with different hyperparameter values. We set β to 0.2, 0.4, 0.7, 0.8 and 0 that is equivalent to the BaseAC. We also fix other hyperparameters by setting α to 0.3. As shown in Figure 2, we can find that, as β increases, the fairness performance improves while the accuracy of node classification slightly declined. Therefore, it validates our assumption that there is a tradeoff between fairness and attribute completion, and our FairAC is able to enhance fairness without compromising too much on accuracy.
5 CONCLUSIONS
In this paper, we presented a novel problem, i.e., fair attribute completion on graphs with missing attributes. To address this problem, we proposed the FairAC framework, which jointly completes the missing features and mitigates unfairness. FairAC leverages the attention mechanism to complete missing attributes and adopts a sensitive classifier to mitigate implicit feature unfairness as well as topological unfairness on graphs. Experimental results on three real-world datasets demonstrate the superiority of the proposed FairAC framework over baselines in terms of both node classification performance and fairness performance. As a generic fair graph attributes completion approach, FairAC can also be used in other graph-based downstream tasks, such as link prediction, graph regression, pagerank, and clustering.
ACKNOWLEDGEMENT
This research is supported by the Cisco Faculty Award and Adobe Data Science Research Award.
A APPENDIX
A.1 DATASETS AND SETTINGS
Datasets. In the experiments, we use three public graph datasets, NBA, Pokec-z, and Pokec-n. The detailed explanation is shown in supplementary materials. The NBA dataset (Dai & Wang, 2021) is extended from a Kaggle dataset containing around 400 NBA basketball players. It provides the performance statistics of those players in the 2016-2017 season and their personal profiles, e.g., nationality, age, and salary. Their relationships are obtained from Twitter. We use their nationality, whether one is U.S. player or oversea player, as the sensitive attribute. The node label is binary, indicating whether the salary of the player is over median or not. Pokec (Takac & Zabovsky, 2012) is an online social network in Slovakia, which contains millions of anonymized data of users. It has a variety of attributes, such as gender, age, education, region, etc. Based on the region where users belong to, (Dai & Wang, 2021) sampled two datasets named as: Pokec-z and Pokec-n. In our experiments, we consider the region or gender as sensitive attribute, and working field as label for node classification. The statistics of three datasets are summarized in supplementary materials. The statistics of three datasets are summarized in Table 3.
Baselines. We compare our FairAC method with the following baseline methods:
• GCN (Kipf & Welling, 2016) with average attribute completion. GCN is a classical graph neural network model, which has obtained very promising performance in numerous applications. The standard GCN cannot handle graphs with missing attributes. In the experiments, we use the average attribute completion strategy to preprocess the feature matrix, by using the averaged attributes of one’s neighbors to approximate the missing attributes. After average attribute completion, GCN takes the graph with completed feature matrix as inputs to learn node embeddings and predict node labels.
• ALFR (Edwards & Storkey, 2015) with full attributes. This is a pre-processing method. It utilize a discriminator to remove the sensitive feature information in feature embeddings produced by an Autoencoder. Since this method need full sensitive attributes and full features, we give them complete information. In other words, the missing rate α is set to 0.
• ALFR-e with full attributes. Based on ALFR, ALFR-e utilize the topological information. It concatenates the feature embeddings produced by ALFR with topological embeddings learned by DeepWalk (Perozzi et al., 2014). It also relys on complete information.
• Debias (Zhang et al., 2018) with full attributes. This is an in-processing method. It applies a discriminator on node classifier in order to make the probability distribution be the same w.r.t. sensitive attribute. Since the discriminator needs the full sensitive attributes, we provide full node features.
• Debias-e with full attributes. Similar to ALFR-e. It also concatenates the topological embeddings learned by DeepWalk (Perozzi et al., 2014) with feature embeddings learned by Debias.
• FCGE (Bose & Hamilton, 2019) with full attributes. It learns fair node embeddings in graph without node features through edge prediction only. An discriminator is also applied to mitigate sensitive information in topological perspective.
• FairGNN (Dai & Wang, 2021) with average attribute completion. Although FairGNN trains a sensitive attribute discriminator as an adversarial regularizer to enhance the fairness
Implementation Details. Each dataset is randomly split into 75%/25% training/test set as (Dai & Wang, 2021). Besides, we randomly drop node attributes based on the attribute missing rate, α, which means the attributes of α × |V| nodes will be unavailable. For each datasets, we choose a specific attribute as the sensitive attribute. In particular, region, and nation are selected as the sensitive attribute for the pokec, and nba datasets, respectively. Unless otherwise specified, we generate 128-dimension node embeddings and set the attribute missing rate α to 0.3, and set the hyperparameters of FairAC as: β = 1 for pokec-z and nba datasets, and β = 0.5 for pokec-n dataset. We adopt Adam (Kingma & Ba, 2014) with the learning rate of 0.001 and weight decay as 1e − 5. We adopt the DeepWalk (Perozzi et al., 2014) method to generate topological embedding for each node. Specifically, we use the DeepWalk implementation provided by the Karate Club library (Rozemberczki et al., 2020). We set walk length as 100, embedding dimension as 64, window size as 5, and epochs as 10. To evaluate fairness of compared methods, we follow the widely used evaluation protocol in fair graph learning and set a threshold for accuracy, because there is a trade-off between accuracy and fairness. Since we mainly focus on the fairness metric, we set the accuracy threshold that all methods can satisfy. we evaluated our models three times and calculated the mean and standard deviation(std). We estimate the std of ∆SP +∆EO by adding std of ∆SP and ∆EO, because for some methods, we use the reported data from (Dai & Wang, 2021) which does not provide the metric.
A.2 ADDITIONAL EXPERIMENTS
Evaluations on the GAT (Veličković et al., 2018) model. As discussed in the main paper, the proposed FairAC method can be easily integrated with existing graph neural networks. Extensive results in Section 4 of the main paper demonstrate that the combination of FairAC and GCN performs very well. In this section, we integrate FairAC with another representative graph neural network model, GAT (Veličković et al., 2018). The results of our method and the two main baselines in terms of node classification accuracy and fairness metrics are shown in Table 4. In these experiments, FairAC generates fair and complete node features, and then GAT is trained for node classification. We also investigate the performance of FairAC and the baselines on graphs with varying degrees of missing attributes: we set the attribute missing rate to 0.1, 0.3, 0.5 and 0.7, and evaluate FairAC and the baselines on the Pokec-n dataset. In addition, we set β to 1.0. The best results are shown in bold. Generally speaking, we have the following observations. (1) FairAC shows classification performance comparable to the two baselines, GAT and FairGNN, which suggests that our attribute completion method works well under different downstream models and further demonstrates that FairAC preserves useful information implied in the original attributes. (2) FairAC achieves comparable results with the two baselines regarding fairness
metrics; especially when α is greater than 0.3, FairAC greatly outperforms the other methods, which shows that FairAC remains effective even when a large fraction of the attributes is missing. Overall, the results in Table 4 validate the effectiveness of FairAC in mitigating unfairness and show its compatibility with varying downstream models. | 1. What is the focus of the paper regarding graph embedding?
2. What are the strengths and weaknesses of the proposed method in addressing the issue of fairness on attribute-missing graphs?
3. Are there any concerns regarding the technique contributions and their combination in the proposed approach?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Do you have any questions regarding the paper's experiments and ablation studies? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this paper, the authors study an important graph embedding issue, i.e., fairness on attribute-missing graphs. They propose a new fair attribute completion method to solve it by considering both feature and topological unfairness. The experiments show comparable results of the proposed method on some benchmarks.
Strengths And Weaknesses
Strengths
The topic is interesting and important for the graph representation learning community.
This work introduces a valuable learning problem, i.e., fairness on a graph with missing attributes. The paper is well-organized and easy to follow.
The idea is new, and the proposed algorithm is rational.
Weaknesses:
Some works [1-4] are addressed neither in the related work section nor in the experimental evaluation. In my opinion, the authors should compare their solution with these algorithms, at least in the related work section.
[1] Amer: A New Attribute-Missing Network Embedding Approach. TCYB, 2022.
[2] Initializing Then Refining: A Simple Graph Attribute Imputation Network. IJCAI, 2022.
[3] Graph Convolutional Networks for Graphs Containing Missing Features. FGCS, 2021.
[4] Heterogeneous Graph Neural Network via Attribute Completion. WWW, 2021.
Although this paper is well-motivated, the technical contributions seem a bit weak. The techniques used, such as the attention mechanism, adversarial learning, and auto-encoders, have been widely researched in graph domains. I wonder why combining these components contributes more to the performance.
According to the results from the ablation studies, the effectiveness of the proposed components seems unclear. For example, Table 2 shows no difference in performance on some datasets.
Please explain whether the metric is for one experiment or averaged over randomized trials, e.g., with randomized initialization. The results in Table 1 and Table 2 would be stronger if the reported metrics were mean/std.
I would like the authors to discuss, in theory, the differences and connections between the proposed model and existing graph fairness methods.
Clarity, Quality, Novelty And Reproducibility
Clarity: Good. The paper is well organized, but the presentation has minor details that could be improved.
Quality: Good. The paper appears to be technically sound. The proofs, if applicable, appear to be correct, but I have not carefully checked the details. The experimental evaluation, if applicable, is adequate, and the results convincingly support the main claims.
Novelty: Fair. The paper contributes some new ideas or represents incremental advances.
Reproducibility: Good. Key resources (e.g., proofs, code, data) are available. |
ICLR | Title
Surprise-Based Intrinsic Motivation for Deep Reinforcement Learning
Abstract
Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as ε-greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent’s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.
1 INTRODUCTION
A reinforcement learning agent uses experiences obtained from interacting with an unknown environment to learn behavior that maximizes a reward signal. The optimality of the learned behavior is strongly dependent on how the agent approaches the exploration/exploitation trade-off in that environment. If it explores poorly or too little, it may never find rewards from which to learn, and its behavior will always remain suboptimal; if it does find rewards but exploits them too intensely, it may wind up prematurely converging to suboptimal behaviors, and fail to discover more rewarding opportunities. Although substantial theoretical work has been done on optimal exploration strategies for environments with finite state and action spaces, we are here concerned with problems that have continuous state and/or action spaces, where algorithms with theoretical guarantees admit no obvious generalization or are prohibitively impractical to implement.
Simple heuristic methods of exploring such as ε-greedy action selection and Gaussian control noise have been successful on a wide range of tasks, but are inadequate when rewards are especially sparse. For example, the Deep Q-Network approach of Mnih et al. [13] used ε-greedy exploration in training deep neural networks to play Atari games directly from raw pixels. On many games, the algorithm resulted in superhuman play; however, on games like Montezuma’s Revenge, where rewards are extremely sparse, DQN (and its variants [25], [26], [15], [12]) with ε-greedy exploration failed to achieve scores even at the level of a novice human. Similarly, in benchmarking deep reinforcement learning for continuous control, Duan et al. [5] found that policy optimization algorithms that explored by acting according to the current stochastic policy, including REINFORCE and Trust Region Policy Optimization (TRPO), could succeed across a diverse slate of simulated robotics control tasks with well-defined, non-sparse reward signals (like rewards proportional to the forward velocity of the robot). Yet, when tested in environments with sparse rewards—where the agent would only be able to attain rewards after first figuring out complex motion primitives without reinforcement—every algorithm failed to attain scores better than random agents. The failure modes in all of these cases pertained to the nature of the exploration: the agents encountered reward signals so infrequently that they were never able to learn reward-seeking behavior.
One approach to encourage better exploration is via intrinsic motivation, where an agent has a task-independent, often information-theoretic intrinsic reward function which it seeks to maximize in addition to the reward from the environment. Examples of intrinsic motivation include empowerment, where the agent enjoys the level of control it has over its future; surprise, where the agent is excited to see outcomes that run contrary to its understanding of the world; and novelty, where the agent is excited to see new states (which is tightly connected to surprise, as shown in [2]). For in-depth reviews of the different types of intrinsic motivation, we direct the reader to [1] and [17].
Recently, several applications of intrinsic motivation to the deep reinforcement learning setting (such as [2], [7], [22]) have found promising success. In this work, we build on that success by exploring scalable measures of surprise for intrinsic motivation in deep reinforcement learning. We formulate surprise as the KL-divergence of the true transition probability distribution from a transition model which is learned concurrently with the policy, and consider two approximations to this divergence which are easy to compute in practice. One of these approximations results in using the surprisal of a transition as an intrinsic reward; the other results in using a measure of learning progress which is closer to a Bayesian concept of surprise. Our contributions are as follows:
1. we investigate surprisal and learning progress as intrinsic rewards across a wide range of environments in the deep reinforcement learning setting, and demonstrate empirically that the incentives (especially surprisal) result in efficient exploration,
2. we evaluate the difficulty of the slate of sparse reward continuous control tasks introduced by Houthooft et al. [7] to benchmark exploration incentives, and introduce a new task to complement the slate,
3. and we present an efficient method for learning the dynamics model (transition probabilities) concurrently with a policy.
We distinguish our work from prior work in a number of implementation details: unlike Bellemare et al. [2], we learn a transition model as opposed to a state-action occupancy density; unlike Stadie et al. [22], our formulation naturally encompasses environments with stochastic dynamics; unlike Houthooft et al. [7], we avoid the overhead of maintaining a distribution over possible dynamics models, and learn a single deep dynamics model.
In our empirical evaluations, we compare the performance of our proposed intrinsic rewards with other heuristic intrinsic reward schemes and to recent results from the literature. In particular, we compare to Variational Information Maximizing Exploration (VIME) [7], a method which approximately maximizes Bayesian surprise and currently achieves state-of-the-art performance on continuous control with sparse rewards. We show that our incentives can perform on the level of VIME at a lower computational cost.
2 PRELIMINARIES
We begin by introducing notation which we will use throughout the paper. A Markov decision process (MDP) is a tuple, (S,A,R, P, µ), where S is the set of states, A is the set of actions, R : S × A × S → R is the reward function, P : S × A × S → [0, 1] is the transition probability function (where P (s′|s, a) is the probability of transitioning to state s′ given that the previous state was s and the agent took action a in s), and µ : S → [0, 1] is the starting state distribution. A policy π : S × A → [0, 1] is a distribution over actions per state, with π(a|s) the probability of selecting a in state s. We aim to select a policy π which maximizes a performance measure, L(π), which usually takes the form of expected finite-horizon total return (sum of rewards in a fixed time period), or expected infinite-horizon discounted total return (discounted sum of all rewards forever). In this paper, we use the finite-horizon total return formulation.
3 SURPRISE INCENTIVES
To train an agent with surprise-based exploration, we alternate between making an update step to a dynamics model (an approximator of the MDP’s transition probability function), and making a policy update step that maximizes a trade-off between policy performance and a surprise measure.
The dynamics model step makes progress on the optimization problem
$$\min_{\phi}\; -\frac{1}{|D|}\sum_{(s,a,s')\in D}\log P_{\phi}(s'|s,a) \;+\; \alpha f(\phi), \qquad (1)$$
where D is a dataset of transition tuples from the environment, Pφ is the model we are learning, f is a regularization function, and α > 0 is a regularization trade-off coefficient. The policy update step makes progress on an approximation to the optimization problem
$$\max_{\pi}\; L(\pi) + \eta\, \mathbb{E}_{s,a\sim\pi}\big[ D_{KL}(P \,\|\, P_{\phi})[s,a] \big], \qquad (2)$$
where η > 0 is an explore-exploit trade-off coefficient. The exploration incentive in (2), which we select to be the on-policy average KL-divergence of Pφ from P , is intended to capture the agent’s surprise about its experience. The dynamics model Pφ should only be close to P on regions of the transition state space that the agent has already visited (because those transitions will appear in D and thus the model will be fit to them), and as a result, the KL divergence of Pφ and P will be higher in unfamiliar places. Essentially, this exploits the generalization in the model to encourage the agent to go where it has not gone before. The surprise incentive in (2) gives the net effect of performing a reward shaping of the form
$$r'(s,a,s') = r(s,a,s') + \eta\left(\log P(s'|s,a) - \log P_{\phi}(s'|s,a)\right), \qquad (3)$$
where r(s, a, s′) is the original reward and r′(s, a, s′) is the transformed reward, so ideally we could solve (2) by applying any reinforcement learning algorithm with these reshaped rewards. In practice, we cannot directly implement this reward reshaping because P is unknown. Instead, we consider two ways of finding an approximate solution to (2).
In one method, we approximate the KL-divergence by the cross-entropy, which is reasonable when H(P) is finite (and small) and Pφ is sufficiently far from P¹; that is, denoting the cross-entropy by $H(P, P_{\phi})[s,a] \doteq \mathbb{E}_{s'\sim P(\cdot|s,a)}\left[-\log P_{\phi}(s'|s,a)\right]$, we assume
$$D_{KL}(P \,\|\, P_{\phi})[s,a] = H(P, P_{\phi})[s,a] - H(P)[s,a] \approx H(P, P_{\phi})[s,a]. \qquad (4)$$
This approximation results in a reward shaping of the form
$$r'(s,a,s') = r(s,a,s') - \eta \log P_{\phi}(s'|s,a); \qquad (5)$$
here, the intrinsic reward is the surprisal of s′ given the model Pφ and the context (s, a).
In the other method, we maximize a lower bound on the objective in (2) by lower bounding the surprise term:
$$D_{KL}(P \,\|\, P_{\phi})[s,a] = D_{KL}(P \,\|\, P_{\phi'})[s,a] + \mathbb{E}_{s'\sim P}\left[\log\frac{P_{\phi'}(s'|s,a)}{P_{\phi}(s'|s,a)}\right] \;\geq\; \mathbb{E}_{s'\sim P}\left[\log\frac{P_{\phi'}(s'|s,a)}{P_{\phi}(s'|s,a)}\right]. \qquad (6)$$
The bound (6) results in a reward shaping of the form
$$r'(s,a,s') = r(s,a,s') + \eta\left(\log P_{\phi'}(s'|s,a) - \log P_{\phi}(s'|s,a)\right), \qquad (7)$$
which requires a choice of φ′. From (6), we can see that the bound becomes tighter by minimizing DKL(P ||Pφ′). As a result, we choose φ′ to be the parameters of the dynamics model after k updates based on (1), and φ to be the parameters from before the updates. Thus, at iteration t, the reshaped rewards are
$$r'(s,a,s') = r(s,a,s') + \eta\left(\log P_{\phi_t}(s'|s,a) - \log P_{\phi_{t-k}}(s'|s,a)\right); \qquad (8)$$
here, the intrinsic reward is the k-step learning progress at (s, a, s′). It also bears a resemblance to Bayesian surprise; we expand on this similarity in the next section.
In our experiments, we investigate both the surprisal bonus (5) and the k-step learning progress bonus (8) (with varying values of k).
1On the other hand, if H(P )[s, a] is non-finite everywhere—for instance if the MDP has continuous states and deterministic transitions—then as long as it has the same sign everywhere, Es,a∼π[H(P )[s, a]] is a constant with respect to π and we can drop it from the optimization problem anyway.
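As a concrete illustration, both bonuses can be computed from nothing more than log-likelihoods of the observed next state under the current dynamics model and a k-step-old snapshot of it. The sketch below is our own minimal NumPy illustration; the Gaussian parameterization anticipates the fully-factored Gaussian models described in Section 3.2, and the function names are ours, not the authors'.

```python
import numpy as np

def gaussian_log_prob(s_next, mu, sigma):
    """Log-likelihood of s_next under a fully-factored Gaussian N(mu, diag(sigma^2))."""
    return np.sum(
        -0.5 * ((s_next - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi),
        axis=-1,
    )

def surprisal_bonus(s_next, mu_t, sigma_t):
    """Intrinsic reward of Eq. (5): negative log-likelihood under the current model."""
    return -gaussian_log_prob(s_next, mu_t, sigma_t)

def learning_progress_bonus(s_next, mu_t, sigma_t, mu_tk, sigma_tk):
    """Intrinsic reward of Eq. (8): k-step improvement in log-likelihood."""
    return gaussian_log_prob(s_next, mu_t, sigma_t) - gaussian_log_prob(s_next, mu_tk, sigma_tk)
```

In both cases the reshaped reward is then r plus η times the bonus, as in Eqs. (5) and (8).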
3.1 DISCUSSION
Ideally, we would like the intrinsic rewards to vanish in the limit as Pφ → P, because in this case, the agent should have sufficiently explored the state space, and should primarily learn from extrinsic rewards. For the proposed intrinsic reward in (5), this is not the case, and it may result in poor performance in that limit. The thinking goes that when Pφ = P, the agent will be incentivized to seek out states with the noisiest transitions. However, we argue that this may not be an issue, because the intrinsic motivation seems mostly useful long before the dynamics model is fully learned. As long as the agent is able to find the extrinsic rewards before the intrinsic reward is just the entropy in P, the pathological noise-seeking behavior should not happen. On the other hand, the intrinsic reward in (8) should not suffer from this pathology, because in the limit, as the dynamics model converges, we should have Pφt ≈ Pφt−k. Then the intrinsic reward will vanish as desired. Next, we relate (8) to Bayesian surprise. The Bayesian surprise associated with a transition is the reduction in uncertainty over possible dynamics models from observing it ([1],[8]):
$$D_{KL}\left(P(\phi|h_t,a_t,s_{t+1}) \,\|\, P(\phi|h_t)\right).$$
Here, P(φ|h_t) is meant to represent a distribution over possible dynamics models parametrized by φ given the preceding history of observed states and actions h_t (so h_t includes s_t), and P(φ|h_t, a_t, s_{t+1}) is the posterior distribution over dynamics models after observing (a_t, s_{t+1}). By Bayes’ rule, the dynamics prior and posterior are related to the model-based transition probabilities by
$$P(\phi|h_t,a_t,s_{t+1}) = \frac{P(\phi|h_t)\,P(s_{t+1}|h_t,a_t,\phi)}{\mathbb{E}_{\phi\sim P(\cdot|h_t)}\left[P(s_{t+1}|h_t,a_t,\phi)\right]},$$
so the Bayesian surprise can be expressed as
$$\mathbb{E}_{\phi\sim P_{t+1}}\left[\log P(s_{t+1}|h_t,a_t,\phi)\right] - \log \mathbb{E}_{\phi\sim P_t}\left[P(s_{t+1}|h_t,a_t,\phi)\right], \qquad (9)$$
where P_{t+1} = P(·|h_t, a_t, s_{t+1}) is the posterior and P_t = P(·|h_t) is the prior. In this form, the resemblance between (9) and (8) is clarified. Although the update from φ_{t−k} to φ_t is not Bayesian—and is performed in batch, instead of per transition sample—we can imagine (8) might contain similar information to (9).
3.2 IMPLEMENTATION DETAILS
Our implementation uses L2 regularization in the dynamics model fitting, and we impose an additional constraint to keep model iterates close in the KL-divergence sense. Denoting the average divergence as
$$\bar{D}_{KL}(P_{\phi'} \,\|\, P_{\phi}) = \frac{1}{|D|}\sum_{(s,a)\in D} D_{KL}(P_{\phi'} \,\|\, P_{\phi})[s,a], \qquad (10)$$
our dynamics model update is
$$\phi_{i+1} = \arg\min_{\phi}\; -\frac{1}{|D|}\sum_{(s,a,s')\in D}\log P_{\phi}(s'|s,a) + \alpha\|\phi\|_2^2 \;:\; \bar{D}_{KL}(P_{\phi} \,\|\, P_{\phi_i}) \leq \kappa. \qquad (11)$$
The constraint value κ is a hyper-parameter of the algorithm. We solve this optimization problem approximately using a single second-order step with a line search, as described by [20]; full details are given in supplementary material. D is a FIFO replay memory, and at each iteration, instead of using the entirety of D for the update step we sub-sample a batch d ⊂ D. Also, similarly to [7], we adjust the bonus coefficient η at each iteration, to keep the average bonus magnitude upper-bounded (and usually fixed). Let η0 denote the desired average bonus, and r+(s, a, s′) denote the intrinsic reward; then, at each iteration, we set
$$\eta = \frac{\eta_0}{\max\left(1, \;\frac{1}{|B|}\left|\sum_{(s,a,s')\in B} r_{+}(s,a,s')\right|\right)},$$
where B is the batch of data used for the policy update step. This normalization improves the stability of the algorithm by keeping the scale of the bonuses fixed with respect to the scale of the extrinsic rewards. Also, in environments where the agent can die, we avoid the possibility of the intrinsic rewards becoming a living cost by translating all bonuses so that the mean is nonnegative. The basic outline of the algorithm is given as Algorithm 1. In all experiments, we use fully-factored Gaussian distributions for the dynamics models, where the means and variances are the outputs of neural networks.
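The normalization and the nonnegative shift described above amount to only a few lines. The following is our own NumPy sketch; in particular, the order in which the shift and the normalization are applied is our choice, not something specified in the text.

```python
import numpy as np

def reshape_rewards(extrinsic, bonuses, eta0):
    """Combine extrinsic rewards with normalized intrinsic bonuses.

    extrinsic: (T,) array of environment rewards for the current batch
    bonuses:   (T,) array of intrinsic rewards, Eq. (5) or Eq. (8)
    eta0:      desired average bonus magnitude
    """
    eta = eta0 / max(1.0, float(np.abs(bonuses.mean())))  # keep the average bonus bounded
    shifted = bonuses - min(float(bonuses.mean()), 0.0)   # keep the mean bonus nonnegative
    return extrinsic + eta * shifted
```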
Algorithm 1 Reinforcement Learning with Surprise Incentive
Input: Initial policy π0, dynamics model Pφ0
repeat
    collect rollouts on the current policy πi
    add rollout (s, a, s′) tuples to the replay memory D
    compute reshaped rewards using (5) or (8) with dynamics model Pφi
    normalize η by the average intrinsic reward of the current batch of data
    update the policy to πi+1 using any RL algorithm with the reshaped rewards
    update the dynamics model to Pφi+1 according to (11)
until training is completed
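To make the dynamics-model side of Algorithm 1 concrete, a fully-factored Gaussian model and the regularized negative log-likelihood objective of Eq. (1) can be sketched in PyTorch as below. The architecture (a single tanh hidden layer producing both the mean and the log standard deviation) is illustrative only; the appendix describes the exact network sizes used per task, and the KL-constrained update of Eq. (11) would be applied on top of this loss.

```python
import torch
import torch.nn as nn

class GaussianDynamics(nn.Module):
    """Fully-factored Gaussian model P_phi(s' | s, a) with neural-network mean and log-std."""
    def __init__(self, s_dim, a_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(s_dim + a_dim, hidden), nn.Tanh(), nn.Linear(hidden, 2 * s_dim)
        )
        self.s_dim = s_dim

    def log_prob(self, s, a, s_next):
        mu, log_std = self.body(torch.cat([s, a], dim=-1)).split(self.s_dim, dim=-1)
        return torch.distributions.Normal(mu, log_std.exp()).log_prob(s_next).sum(dim=-1)

def dynamics_loss(model, s, a, s_next, alpha):
    """Objective of Eq. (1): average negative log-likelihood plus L2 regularization."""
    nll = -model.log_prob(s, a, s_next).mean()
    l2 = sum((p ** 2).sum() for p in model.parameters())
    return nll + alpha * l2
```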
4 EXPERIMENTS
We evaluate our proposed surprise incentives on a wide range of benchmarks that are challenging for naive exploration methods, including continuous control and discrete control tasks. Our continuous control tasks include the slate of sparse reward tasks introduced by Houthooft et al. [7]: sparse MountainCar, sparse CartPoleSwingup, and sparse HalfCheetah, as well as a new sparse reward task that we introduce here: sparse Swimmer. (We refer to these environments with the prefix ‘sparse’ to differentiate them from other versions which appear in the literature, where agents receive non-sparse reward signals.) Additionally, we evaluate performance on a highly-challenging hierarchical sparse reward task introduced by Duan et al [5], SwimmerGather. The discrete action tasks are several games from the Atari RAM domain of the OpenAI Gym [4]: Pong, BankHeist, Freeway, and Venture.
Environments with deterministic and stochastic dynamics are represented in our benchmarks: the continuous control domains have deterministic dynamics, while the Gym Atari RAM games have stochastic dynamics. (In the Atari games, actions are repeated for a random number of frames.)
We use Trust Region Policy Optimization (TRPO) [20], a state-of-the-art policy gradient method, as our base reinforcement learning algorithm throughout our experiments, and we use the rllab implementations of TRPO and the continuous control tasks [5]. Full details for the experimental set-up are included in the appendix.
On all tasks, we compare against TRPO without intrinsic rewards, which we refer to as using naive exploration (in contrast to intrinsically motivated exploration). For the continuous control tasks, we also compare against intrinsic motivation using the L2 model prediction error,
$$r_{+}(s,a,s') = \|s' - \mu_{\phi}(s,a)\|_2, \qquad (12)$$
where µφ is the mean of the learned Gaussian distribution Pφ. The model prediction error was investigated as intrinsic motivation for deep reinforcement learning by Stadie et al [22], although they used a different method for learning the model µφ. This comparison helps us verify whether or not our proposed form of surprise, as a KL-divergence from the true dynamics model, is useful. Additionally, we compare our performance against the performance reported by Houthooft et al. [7] for Variational Information Maximizing Exploration (VIME), a method where the intrinsic reward associated with a transition approximates its Bayesian surprise using variational methods. Currently, VIME has achieved state-of-the-art results on intrinsic motivation for continuous control.
As a final check for the continuous control tasks, we benchmark the tasks themselves, by measuring the performance of the surprisal bonus without any dynamics learning: r+(s, a, s′) = − logPφ0(s′|s, a), where φ0 are the original random parameters of Pφ. This allows us to verify whether our benchmark tasks actually require surprise to solve at all, or if random exploration strategies successfully solve them.
4.1 CONTINUOUS CONTROL RESULTS
Median performance curves are shown in Figure 1 with interquartile ranges shown in shaded areas. Note that TRPO without intrinsic motivation failed on all tasks: the median score and upper quartile range for naive exploration were zero everywhere. Also note that TRPO with random exploration bonuses failed on most tasks, as shown separately in Figure 2. We found that surprise was not needed to solve MountainCar, but was necessary to perform well on the other tasks.
The surprisal bonus was especially robust across tasks, achieving good results in all domains and substantially exceeding the other baselines on the more challenging ones. The learning progress bonus for k = 1 was successful on CartpoleSwingup and HalfCheetah but it faltered in the others. Its weak performance in MountainCar was due to premature convergence of the dynamics model, which resulted in the agent receiving intrinsic rewards that were identically zero. (Given the simplicity of the environment, it is not surprising that the dynamics model converged so quickly.) In Swimmer, however, it seems that the learning progress bonuses did not inspire sufficient exploration. Because the Swimmer environment is effectively a stepping stone to the harder SwimmerGather, where the agent has to learn a motion primitive and collect target pellets, on SwimmerGather, we only evaluated the intrinsic rewards that had been successful on Swimmer.
Both surprisal and learning progress (with k = 1) exceeded the reported performance of VIME on HalfCheetah by learning to solve the task more quickly. On CartpoleSwingup, however, both were more susceptible to getting stuck in locally optimal policies, resulting in lower median scores than VIME. Surprisal performed comparably to VIME on SwimmerGather, the hardest task in the slate—in the sense that after 1000 iterations, they both reached approximately the same median score—although with greater variance than VIME.
Our results suggest that surprisal is a viable alternative to VIME in terms of performance, and is highly favorable in terms of computational cost. In VIME, a backwards pass through the dynamics model must be computed for every transition tuple separately to compute the intrinsic rewards, whereas our surprisal bonus only requires forward passes through the dynamics model for intrinsic
reward computation. (Limitations of current deep learning toolkits make it difficult to efficiently compute separate backwards passes, whereas almost all of them support highly parallel forward computations.) Furthermore, our dynamics model is substantially simpler than the Bayesian neural network dynamics model of VIME. To illustrate this point, in Figure 3 we show the results of a speed comparison making use of the open-source VIME code [6], with the settings described in the VIME paper. In our speed test, our bonus had a per-iteration speedup of a factor of 3 over VIME.² We give a full analysis of the potential speedup in Appendix C.
4.2 ATARI RAM DOMAIN RESULTS
Median performance curves are shown in Figure 4, with tasks arranged from (a) to (d) roughly in order of increasing difficulty.
In Pong, naive exploration naturally succeeds, so we are not surprised to see that intrinsic motivation does not improve performance. However, this serves as a sanity check to verify that our intrinsic rewards do not degrade performance. (As an aside, we note that the performance here falls short of the standard score of 20 for this domain because we truncate play at 5000 timesteps.)
In BankHeist, we find that intrinsic motivation accelerates the learning significantly. The agents with surprisal incentives reached high levels of performance (scores > 1000) 10% sooner than naive exploration, while agents with learning progress incentives reached high levels almost 20% sooner.
In Freeway, the median performance for TRPO without intrinsic motivation was adequate, but the lower quartile range was quite poor—only 6 out of 10 runs ever found rewards. With the learning progress incentives, 8 out of 10 runs found rewards; with the surprisal incentive, all 10 did. Freeway is a game with very sparse rewards, where the agent effectively has to cross a long hallway before it can score a point, so naive exploration tends to exhibit random walk behavior and only rarely reaches the reward state. The intrinsic motivation helps the agent explore more purposefully.
²We compute this by comparing the marginal time cost incurred just by the bonus in each case: that is, if T_vime, T_surprisal, and T_nobonus denote the times to 15 iterations, we obtain the speedup as
$$\frac{T_{\text{vime}} - T_{\text{nobonus}}}{T_{\text{surprisal}} - T_{\text{nobonus}}}.$$
In Venture, we obtain our strongest results in the Atari domain. Venture is extremely difficult because the agent has to navigate a large map to find very sparse rewards, and the agent can be killed by enemies interspersed throughout. We found that our intrinsic rewards were able to substantially improve performance over naive exploration in this challenging environment. Here, the best performance was again obtained by the surprisal incentive, which usually inspired the agent to reach scores greater than 500.
4.3 COMPARING INCENTIVES
Among our proposed incentives, we found that surprisal worked the best overall, achieving the most consistent performance across tasks. The learning progress-based incentives worked well on some domains, but generally not as well as surprisal. Interestingly, learning progress with k = 10 performed much worse on the continuous control tasks than with k = 1, but we observed virtually no difference in their performance on the Atari games; it is unclear why this should be the case.
Surprisal strongly outperformed the L2 error based incentive on the harder continuous control tasks, learning to solve them more quickly and without forgetting. Because we used fully-factored Gaussians for all of our dynamics models, the surprisal had the form
$$-\log P_{\phi}(s'|s,a) = \sum_{i=1}^{n}\left(\frac{(s'_i - \mu_{\phi,i}(s,a))^2}{2\sigma^2_{\phi,i}(s,a)} + \log\sigma_{\phi,i}(s,a)\right) + \frac{k}{2}\log 2\pi,$$
which essentially includes the L2-squared error norm as a sub-expression. The relative difference in performance suggests that the variance terms confer additional useful information about the novelty of a state-action pair.
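A tiny numerical illustration of this point: two transitions with the same L2 prediction error receive very different surprisal depending on the predicted variance, so the bonus distinguishes confidently-wrong predictions from merely noisy ones. The snippet below is our own one-dimensional example, not an experiment from the paper.

```python
import numpy as np

def surprisal_1d(err, sigma):
    """One-dimensional surprisal for prediction error `err` under predicted std `sigma`."""
    return 0.5 * (err / sigma) ** 2 + np.log(sigma) + 0.5 * np.log(2 * np.pi)

# Same L2 error (0.5) in both cases, but different predicted uncertainty:
print(surprisal_1d(0.5, sigma=0.1))  # confident model: large surprisal (~11.1)
print(surprisal_1d(0.5, sigma=1.0))  # uncertain model: small surprisal (~1.0)
```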
5 RELATED WORK
Substantial theoretical work has been done on optimal exploration in finite MDPs, resulting in algorithms such as E3 [10], R-max [3], and UCRL [9], which scale polynomially with MDP size. However, these works do not permit obvious generalizations to MDPs with continuous state and action spaces. C-PACE [18] provides a theoretical foundation for PAC-optimal exploration in MDPs with continuous state spaces, but it requires a metric on state spaces. Lopes et al. [11] investigated exploration driven by learning progress and proved theoretical guarantees for their approach in the finite MDP case, but they did not address the question of scaling their approach to continuous or high-dimensional MDPs. Also, although they formulated learning progress in the same way as (8), they formed intrinsic rewards differently. Conceptually and mathematically, our work is closest to prior work on curiosity and surprise [8, 19, 23, 24], although these works focus mainly on small finite MDPs.
Recently, several intrinsic motivation strategies that deal specifically with deep reinforcement learning have been proposed. Stadie et al. [22] learn deterministic dynamics models by minimizing Euclidean loss—whereas in our work, we learn stochastic dynamics with cross entropy loss—and use L2 prediction errors for intrinsic motivation. Houthooft et al. [7] train Bayesian neural networks to approximate posterior distributions over dynamics models given observed data, by maximizing a variational lower bound; they then use second-order approximations of the Bayesian surprise as intrinsic motivation. Bellemare et al. [2] derived pseudo-counts from CTS density models over states and used those to form intrinsic rewards, notably resulting in dramatic performance improvement on Montezuma’s Revenge, one of the hardest games in the Atari domain. Mohamed and Rezende [14] developed a scalable method of approximating empowerment, the mutual information between an agent’s actions and the future state of the environment, using variational methods. Oh et al. [16] estimated state visit frequency using Gaussian kernels to compare against a replay memory, and used these estimates for directed exploration.
6 CONCLUSIONS
In this work, we formulated surprise for intrinsic motivation as the KL-divergence of the true transition probabilities from learned model probabilities, and derived two approximations—surprisal and k-step
learning progress—that are scalable, computationally inexpensive, and suitable for application to high-dimensional and continuous control tasks. We showed that empirically, motivation by surprisal and 1-step learning progress resulted in efficient exploration on several hard deep reinforcement learning benchmarks. In particular, we found that surprisal was a robust and effective intrinsic motivator, outperforming other heuristics on a wide range of tasks, and competitive with the current state-of-the-art for intrinsic motivation in continuous control.
ACKNOWLEDGEMENTS
We thank Rein Houthooft for interesting discussions and for sharing data from the original VIME experiments. We also thank Rocky Duan, Carlos Florensa, Vicenc Rubies-Royo, Dexter Scobee, and Eric Mazumdar for insightful discussions and reviews of the preliminary manuscript.
This work is supported by TRUST (Team for Research in Ubiquitous Secure Technology) which receives support from NSF (award number CCF-0424422).
A SINGLE STEP SECOND-ORDER OPTIMIZATION
In our experiments, we approximately solve several optimization problems by using a single second-order step with a line search. This section describes the exact methodology, which was originally given by Schulman et al. [20].
We consider the optimization problem
$$p^* = \max_{\theta}\; L(\theta) \;:\; D(\theta) \leq \delta, \qquad (13)$$
where θ ∈ R^n, and for some θ_old we have D(θ_old) = 0, ∇_θ D(θ_old) = 0, and ∇²_θ D(θ_old) ≻ 0; also, ∀θ, D(θ) ≥ 0. We suppose that δ is small, so the optimal point will be close to θ_old. We also suppose that the curvature of the constraint is much greater than the curvature of the objective. As a result, we feel justified in approximating the objective to linear order and the constraint to quadratic order:
$$L(\theta) \approx L(\theta_{\text{old}}) + g^T(\theta - \theta_{\text{old}}), \qquad g \doteq \nabla_{\theta}L(\theta_{\text{old}}),$$
$$D(\theta) \approx \tfrac{1}{2}(\theta - \theta_{\text{old}})^T A\, (\theta - \theta_{\text{old}}), \qquad A \doteq \nabla^2_{\theta}D(\theta_{\text{old}}).$$
We now consider the approximate optimization problem,
$$p^* \approx \max_{\theta}\; g^T(\theta - \theta_{\text{old}}) \;:\; \tfrac{1}{2}(\theta - \theta_{\text{old}})^T A\, (\theta - \theta_{\text{old}}) \leq \delta.$$
This optimization problem is convex as long as A ≻ 0, which is an assumption that we make. (If this assumption seems to be empirically invalid, then we repair the issue by using the substitution A → A + εI, where I is the identity matrix, and ε > 0 is a small constant chosen so that we usually have A + εI ≻ 0.) This problem can be solved analytically by applying methods of duality, and its optimal point is
$$\theta^* = \theta_{\text{old}} + \sqrt{\frac{2\delta}{g^T A^{-1} g}}\; A^{-1}g. \qquad (14)$$
It is possible that the parameter update step given by (14) may not exactly solve the original optimization problem (13)—in fact, it may not even satisfy the constraint—so we perform a line search between θold and θ∗. Our update with the line search included is given by
$$\theta = \theta_{\text{old}} + s^k \sqrt{\frac{2\delta}{g^T A^{-1} g}}\; A^{-1}g, \qquad (15)$$
where s ∈ (0, 1) is a backtracking coefficient, and k is the smallest integer for which L(θ) ≥ L(θold) and D(θ) ≤ δ. We select k by checking each of k = 1, 2, ...,K, where K is the maximum number of backtracks. If there is no value of k in that range which satisfies the conditions, no update is performed.
Because the optimization problems we solve with this method tend to involve thousands of parameters, inverting A is prohibitively computationally expensive. Thus in the implementation of this algorithm that we use, the search direction x = A−1g is found by using the conjugate gradient method to solve Ax = g; this avoids the need to invert A.
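A compact sketch of this procedure (conjugate gradient for the search direction, followed by the backtracking line search of Eq. (15)) is given below. It is our own NumPy illustration: it uses a matrix-vector-product callable in place of the Hessian-vector products a real implementation would use, and the constants cg_iters and residual_tol are assumptions.

```python
import numpy as np

def conjugate_gradient(Avp, g, cg_iters=10, residual_tol=1e-10):
    """Approximately solve A x = g using only matrix-vector products Avp(v) = A @ v."""
    x = np.zeros_like(g)
    r = g.copy()              # residual of g - A x with x = 0
    p = g.copy()
    r_dot = r @ r
    for _ in range(cg_iters):
        Ap = Avp(p)
        alpha = r_dot / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        new_r_dot = r @ r
        if new_r_dot < residual_tol:
            break
        p = r + (new_r_dot / r_dot) * p
        r_dot = new_r_dot
    return x

def line_search_step(theta_old, g, Avp, delta, L, D, s=0.8, max_backtracks=10):
    """Single second-order step with backtracking line search, Eqs. (14)-(15)."""
    x = conjugate_gradient(Avp, g)                    # x ~ A^{-1} g
    full_step = np.sqrt(2.0 * delta / (g @ x)) * x
    for k in range(1, max_backtracks + 1):
        theta = theta_old + (s ** k) * full_step
        if L(theta) >= L(theta_old) and D(theta) <= delta:
            return theta
    return theta_old                                  # no update if the search fails
```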
When A and g are sample averages meant to stand in for expectations, we employ an additional trick to reduce the total number of computations necessary to solve Ax = g. The computation of A is more expensive than that of g, and so we use a smaller fraction of the population to estimate it quickly. Concretely, suppose that the original optimization problem’s objective is E_{z∼P}[L(θ, z)], and the constraint is E_{z∼P}[D(θ, z)] ≤ δ, where z is some random variable and P is its distribution; furthermore, suppose that we have a dataset of samples D = {z_i}_{i=1,...,N} drawn on P, and we form an approximate optimization problem using these samples. Defining g(z) ≐ ∇_θ L(θ_old, z) and A(z) ≐ ∇²_θ D(θ_old, z), we would need to solve
$$\left(\frac{1}{|D|}\sum_{z\in D} A(z)\right) x = \frac{1}{|D|}\sum_{z\in D} g(z)$$
to obtain the search direction x. However, because the computation of the average Hessian is expensive, we sub-sample a batch b ⊂ D to form it. As long as b is a large enough set, then the approximation
$$\frac{1}{|b|}\sum_{z\in b} A(z) \;\approx\; \frac{1}{|D|}\sum_{z\in D} A(z) \;\approx\; \mathbb{E}_{z\sim P}\left[A(z)\right]$$
is good, and the search direction we obtain by solving
$$\left(\frac{1}{|b|}\sum_{z\in b} A(z)\right) x = \frac{1}{|D|}\sum_{z\in D} g(z)$$
is reasonable. The sub-sample ratio |b|/|D| is a hyperparameter of the algorithm.
B EXPERIMENT DETAILS
B.1 ENVIRONMENTS
The environments have the following state and action spaces: for the sparse MountainCar environment, S ⊆ R^2, A ⊆ R^1; for the sparse CartpoleSwingup task, S ⊆ R^4, A ⊆ R^1; for the sparse HalfCheetah task, S ⊂ R^20, A ⊆ R^6; for the sparse Swimmer task, S ⊆ R^13, A ⊆ R^2; for the SwimmerGather task, S ⊆ R^33, A ⊆ R^2; for the Atari RAM domain, S ⊆ R^128, A ⊆ {1, ..., 18}. For the sparse MountainCar task, the agent receives a reward of 1 only when it escapes the valley. For the sparse CartpoleSwingup task, the agent receives a reward of 1 only when cos(β) > 0.8, with β the pole angle. For the sparse HalfCheetah task, the agent receives a reward of 1 when x_body ≥ 5. For the sparse Swimmer task, the agent receives a reward of 1 + |v_body| when |x_body| ≥ 2. Atari RAM states, by default, take on values from 0 to 256 in integer intervals. We use a simple preprocessing step to map them onto values in (−1/3, 1/3). Let x denote the raw RAM state, and s the preprocessed RAM state:
$$s = \frac{1}{3}\left(\frac{x}{128} - 1\right).$$
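In code, this preprocessing is a one-liner (our own NumPy illustration):

```python
import numpy as np

def preprocess_ram(x):
    """Map raw Atari RAM bytes into roughly (-1/3, 1/3)."""
    return (np.asarray(x, dtype=np.float64) / 128.0 - 1.0) / 3.0
```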
B.2 POLICY AND VALUE FUNCTIONS
For all continuous control tasks we used fully-factored Gaussian policies, where the means of the action distributions were the outputs of neural networks, and the variances were separate trainable parameters. For the sparse MountainCar and sparse CartpoleSwingup tasks, the policy mean networks had a single hidden layer of 32 units. For sparse HalfCheetah, sparse Swimmer, and SwimmerGather, the policy mean networks were of size (64, 32). For the Atari RAM tasks, we used categorical distributions over actions, produced by neural networks of size (64, 32).
The value functions used for the sparse MountainCar and sparse CartpoleSwingup tasks were neural networks with a single hidden layer of 32 units. For sparse HalfCheetah, sparse Swimmer, and SwimmerGather, time-varying linear value functions were used, as described by Duan et al. [5]. For the Atari RAM tasks, the value functions were neural networks of size (64, 32). The neural network value functions were learned via single second-order step optimization; the linear baselines were obtained by least-squares fit at each iteration.
All neural networks were feed-forward, fully-connected networks with tanh activation units.
B.3 TRPO HYPERPARAMETERS
For all tasks, the MDP discount factor γ was fixed to 0.995, and generalized advantage estimators (GAE) [21] were used, with the GAE λ parameter fixed to 0.95.
In the table below, we show several other TRPO hyperparameters. Batch size refers to steps of experience collected at each iteration. The sub-sample factor is for the second-order optimization step, as detailed in Appendix A.
B.4 EXPLORATION HYPERPARAMETERS
For all tasks, fully-factored Gaussian distributions were used as dynamics models, where the means and variances of the distributions were the outputs of neural networks.
For the sparse MountainCar and sparse CartpoleSwingup tasks, the means and variances were parametrized by single hidden layer neural networks with 32 units. For all other tasks, the means and variances were parametrized by neural networks with two hidden layers of size 64 units each. All networks used tanh activation functions.
For all continuous control tasks except SwimmerGather, we used replay memories of size 5,000,000 and a KL-divergence step size of κ = 0.001. For SwimmerGather, the replay memory was the same size, but we set the KL-divergence step size to κ = 0.005. For the Atari RAM domain tasks, we used replay memories of size 1,000,000 and a KL-divergence step size of κ = 0.01.
For all tasks except SwimmerGather and Venture, 5000 time steps of experience were sampled from the replay memory at each iteration of dynamics model learning to take a stochastic step on (11), and a sub-sample factor of 1 was used in the second-order step optimizer. For SwimmerGather and Venture, 10,000 time steps of experience were sampled at each iteration, and a sub-sample factor of 0.5 was used in the optimizer.
For all continuous control tasks, the L2 penalty coefficient was set to α = 1. For the Atari RAM tasks except for Venture, it was set to α = 0.01. For Venture, it was set to α = 0.1.
For all continuous control tasks except SwimmerGather, η0 = 0.001. For SwimmerGather, η0 = 0.0001. For the Atari RAM tasks, η0 = 0.005.
C ANALYSIS OF SPEEDUP COMPARED TO VIME
In this section, we provide an analysis of the time cost incurred by using VIME or our bonuses, and derive the potential magnitude of speedup attained by our bonuses versus VIME.
At each iteration, bonuses based on learned dynamics models incur two primary costs:
• the time cost of fitting the dynamics model, and
• the time cost of computing the rewards.
We denote the dynamics fitting costs for VIME and our methods as $T^{\text{fit}}_{\text{vime}}$ and $T^{\text{fit}}_{\text{ours}}$. Although the Bayesian neural network dynamics model for VIME is more complex than our model, the fit times can work out to be similar depending on the choice of fitting algorithm. In our speed test, the fit times were nearly equivalent, but used different algorithms.
For the time cost of computing rewards, we first introduce the following quantities:
• n: the number of CPU threads available,
• t_f: time for a forward pass through the model,
• t_b: time for a backward pass through the model,
• N: batch size (number of samples per iteration),
• k: the number of forward passes that can be performed simultaneously.
For our method, the time cost of computing rewards is
$$T^{\text{rew}}_{\text{ours}} = \frac{N t_f}{kn}.$$
For VIME, things are more complex. Each reward requires the computation of a gradient through its model, which necessitates a forward and a backward pass. Because gradient calculations cannot be efficiently parallelized by any deep learning toolkits currently available3, each (s, a, s′) tuple requires its own forward/backward pass. As a result, the time cost of computing rewards for VIME is:
$$T^{\text{rew}}_{\text{vime}} = \frac{N(t_f + t_b)}{n}.$$
The speedup of our method over VIME is therefore
$$\frac{T^{\text{fit}}_{\text{vime}} + \frac{N(t_f+t_b)}{n}}{T^{\text{fit}}_{\text{ours}} + \frac{N t_f}{kn}}.$$
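Writing out the large-N limit explicitly, the fit terms become negligible and the ratio approaches
$$\frac{N(t_f + t_b)/n}{N t_f/(kn)} = \frac{k\,(t_f + t_b)}{t_f}.$$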
In the limit of large N, and with the approximation that t_f ≈ t_b, the speedup is a factor of ∼2k.
³If this is not correct, please contact the authors so that we can issue a correction! But to the best of our knowledge, this is currently true, at time of publication. | 1. What is the focus of the paper in deep reinforcement learning?
2. How does the proposed approach differ from other recent methods in intrinsic motivation?
3. What are the strengths of the paper, particularly in its explanations and backing up of the method?
4. Are there any concerns or suggestions for improvement regarding the presentation of the framework and its approximations?
5. How does the exploration bonus compare to other recent works on intrinsic motivation, such as Bellemare et al. 2016? | Review | Review
The authors present a novel approach to surprise-based intrinsic motivation in deep reinforcement learning. The authors clearly explain the difference from other recent approaches to intrinsic motivation and back up their method with results from a broad class of discrete and continuous action domains. They present two tractable approximations to their framework: one which ignores the stochasticity of the true environmental dynamics, and one which approximates the rate of information gain (somewhat similar to Schmidhuber's formal theory of creativity, fun and intrinsic motivation). The results of this exploration bonus, when added to TRPO, are generally better than standard TRPO. However, I would have appreciated a more thorough comparison against other recent work on intrinsic motivation. For instance, Bellemare et al. 2016 recently achieved significant performance gains on challenging Atari games like Montezuma's Revenge by combining DQN with an exploration bonus; however, Montezuma's Revenge is not presented as an experiment here. Such comparisons would significantly improve the strength of the paper.
ICLR | Title
Surprise-Based Intrinsic Motivation for Deep Reinforcement Learning
Abstract
Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as -greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent’s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.
1 INTRODUCTION
A reinforcement learning agent uses experiences obtained from interacting with an unknown environment to learn behavior that maximizes a reward signal. The optimality of the learned behavior is strongly dependent on how the agent approaches the exploration/exploitation trade-off in that environment. If it explores poorly or too little, it may never find rewards from which to learn, and its behavior will always remain suboptimal; if it does find rewards but exploits them too intensely, it may wind up prematurely converging to suboptimal behaviors, and fail to discover more rewarding opportunities. Although substantial theoretical work has been done on optimal exploration strategies for environments with finite state and action spaces, we are here concerned with problems that have continuous state and/or action spaces, where algorithms with theoretical guarantees admit no obvious generalization or are prohibitively impractical to implement.
Simple heuristic methods of exploring such as -greedy action selection and Gaussian control noise have been successful on a wide range of tasks, but are inadequate when rewards are especially sparse. For example, the Deep Q-Network approach of Mnih et al. [13] used -greedy exploration in training deep neural networks to play Atari games directly from raw pixels. On many games, the algorithm resulted in superhuman play; however, on games like Montezuma’s Revenge, where rewards are extremely sparse, DQN (and its variants [25], [26], [15], [12]) with -greedy exploration failed to achieve scores even at the level of a novice human. Similarly, in benchmarking deep reinforcement learning for continuous control, Duan et al.[5] found that policy optimization algorithms that explored by acting according to the current stochastic policy, including REINFORCE and Trust Region Policy Optimization (TRPO), could succeed across a diverse slate of simulated robotics control tasks with well-defined, non-sparse reward signals (like rewards proportional to the forward velocity of the robot). Yet, when tested in environments with sparse rewards—where the agent would only be able to attain rewards after first figuring out complex motion primitives without reinforcement—every algorithm failed to attain scores better than random agents. The failure modes in all of these cases pertained to the nature of the exploration: the agents encountered reward signals so infrequently that they were never able to learn reward-seeking behavior.
One approach to encourage better exploration is via intrinsic motivation, where an agent has a task-independent, often information-theoretic intrinsic reward function which it seeks to maximize in addition to the reward from the environment. Examples of intrinsic motivation include empowerment, where the agent enjoys the level of control it has about its future; surprise, where the agent is excited to see outcomes that run contrary to its understanding of the world; and novelty, where the agent is excited to see new states (which is tightly connected to surprise, as shown in [2]). For in-depth reviews of the different types of intrinsic motivation, we direct the reader to [1] and [17].
Recently, several applications of intrinsic motivation to the deep reinforcement learning setting (such as [2], [7], [22]) have found promising success. In this work, we build on that success by exploring scalable measures of surprise for intrinsic motivation in deep reinforcement learning. We formulate surprise as the KL-divergence of the true transition probability distribution from a transition model which is learned concurrently with the policy, and consider two approximations to this divergence which are easy to compute in practice. One of these approximations results in using the surprisal of a transition as an intrinsic reward; the other results in using a measure of learning progress which is closer to a Bayesian concept of surprise. Our contributions are as follows:
1. we investigate surprisal and learning progress as intrinsic rewards across a wide range of environments in the deep reinforcement learning setting, and demonstrate empirically that the incentives (especially surprisal) result in efficient exploration,
2. we evaluate the difficulty of the slate of sparse reward continuous control tasks introduced by Houthooft et al. [7] to benchmark exploration incentives, and introduce a new task to complement the slate,
3. and we present an efficient method for learning the dynamics model (transition probabilities) concurrently with a policy.
We distinguish our work from prior work in a number of implementation details: unlike Bellemare et al. [2], we learn a transition model as opposed to a state-action occupancy density; unlike Stadie et al. [22], our formulation naturally encompasses environments with stochastic dynamics; unlike Houthooft et al. [7], we avoid the overhead of maintaining a distribution over possible dynamics models, and learn a single deep dynamics model.
In our empirical evaluations, we compare the performance of our proposed intrinsic rewards with other heuristic intrinsic reward schemes and to recent results from the literature. In particular, we compare to Variational Information Maximizing Exploration (VIME) [7], a method which approximately maximizes Bayesian surprise and currently achieves state-of-the-art performance on continuous control with sparse rewards. We show that our incentives can perform on the level of VIME at a lower computational cost.
2 PRELIMINARIES
We begin by introducing notation which we will use throughout the paper. A Markov decision process (MDP) is a tuple, (S,A,R, P, µ), where S is the set of states, A is the set of actions, R : S × A × S → R is the reward function, P : S × A × S → [0, 1] is the transition probability function (where P (s′|s, a) is the probability of transitioning to state s′ given that the previous state was s and the agent took action a in s), and µ : S → [0, 1] is the starting state distribution. A policy π : S × A → [0, 1] is a distribution over actions per state, with π(a|s) the probability of selecting a in state s. We aim to select a policy π which maximizes a performance measure, L(π), which usually takes the form of expected finite-horizon total return (sum of rewards in a fixed time period), or expected infinite-horizon discounted total return (discounted sum of all rewards forever). In this paper, we use the finite-horizon total return formulation.
3 SURPRISE INCENTIVES
To train an agent with surprise-based exploration, we alternate between making an update step to a dynamics model (an approximator of the MDP’s transition probability function), and making a policy update step that maximizes a trade-off between policy performance and a surprise measure.
The dynamics model step makes progress on the optimization problem
min φ − 1 |D| ∑ (s,a,s′)∈D logPφ(s ′|s, a) + αf(φ), (1)
where D is is a dataset of transition tuples from the environment, Pφ is the model we are learning, f is a regularization function, and α > 0 is a regularization trade-off coefficient. The policy update step makes progress on an approximation to the optimization problem
max π L(π) + η E s,a∼π
[DKL(P ||Pφ)[s, a]] , (2)
where η > 0 is an explore-exploit trade-off coefficient. The exploration incentive in (2), which we select to be the on-policy average KL-divergence of Pφ from P , is intended to capture the agent’s surprise about its experience. The dynamics model Pφ should only be close to P on regions of the transition state space that the agent has already visited (because those transitions will appear in D and thus the model will be fit to them), and as a result, the KL divergence of Pφ and P will be higher in unfamiliar places. Essentially, this exploits the generalization in the model to encourage the agent to go where it has not gone before. The surprise incentive in (2) gives the net effect of performing a reward shaping of the form
r′(s, a, s′) = r(s, a, s′) + η (logP (s′|s, a)− logPφ(s′|s, a)) , (3) where r(s, a, s′) is the original reward and r′(s, a, s′) is the transformed reward, so ideally we could solve (2) by applying any reinforcement learning algorithm with these reshaped rewards. In practice, we cannot directly implement this reward reshaping because P is unknown. Instead, we consider two ways of finding an approximate solution to (2).
In one method, we approximate the KL-divergence by the cross-entropy, which is reasonable when H(P ) is finite (and small) and Pφ is sufficiently far from P 1; that is, denoting the cross-entropy by H(P, Pφ)[s, a] . = Es′∼P (·|s,a)[− logPφ(s′|s, a)], we assume
DKL(P ||Pφ)[s, a] = H(P, Pφ)[s, a]−H(P )[s, a] ≈ H(P, Pφ)[s, a].
(4)
This approximation results in a reward shaping of the form
r′(s, a, s′) = r(s, a, s′)− η logPφ(s′|s, a); (5) here, the intrinsic reward is the surprisal of s′ given the model Pφ and the context (s, a).
In the other method, we maximize a lower bound on the objective in (2) by lower bounding the surprise term:
DKL(P ||Pφ)[s, a] = DKL(P ||Pφ′)[s, a] + E s′∼P
[ log Pφ′(s ′|s, a)
Pφ(s′|s, a) ] ≥ E s′∼P [ log Pφ′(s ′|s, a)
Pφ(s′|s, a)
] .
(6)
The bound (6) results in a reward shaping of the form
r′(s, a, s′) = r(s, a, s′) + η (logPφ′(s ′|s, a)− logPφ(s′|s, a)) , (7)
which requires a choice of φ′. From (6), we can see that the bound becomes tighter by minimizing DKL(P ||Pφ′). As a result, we choose φ′ to be the parameters of the dynamics model after k updates based on (1), and φ to be the parameters from before the updates. Thus, at iteration t, the reshaped rewards are
r′(s, a, s′) = r(s, a, s′) + η ( logPφt(s ′|s, a)− logPφt−k(s′|s, a) ) ; (8)
here, the intrinsic reward is the k-step learning progress at (s, a, s′). It also bears a resemblance to Bayesian surprise; we expand on this similarity in the next section.
In our experiments, we investigate both the surprisal bonus (5) and the k-step learning progress bonus (8) (with varying values of k).
1On the other hand, if H(P )[s, a] is non-finite everywhere—for instance if the MDP has continuous states and deterministic transitions—then as long as it has the same sign everywhere, Es,a∼π[H(P )[s, a]] is a constant with respect to π and we can drop it from the optimization problem anyway.
3.1 DISCUSSION
Ideally, we would like the intrinsic rewards to vanish in the limit as Pφ → P , because in this case, the agent should have sufficiently explored the state space, and should primarily learn from extrinsic rewards. For the proposed intrinsic reward in (5), this is not the case, and it may result in poor performance in that limit. The thinking goes that when Pφ = P , the agent will be incentivized to seek out states with the noisiest transitions. However, we argue that this may not be an issue, because the intrinsic motivation seems mostly useful long before the dynamics model is fully learned. As long as the agent is able to find the extrinsic rewards before the intrinsic reward is just the entropy in P , the pathological noise-seeking behavior should not happen. On the other hand, the intrinsic reward in (8) should not suffer from this pathology, because in the limit, as the dynamics model converges, we should have Pφt ≈ Pφt−k . Then the intrinsic reward will vanish as desired. Next, we relate (8) to Bayesian surprise. The Bayesian surprise associated with a transition is the reduction in uncertainty over possibly dynamics models from observing it ([1],[8]):
DKL (P (φ|ht, at, st+1)||P (φ|ht)) . Here, P (φ|ht) is meant to represent a distribution over possible dynamics models parametrized by φ given the preceding history of observed states and actions ht (so ht includes st), and P (φ|ht, at, st+1) is the posterior distribution over dynamics models after observing (at, st+1). By Bayes’ rule, the dynamics prior and posterior are related to the model-based transition probabilities by
P(φ|ht, at, st+1) = P(φ|ht) P(st+1|ht, at, φ) / E_{φ∼P(·|ht)}[P(st+1|ht, at, φ)],
so the Bayesian surprise can be expressed as
E_{φ∼Pt+1}[log P(st+1|ht, at, φ)] − log E_{φ∼Pt}[P(st+1|ht, at, φ)], (9)
where Pt+1 = P (·|ht, at, st+1) is the posterior and Pt = P (·|ht) is the prior. In this form, the resemblance between (9) and (8) is clarified. Although the update from φt−k to φt is not Bayesian— and is performed in batch, instead of per transition sample—we can imagine (8) might contain similar information to (9).
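For completeness, the algebra behind (9) can be spelled out in one step (our derivation, using the same quantities defined above): substituting Bayes' rule into the KL-divergence and cancelling the prior terms gives

\begin{align*}
D_{KL}(P_{t+1}\,\|\,P_t)
&= \mathbb{E}_{\phi\sim P_{t+1}}\!\left[\log \frac{P_t(\phi)\, P(s_{t+1}|h_t,a_t,\phi)}{P_t(\phi)\, \mathbb{E}_{\phi'\sim P_t}[P(s_{t+1}|h_t,a_t,\phi')]}\right] \\
&= \mathbb{E}_{\phi\sim P_{t+1}}\!\left[\log P(s_{t+1}|h_t,a_t,\phi)\right] - \log \mathbb{E}_{\phi\sim P_t}\!\left[P(s_{t+1}|h_t,a_t,\phi)\right].
\end{align*}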
3.2 IMPLEMENTATION DETAILS
Our implementation uses L2 regularization in the dynamics model fitting, and we impose an additional constraint to keep model iterates close in the KL-divergence sense. Denoting the average divergence as
D̄KL(Pφ′ ||Pφ) = (1/|D|) ∑_{(s,a)∈D} DKL(Pφ′ ||Pφ)[s, a], (10)
our dynamics model update is
φi+1 = arg min_φ −(1/|D|) ∑_{(s,a,s′)∈D} log Pφ(s′|s, a) + α‖φ‖₂²  :  D̄KL(Pφ||Pφi) ≤ κ. (11)
The constraint value κ is a hyper-parameter of the algorithm. We solve this optimization problem approximately using a single second-order step with a line search, as described by [20]; full details are given in supplementary material. D is a FIFO replay memory, and at each iteration, instead of using the entirety of D for the update step we sub-sample a batch d ⊂ D. Also, similarly to [7], we adjust the bonus coefficient η at each iteration, to keep the average bonus magnitude upper-bounded (and usually fixed). Let η0 denote the desired average bonus, and r+(s, a, s′) denote the intrinsic reward; then, at each iteration, we set
η = η0 / max(1, (1/|B|) |∑_{(s,a,s′)∈B} r+(s, a, s′)|),
where B is the batch of data used for the policy update step. This normalization improves the stability of the algorithm by keeping the scale of the bonuses fixed with respect to the scale of the extrinsic rewards. Also, in environments where the agent can die, we avoid the possibility of the intrinsic rewards becoming a living cost by translating all bonuses so that the mean is nonnegative. The basic outline of the algorithm is given as Algorithm 1. In all experiments, we use fully-factored Gaussian distributions for the dynamics models, where the means and variances are the outputs of neural networks.
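A minimal sketch (ours, not the paper's code) of the bonus normalization and the nonnegative shift described above; `eta0` and the batch of intrinsic rewards are assumed to come from the surrounding training loop.

```python
import numpy as np

def normalize_eta(eta0, intrinsic_rewards):
    # keep eta * (average bonus magnitude) bounded by eta0
    avg_magnitude = np.abs(np.sum(intrinsic_rewards)) / len(intrinsic_rewards)
    return eta0 / max(1.0, avg_magnitude)

def shift_nonnegative(intrinsic_rewards):
    # in environments where the agent can die, translate bonuses so their mean is nonnegative
    r = np.asarray(intrinsic_rewards, dtype=float)
    return r - min(0.0, r.mean())
```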
Algorithm 1 Reinforcement Learning with Surprise Incentive
Input: Initial policy π0, dynamics model Pφ0
repeat
collect rollouts on current policy πi
add rollout (s, a, s′) tuples to replay memory D
compute reshaped rewards using (5) or (8) with dynamics model Pφi
normalize η by the average intrinsic reward of the current batch of data
update policy to πi+1 using any RL algorithm with the reshaped rewards
update the dynamics model to Pφi+1 according to (11)
until training is completed
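A simplified sketch (ours) of the dynamics-model pieces of Algorithm 1, assuming PyTorch (the paper does not specify a framework): a fully-factored Gaussian model trained with negative log-likelihood plus L2 regularization. The Adam update below is a simplification; the paper instead takes a single KL-constrained second-order step on (11), as detailed in Appendix A.

```python
import torch
import torch.nn as nn

class GaussianDynamics(nn.Module):
    """Fully-factored Gaussian dynamics model P_phi(s'|s, a)."""
    def __init__(self, s_dim, a_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(s_dim + a_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mu = nn.Linear(hidden, s_dim)
        self.log_std = nn.Linear(hidden, s_dim)

    def forward(self, s, a):
        h = self.body(torch.cat([s, a], dim=-1))
        return self.mu(h), self.log_std(h).exp()

def dynamics_loss(model, s, a, s_next, alpha=1.0):
    # negative log-likelihood plus L2 regularization, as in the objective of (11)
    mu, std = model(s, a)
    nll = -torch.distributions.Normal(mu, std).log_prob(s_next).sum(dim=-1).mean()
    l2 = sum((p ** 2).sum() for p in model.parameters())
    return nll + alpha * l2

# one simplified update on a sub-sampled batch d of the replay memory D (dummy data here)
model = GaussianDynamics(s_dim=4, a_dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
s_b, a_b, s_next_b = torch.randn(32, 4), torch.randn(32, 1), torch.randn(32, 4)
loss = dynamics_loss(model, s_b, a_b, s_next_b, alpha=1.0)
opt.zero_grad()
loss.backward()
opt.step()
```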
4 EXPERIMENTS
We evaluate our proposed surprise incentives on a wide range of benchmarks that are challenging for naive exploration methods, including continuous control and discrete control tasks. Our continuous control tasks include the slate of sparse reward tasks introduced by Houthooft et al. [7]: sparse MountainCar, sparse CartPoleSwingup, and sparse HalfCheetah, as well as a new sparse reward task that we introduce here: sparse Swimmer. (We refer to these environments with the prefix ‘sparse’ to differentiate them from other versions which appear in the literature, where agents receive non-sparse reward signals.) Additionally, we evaluate performance on a highly-challenging hierarchical sparse reward task introduced by Duan et al [5], SwimmerGather. The discrete action tasks are several games from the Atari RAM domain of the OpenAI Gym [4]: Pong, BankHeist, Freeway, and Venture.
Environments with deterministic and stochastic dynamics are represented in our benchmarks: the continuous control domains have deterministic dynamics, while the Gym Atari RAM games have stochastic dynamics. (In the Atari games, actions are repeated for a random number of frames.)
We use Trust Region Policy Optimization (TRPO) [20], a state-of-the-art policy gradient method, as our base reinforcement learning algorithm throughout our experiments, and we use the rllab implementations of TRPO and the continuous control tasks [5]. Full details for the experimental set-up are included in the appendix.
On all tasks, we compare against TRPO without intrinsic rewards, which we refer to as using naive exploration (in contrast to intrinsically motivated exploration). For the continuous control tasks, we also compare against intrinsic motivation using the L2 model prediction error,
r+(s, a, s ′) = ‖s′ − µφ(s, a)‖2, (12)
where µφ is the mean of the learned Gaussian distribution Pφ. The model prediction error was investigated as intrinsic motivation for deep reinforcement learning by Stadie et al [22], although they used a different method for learning the model µφ. This comparison helps us verify whether or not our proposed form of surprise, as a KL-divergence from the true dynamics model, is useful. Additionally, we compare our performance against the performance reported by Houthooft et al. [7] for Variational Information Maximizing Exploration (VIME), a method where the intrinsic reward associated with a transition approximates its Bayesian surprise using variational methods. Currently, VIME has achieved state-of-the-art results on intrinsic motivation for continuous control.
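For reference, the L2 prediction-error baseline in (12) reduces to a one-liner (illustrative sketch only); `mu` is the mean output of the learned Gaussian model for (s, a).

```python
import numpy as np

def l2_prediction_error_bonus(s_next, mu):
    # intrinsic reward (12): Euclidean error of the model's mean prediction
    return float(np.linalg.norm(s_next - mu))
```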
As a final check for the continuous control tasks, we benchmark the tasks themselves, by measuring the performance of the surprisal bonus without any dynamics learning: r+(s, a, s′) = − logPφ0(s′|s, a), where φ0 are the original random parameters of Pφ. This allows us to verify whether our benchmark tasks actually require surprise to solve at all, or if random exploration strategies successfully solve them.
4.1 CONTINUOUS CONTROL RESULTS
Median performance curves are shown in Figure 1 with interquartile ranges shown in shaded areas. Note that TRPO without intrinsic motivation failed on all tasks: the median score and upper quartile range for naive exploration were zero everywhere. Also note that TRPO with random exploration bonuses failed on most tasks, as shown separately in Figure 2. We found that surprise was not needed to solve MountainCar, but was necessary to perform well on the other tasks.
The surprisal bonus was especially robust across tasks, achieving good results in all domains and substantially exceeding the other baselines on the more challenging ones. The learning progress bonus for k = 1 was successful on CartpoleSwingup and HalfCheetah but faltered on the others. Its weak performance in MountainCar was due to premature convergence of the dynamics model, which resulted in the agent receiving intrinsic rewards that were identically zero. (Given the simplicity of the environment, it is not surprising that the dynamics model converged so quickly.) In Swimmer, however, it seems that the learning progress bonuses did not inspire sufficient exploration. Because the Swimmer environment is effectively a stepping stone to the harder SwimmerGather task, where the agent has to learn a motion primitive and collect target pellets, we evaluated on SwimmerGather only the intrinsic rewards that had been successful on Swimmer.
Both surprisal and learning progress (with k = 1) exceeded the reported performance of VIME on HalfCheetah by learning to solve the task more quickly. On CartpoleSwingup, however, both were more susceptible to getting stuck in locally optimal policies, resulting in lower median scores than VIME. Surprisal performed comparably to VIME on SwimmerGather, the hardest task in the slate—in the sense that after 1000 iterations, they both reached approximately the same median score—although with greater variance than VIME.
Our results suggest that surprisal is a viable alternative to VIME in terms of performance, and is highly favorable in terms of computational cost. In VIME, a backwards pass through the dynamics model must be computed for every transition tuple separately to compute the intrinsic rewards, whereas our surprisal bonus only requires forward passes through the dynamics model for intrinsic
reward computation. (Limitations of current deep learning tool kits make it difficult to efficiently compute separate backwards passes, whereas almost all of them support highly parallel forward computations.) Furthermore, our dynamics model is substantially simpler than the Bayesian neural network dynamics model of VIME. To illustrate this point, in Figure 3 we show the results of a speed comparison making use of the open-source VIME code [6], with the settings described in the VIME paper. In our speed test, our bonus had a per-iteration speedup of a factor of 3 over VIME.2 We give a full analysis of the potential speedup in Appendix C.
4.2 ATARI RAM DOMAIN RESULTS
Median performance curves are shown in Figure 4, with tasks arranged from (a) to (d) roughly in order of increasing difficulty.
In Pong, naive exploration naturally succeeds, so we are not surprised to see that intrinsic motivation does not improve performance. However, this serves as a sanity check to verify that our intrinsic rewards do not degrade performance. (As an aside, we note that the performance here falls short of the standard score of 20 for this domain because we truncate play at 5000 timesteps.)
In BankHeist, we find that intrinsic motivation accelerates the learning significantly. The agents with surprisal incentives reached high levels of performance (scores > 1000) 10% sooner than naive exploration, while agents with learning progress incentives reached high levels almost 20% sooner.
In Freeway, the median performance for TRPO without intrinsic motivation was adequate, but the lower quartile range was quite poor—only 6 out of 10 runs ever found rewards. With the learning progress incentives, 8 out of 10 runs found rewards; with the surprisal incentive, all 10 did. Freeway is a game with very sparse rewards, where the agent effectively has to cross a long hallway before it can score a point, so naive exploration tends to exhibit random walk behavior and only rarely reaches the reward state. The intrinsic motivation helps the agent explore more purposefully.
2We compute this by comparing the marginal time cost incurred just by the bonus in each case: that is, if Tvime, Tsurprisal, and Tnobonus denote the times to 15 iterations, we obtain the speedup as
(Tvime − Tnobonus) / (Tsurprisal − Tnobonus).
In Venture, we obtain our strongest results in the Atari domain. Venture is extremely difficult because the agent has to navigate a large map to find very sparse rewards, and the agent can be killed by enemies interspersed throughout. We found that our intrinsic rewards were able to substantially improve performance over naive exploration in this challenging environment. Here, the best performance was again obtained by the surprisal incentive, which usually inspired the agent to reach scores greater than 500.
4.3 COMPARING INCENTIVES
Among our proposed incentives, we found that surprisal worked the best overall, achieving the most consistent performance across tasks. The learning progress-based incentives worked well on some domains, but generally not as well as surprisal. Interestingly, learning progress with k = 10 performed much worse on the continuous control tasks than with k = 1, but we observed virtually no difference in their performance on the Atari games; it is unclear why this should be the case.
Surprisal strongly outperformed the L2 error based incentive on the harder continuous control tasks, learning to solve them more quickly and without forgetting. Because we used fully-factored Gaussians for all of our dynamics models, the surprisal had the form
− log Pφ(s′|s, a) = ∑_{i=1}^{n} [ (s′i − µφ,i(s, a))² / (2σ²φ,i(s, a)) + log σφ,i(s, a) ] + (n/2) log 2π,
which essentially includes the squared L2 error as a sub-expression. The relative difference in performance suggests that the variance terms confer additional useful information about the novelty of a state-action pair.
5 RELATED WORK
Substantial theoretical work has been done on optimal exploration in finite MDPs, resulting in algorithms such as E3 [10], R-max [3], and UCRL [9], which scale polynomially with MDP size. However, these works do not permit obvious generalizations to MDPs with continuous state and action spaces. C-PACE [18] provides a theoretical foundation for PAC-optimal exploration in MDPs with continuous state spaces, but it requires a metric on state spaces. Lopes et al. [11] investigated exploration driven by learning progress and proved theoretical guarantees for their approach in the finite MDP case, but they did not address the question of scaling their approach to continuous or high-dimensional MDPs. Also, although they formulated learning progress in the same way as (8), they formed intrinsic rewards differently. Conceptually and mathematically, our work is closest to prior work on curiosity and surprise [8, 19, 23, 24], although these works focus mainly on small finite MDPs.
Recently, several intrinsic motivation strategies that deal specifically with deep reinforcement learning have been proposed. Stadie et al. [22] learn deterministic dynamics models by minimizing Euclidean loss—whereas in our work, we learn stochastic dynamics with cross entropy loss—and use L2 prediction errors for intrinsic motivation. Houthooft et al. [7] train Bayesian neural networks to approximate posterior distributions over dynamics models given observed data, by maximizing a variational lower bound; they then use second-order approximations of the Bayesian surprise as intrinsic motivation. Bellemare et al. [2] derived pseudo-counts from CTS density models over states and used those to form intrinsic rewards, notably resulting in dramatic performance improvement on Montezuma’s Revenge, one of the hardest games in the Atari domain. Mohamed and Rezende [14] developed a scalable method of approximating empowerment, the mutual information between an agent’s actions and the future state of the environment, using variational methods. Oh et al. [16] estimated state visit frequency using Gaussian kernels to compare against a replay memory, and used these estimates for directed exploration.
6 CONCLUSIONS
In this work, we formulated surprise for intrinsic motivation as the KL-divergence of the true transition probabilities from learned model probabilities, and derived two approximations—surprisal and k-step
learning progress—that are scalable, computationally inexpensive, and suitable for application to high-dimensional and continuous control tasks. We showed that empirically, motivation by surprisal and 1-step learning progress resulted in efficient exploration on several hard deep reinforcement learning benchmarks. In particular, we found that surprisal was a robust and effective intrinsic motivator, outperforming other heuristics on a wide range of tasks, and competitive with the current state-of-the-art for intrinsic motivation in continuous control.
ACKNOWLEDGEMENTS
We thank Rein Houthooft for interesting discussions and for sharing data from the original VIME experiments. We also thank Rocky Duan, Carlos Florensa, Vicenc Rubies-Royo, Dexter Scobee, and Eric Mazumdar for insightful discussions and reviews of the preliminary manuscript.
This work is supported by TRUST (Team for Research in Ubiquitous Secure Technology) which receives support from NSF (award number CCF-0424422).
A SINGLE STEP SECOND-ORDER OPTIMIZATION
In our experiments, we approximately solve several optimization problems by using a single second-order step with a line search. This section will describe the exact methodology, which was originally given by Schulman et al. [20].
We consider the optimization problem
p∗ = max θ L(θ) : D(θ) ≤ δ, (13)
where θ ∈ Rn, and for some θold we have D(θold) = 0, ∇θD(θold) = 0, and ∇²θD(θold) ≻ 0; also, ∀θ, D(θ) ≥ 0. We suppose that δ is small, so the optimal point will be close to θold. We also suppose that the curvature of the constraint is much greater than the curvature of the objective. As a result, we feel justified in approximating the objective to linear order and the constraint to quadratic order:
L(θ) ≈ L(θold) + gᵀ(θ − θold),  where g := ∇θL(θold),
D(θ) ≈ (1/2)(θ − θold)ᵀA(θ − θold),  where A := ∇²θD(θold).
We now consider the approximate optimization problem,
p∗ ≈ max_θ gᵀ(θ − θold) : (1/2)(θ − θold)ᵀA(θ − θold) ≤ δ.
This optimization problem is convex as long as A ≻ 0, which is an assumption that we make. (If this assumption seems to be empirically invalid, then we repair the issue by using the substitution A → A + εI, where I is the identity matrix, and ε > 0 is a small constant chosen so that we usually have A + εI ≻ 0.) This problem can be solved analytically by applying methods of duality, and its optimal point is
θ∗ = θold + √(2δ / (gᵀA⁻¹g)) A⁻¹g. (14)
It is possible that the parameter update step given by (14) may not exactly solve the original optimization problem (13)—in fact, it may not even satisfy the constraint—so we perform a line search between θold and θ∗. Our update with the line search included is given by
θ = θold + s^k √(2δ / (gᵀA⁻¹g)) A⁻¹g, (15)
where s ∈ (0, 1) is a backtracking coefficient, and k is the smallest integer for which L(θ) ≥ L(θold) and D(θ) ≤ δ. We select k by checking each of k = 1, 2, ...,K, where K is the maximum number of backtracks. If there is no value of k in that range which satisfies the conditions, no update is performed.
Because the optimization problems we solve with this method tend to involve thousands of parameters, inverting A is prohibitively computationally expensive. Thus in the implementation of this algorithm that we use, the search direction x = A−1g is found by using the conjugate gradient method to solve Ax = g; this avoids the need to invert A.
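A compact sketch (ours) of this procedure: conjugate gradient produces x ≈ A⁻¹g from matrix-vector products alone, and the backtracking loop implements (15); `L`, `D`, and `Avp` (the product v ↦ Av) are assumed callables supplied by the surrounding optimizer.

```python
import numpy as np

def conjugate_gradient(Avp, b, iters=10, tol=1e-10):
    # solve A x = b using only matrix-vector products A v
    x = np.zeros_like(b)
    r = b.copy()                  # residual b - A x, with x = 0
    p = r.copy()
    rs_old = r.dot(r)
    for _ in range(iters):
        Ap = Avp(p)
        alpha = rs_old / p.dot(Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r.dot(r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def constrained_step(theta_old, g, Avp, L, D, delta, s=0.8, max_backtracks=10):
    x = conjugate_gradient(Avp, g)                       # x ≈ A^{-1} g
    full_step = np.sqrt(2.0 * delta / g.dot(x)) * x      # step direction from (14)
    for k in range(1, max_backtracks + 1):               # backtracking line search of (15)
        theta = theta_old + (s ** k) * full_step
        if L(theta) >= L(theta_old) and D(theta) <= delta:
            return theta
    return theta_old                                     # no update if no k satisfies the conditions
```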
When A and g are sample averages meant to stand in for expectations, we employ an additional trick to reduce the total number of computations necessary to solve Ax = g. The computation of A is more expensive than g, and so we use a smaller fraction of the population to estimate it quickly. Concretely, suppose that the original optimization problem’s objective is E_{z∼P}[L(θ, z)], and the constraint is E_{z∼P}[D(θ, z)] ≤ δ, where z is some random variable and P is its distribution; furthermore, suppose that we have a dataset of samples D = {zi}, i = 1, ..., N, drawn on P, and we form an approximate optimization problem using these samples. Defining g(z) := ∇θL(θold, z) and A(z) := ∇²θD(θold, z), we would need to solve
( (1/|D|) ∑_{z∈D} A(z) ) x = (1/|D|) ∑_{z∈D} g(z)
to obtain the search direction x. However, because the computation of the average Hessian is expensive, we sub-sample a batch b ⊂ D to form it. As long as b is a large enough set, then the approximation
(1/|b|) ∑_{z∈b} A(z) ≈ (1/|D|) ∑_{z∈D} A(z) ≈ E_{z∼P}[A(z)]
is good, and the search direction we obtain by solving
( (1/|b|) ∑_{z∈b} A(z) ) x = (1/|D|) ∑_{z∈D} g(z)
is reasonable. The sub-sample ratio |b|/|D| is a hyperparameter of the algorithm.
B EXPERIMENT DETAILS
B.1 ENVIRONMENTS
The environments have the following state and action spaces: for the sparse MountainCar environment, S ⊆ R2, A ⊆ R1; for the sparse CartpoleSwingup task, S ⊆ R4, A ⊆ R1; for the sparse HalfCheetah
task, S ⊂ R20, A ⊆ R6; for the sparse Swimmer task, S ⊆ R13, A ⊆ R2; for the SwimmerGather task, S ⊆ R33, A ⊆ R2; for the Atari RAM domain, S ⊆ R128, A ⊆ {1, ..., 18}. For the sparse MountainCar task, the agent receives a reward of 1 only when it escapes the valley. For the sparse CartpoleSwingup task, the agent receives a reward of 1 only when cos(β) > 0.8, with β the pole angle. For the sparse HalfCheetah task, the agent receives a reward of 1 when xbody ≥ 5. For the sparse Swimmer task, the agent receives a reward of 1 + |vbody| when |xbody| ≥ 2. Atari RAM states, by default, take on integer values from 0 to 255. We use a simple preprocessing step to map them onto values in (−1/3, 1/3). Let x denote the raw RAM state, and s the preprocessed RAM state:
s = (1/3)(x/128 − 1).
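A one-line sketch of this preprocessing step (ours):

```python
def preprocess_ram(x):
    # map raw Atari RAM byte values to roughly (-1/3, 1/3)
    return (x / 128.0 - 1.0) / 3.0
```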
B.2 POLICY AND VALUE FUNCTIONS
For all continuous control tasks we used fully-factored Gaussian policies, where the means of the action distributions were the outputs of neural networks, and the variances were separate trainable parameters. For the sparse MountainCar and sparse CartpoleSwingup tasks, the policy mean networks had a single hidden layer of 32 units. For sparse HalfCheetah, sparse Swimmer, and SwimmerGather, the policy mean networks were of size (64, 32). For the Atari RAM tasks, we used categorical distributions over actions, produced by neural networks of size (64, 32).
The value functions used for the sparse MountainCar and sparse CartpoleSwingup tasks were neural networks with a single hidden layer of 32 units. For sparse HalfCheetah, sparse Swimmer, and SwimmerGather, time-varying linear value functions were used, as described by Duan et al. [5]. For the Atari RAM tasks, the value functions were neural networks of size (64, 32). The neural network value functions were learned via single second-order step optimization; the linear baselines were obtained by least-squares fit at each iteration.
All neural networks were feed-forward, fully-connected networks with tanh activation units.
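As an illustration only (the paper does not specify a deep learning framework), a Gaussian MLP policy of this kind might be written as follows in PyTorch, with state-independent log-standard-deviations as separate trainable parameters; the chosen sizes match the sparse HalfCheetah task.

```python
import torch
import torch.nn as nn

class GaussianMLPPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=(64, 32)):
        super().__init__()
        layers, last = [], obs_dim
        for h in hidden:
            layers += [nn.Linear(last, h), nn.Tanh()]
            last = h
        layers.append(nn.Linear(last, act_dim))
        self.mean_net = nn.Sequential(*layers)
        self.log_std = nn.Parameter(torch.zeros(act_dim))  # separate trainable variances

    def forward(self, obs):
        mean = self.mean_net(obs)
        return torch.distributions.Normal(mean, self.log_std.exp())

policy = GaussianMLPPolicy(obs_dim=20, act_dim=6)
dist = policy(torch.randn(1, 20))
action = dist.sample()
```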
B.3 TRPO HYPERPARAMETERS
For all tasks, the MDP discount factor γ was fixed to 0.995, and generalized advantage estimators (GAE) [21] were used, with the GAE λ parameter fixed to 0.95.
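For concreteness, a standard implementation of these generalized advantage estimators (our sketch, following [21]) with the stated γ and λ:

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.995, lam=0.95):
    # values has length T+1 (bootstrap value appended); returns advantages of length T
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv
```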
In the table below, we show several other TRPO hyperparameters. Batch size refers to steps of experience collected at each iteration. The sub-sample factor is for the second-order optimization step, as detailed in Appendix A.
B.4 EXPLORATION HYPERPARAMETERS
For all tasks, fully-factored Gaussian distributions were used as dynamics models, where the means and variances of the distributions were the outputs of neural networks.
For the sparse MountainCar and sparse CartpoleSwingup tasks, the means and variances were parametrized by single hidden layer neural networks with 32 units. For all other tasks, the means and variances were parametrized by neural networks with two hidden layers of size 64 units each. All networks used tanh activation functions.
For all continuous control tasks except SwimmerGather, we used replay memories of size 5,000,000, and a KL-divergence step size of κ = 0.001. For SwimmerGather, the replay memory was the same size, but we set the KL-divergence step size to κ = 0.005. For the Atari RAM domain tasks, we used replay memories of size 1,000,000, and a KL-divergence step size of κ = 0.01.
For all tasks except SwimmerGather and Venture, 5000 time steps of experience were sampled from the replay memory at each iteration of dynamics model learning to take a stochastic step on (11), and a sub-sample factor of 1 was used in the second-order step optimizer. For SwimmerGather and Venture, 10,000 time steps of experience were sampled at each iteration, and a sub-sample factor of 0.5 was used in the optimizer.
For all continuous control tasks, the L2 penalty coefficient was set to α = 1. For the Atari RAM tasks except for Venture, it was set to α = 0.01. For Venture, it was set to α = 0.1.
For all continuous control tasks except SwimmerGather, η0 = 0.001. For SwimmerGather, η0 = 0.0001. For the Atari RAM tasks, η0 = 0.005.
C ANALYSIS OF SPEEDUP COMPARED TO VIME
In this section, we provide an analysis of the time cost incurred by using VIME or our bonuses, and derive the potential magnitude of speedup attained by our bonuses versus VIME.
At each iteration, bonuses based on learned dynamics models incur two primary costs:
• the time cost of fitting the dynamics model,
• and the time cost of computing the rewards.
We denote the dynamics fitting costs for VIME and our methods as T^fit_vime and T^fit_ours. Although the Bayesian neural network dynamics model for VIME is more complex than our model, the fit times can work out to be similar depending on the choice of fitting algorithm. In our speed test, the fit times were nearly equivalent, although the two methods used different fitting algorithms.
For the time cost of computing rewards, we first introduce the following quantities:
• n: the number of CPU threads available,
• tf: time for a forward pass through the model,
• tb: time for a backward pass through the model,
• N: batch size (number of samples per iteration),
• k: the number of forward passes that can be performed simultaneously.
For our method, the time cost of computing rewards is
T^rew_ours = N tf / (k n).
For VIME, things are more complex. Each reward requires the computation of a gradient through its model, which necessitates a forward and a backward pass. Because gradient calculations cannot be efficiently parallelized by any deep learning toolkits currently available3, each (s, a, s′) tuple requires its own forward/backward pass. As a result, the time cost of computing rewards for VIME is:
T^rew_vime = N (tf + tb) / n.
The speedup of our method over VIME is therefore
( T^fit_vime + N (tf + tb) / n ) / ( T^fit_ours + N tf / (k n) ).
In the limit of large N, and with the approximation that tf ≈ tb, the speedup is a factor of ∼2k.
3 If this is not correct, please contact the authors so that we can issue a correction! But to the best of our knowledge, this is currently true at time of publication.
1. What is the focus of the paper in reinforcement learning?
2. What are the strengths and weaknesses of the proposed intrinsic reward method?
3. How does the reviewer assess the novelty and performance of the proposed algorithms compared to prior works like VIME?
4. Are there any concerns or suggestions regarding the implementation and numerical measurements presented in the paper?
This paper provides a surprise-based intrinsic reward method for reinforcement learning, along with two practical algorithms for estimating those rewards. The ideas are similar to previous work in intrinsic motivation (including VIME and other work in intrinsic motivation).
As a positive, the methods are simple to implement, and provide benefits on a number of tasks.
However, they are almost always outmatched by VIME, and not one of their proposed methods is consistently the best (perhaps the most consistent is the surprisal bonus, which is unfortunately not asymptotically equal to the true reward). The authors claim a massive speed-up, but the numerical measurements show that VIME is slower to initialize but not significantly slower per iteration otherwise (perhaps a big-O analysis would clarify the claims).
Overall it's a decent, simple technique, perhaps slightly incremental on previous state of the art.
ICLR
Surprise-Based Intrinsic Motivation for Deep Reinforcement Learning
Abstract
Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as ε-greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent’s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.
1 INTRODUCTION
A reinforcement learning agent uses experiences obtained from interacting with an unknown environment to learn behavior that maximizes a reward signal. The optimality of the learned behavior is strongly dependent on how the agent approaches the exploration/exploitation trade-off in that environment. If it explores poorly or too little, it may never find rewards from which to learn, and its behavior will always remain suboptimal; if it does find rewards but exploits them too intensely, it may wind up prematurely converging to suboptimal behaviors, and fail to discover more rewarding opportunities. Although substantial theoretical work has been done on optimal exploration strategies for environments with finite state and action spaces, we are here concerned with problems that have continuous state and/or action spaces, where algorithms with theoretical guarantees admit no obvious generalization or are prohibitively impractical to implement.
Simple heuristic exploration methods such as ε-greedy action selection and Gaussian control noise have been successful on a wide range of tasks, but are inadequate when rewards are especially sparse. For example, the Deep Q-Network approach of Mnih et al. [13] used ε-greedy exploration in training deep neural networks to play Atari games directly from raw pixels. On many games, the algorithm resulted in superhuman play; however, on games like Montezuma’s Revenge, where rewards are extremely sparse, DQN (and its variants [25], [26], [15], [12]) with ε-greedy exploration failed to achieve scores even at the level of a novice human. Similarly, in benchmarking deep reinforcement learning for continuous control, Duan et al. [5] found that policy optimization algorithms that explored by acting according to the current stochastic policy, including REINFORCE and Trust Region Policy Optimization (TRPO), could succeed across a diverse slate of simulated robotics control tasks with well-defined, non-sparse reward signals (like rewards proportional to the forward velocity of the robot). Yet, when tested in environments with sparse rewards—where the agent would only be able to attain rewards after first figuring out complex motion primitives without reinforcement—every algorithm failed to attain scores better than random agents. The failure modes in all of these cases pertained to the nature of the exploration: the agents encountered reward signals so infrequently that they were never able to learn reward-seeking behavior.
One approach to encourage better exploration is via intrinsic motivation, where an agent has a task-independent, often information-theoretic intrinsic reward function which it seeks to maximize in addition to the reward from the environment. Examples of intrinsic motivation include empowerment, where the agent enjoys the level of control it has over its future; surprise, where the agent is excited to see outcomes that run contrary to its understanding of the world; and novelty, where the agent is excited to see new states (which is tightly connected to surprise, as shown in [2]). For in-depth reviews of the different types of intrinsic motivation, we direct the reader to [1] and [17].
Recently, several applications of intrinsic motivation to the deep reinforcement learning setting (such as [2], [7], [22]) have found promising success. In this work, we build on that success by exploring scalable measures of surprise for intrinsic motivation in deep reinforcement learning. We formulate surprise as the KL-divergence of the true transition probability distribution from a transition model which is learned concurrently with the policy, and consider two approximations to this divergence which are easy to compute in practice. One of these approximations results in using the surprisal of a transition as an intrinsic reward; the other results in using a measure of learning progress which is closer to a Bayesian concept of surprise. Our contributions are as follows:
1. we investigate surprisal and learning progress as intrinsic rewards across a wide range of environments in the deep reinforcement learning setting, and demonstrate empirically that the incentives (especially surprisal) result in efficient exploration,
2. we evaluate the difficulty of the slate of sparse reward continuous control tasks introduced by Houthooft et al. [7] to benchmark exploration incentives, and introduce a new task to complement the slate,
3. and we present an efficient method for learning the dynamics model (transition probabilities) concurrently with a policy.
We distinguish our work from prior work in a number of implementation details: unlike Bellemare et al. [2], we learn a transition model as opposed to a state-action occupancy density; unlike Stadie et al. [22], our formulation naturally encompasses environments with stochastic dynamics; unlike Houthooft et al. [7], we avoid the overhead of maintaining a distribution over possible dynamics models, and learn a single deep dynamics model.
In our empirical evaluations, we compare the performance of our proposed intrinsic rewards with other heuristic intrinsic reward schemes and to recent results from the literature. In particular, we compare to Variational Information Maximizing Exploration (VIME) [7], a method which approximately maximizes Bayesian surprise and currently achieves state-of-the-art performance on continuous control with sparse rewards. We show that our incentives can perform on the level of VIME at a lower computational cost.
2 PRELIMINARIES
We begin by introducing notation which we will use throughout the paper. A Markov decision process (MDP) is a tuple, (S,A,R, P, µ), where S is the set of states, A is the set of actions, R : S × A × S → R is the reward function, P : S × A × S → [0, 1] is the transition probability function (where P (s′|s, a) is the probability of transitioning to state s′ given that the previous state was s and the agent took action a in s), and µ : S → [0, 1] is the starting state distribution. A policy π : S × A → [0, 1] is a distribution over actions per state, with π(a|s) the probability of selecting a in state s. We aim to select a policy π which maximizes a performance measure, L(π), which usually takes the form of expected finite-horizon total return (sum of rewards in a fixed time period), or expected infinite-horizon discounted total return (discounted sum of all rewards forever). In this paper, we use the finite-horizon total return formulation.
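As a small illustration of the performance measure used here (our sketch), L(π) can be estimated by averaging finite-horizon total returns over trajectories sampled from π:

```python
import numpy as np

def estimate_performance(trajectory_rewards):
    # trajectory_rewards: list of per-trajectory reward sequences sampled from pi
    return float(np.mean([np.sum(r) for r in trajectory_rewards]))
```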
3 SURPRISE INCENTIVES
To train an agent with surprise-based exploration, we alternate between making an update step to a dynamics model (an approximator of the MDP’s transition probability function), and making a policy update step that maximizes a trade-off between policy performance and a surprise measure.
The dynamics model step makes progress on the optimization problem
min_φ −(1/|D|) ∑_{(s,a,s′)∈D} log Pφ(s′|s, a) + α f(φ), (1)
where D is a dataset of transition tuples from the environment, Pφ is the model we are learning, f is a regularization function, and α > 0 is a regularization trade-off coefficient. The policy update step makes progress on an approximation to the optimization problem
max_π L(π) + η E_{s,a∼π}[DKL(P ||Pφ)[s, a]], (2)
where η > 0 is an explore-exploit trade-off coefficient. The exploration incentive in (2), which we select to be the on-policy average KL-divergence of Pφ from P , is intended to capture the agent’s surprise about its experience. The dynamics model Pφ should only be close to P on regions of the transition state space that the agent has already visited (because those transitions will appear in D and thus the model will be fit to them), and as a result, the KL divergence of Pφ and P will be higher in unfamiliar places. Essentially, this exploits the generalization in the model to encourage the agent to go where it has not gone before. The surprise incentive in (2) gives the net effect of performing a reward shaping of the form
r′(s, a, s′) = r(s, a, s′) + η (logP (s′|s, a)− logPφ(s′|s, a)) , (3) where r(s, a, s′) is the original reward and r′(s, a, s′) is the transformed reward, so ideally we could solve (2) by applying any reinforcement learning algorithm with these reshaped rewards. In practice, we cannot directly implement this reward reshaping because P is unknown. Instead, we consider two ways of finding an approximate solution to (2).
In one method, we approximate the KL-divergence by the cross-entropy, which is reasonable when H(P) is finite (and small) and Pφ is sufficiently far from P¹; that is, denoting the cross-entropy by H(P, Pφ)[s, a] := E_{s′∼P(·|s,a)}[− log Pφ(s′|s, a)], we assume
DKL(P ||Pφ)[s, a] = H(P, Pφ)[s, a] − H(P)[s, a] ≈ H(P, Pφ)[s, a]. (4)
This approximation results in a reward shaping of the form
r′(s, a, s′) = r(s, a, s′)− η logPφ(s′|s, a); (5) here, the intrinsic reward is the surprisal of s′ given the model Pφ and the context (s, a).
In the other method, we maximize a lower bound on the objective in (2) by lower bounding the surprise term:
DKL(P ||Pφ)[s, a] = DKL(P ||Pφ′)[s, a] + E_{s′∼P}[log(Pφ′(s′|s, a) / Pφ(s′|s, a))] ≥ E_{s′∼P}[log(Pφ′(s′|s, a) / Pφ(s′|s, a))]. (6)
The bound (6) results in a reward shaping of the form
r′(s, a, s′) = r(s, a, s′) + η (log Pφ′(s′|s, a) − log Pφ(s′|s, a)), (7)
which requires a choice of φ′. From (6), we can see that the bound becomes tighter by minimizing DKL(P ||Pφ′). As a result, we choose φ′ to be the parameters of the dynamics model after k updates based on (1), and φ to be the parameters from before the updates. Thus, at iteration t, the reshaped rewards are
r′(s, a, s′) = r(s, a, s′) + η (log Pφt(s′|s, a) − log Pφt−k(s′|s, a)); (8)
here, the intrinsic reward is the k-step learning progress at (s, a, s′). It also bears a resemblance to Bayesian surprise; we expand on this similarity in the next section.
In our experiments, we investigate both the surprisal bonus (5) and the k-step learning progress bonus (8) (with varying values of k).
1On the other hand, if H(P )[s, a] is non-finite everywhere—for instance if the MDP has continuous states and deterministic transitions—then as long as it has the same sign everywhere, Es,a∼π[H(P )[s, a]] is a constant with respect to π and we can drop it from the optimization problem anyway.
3.1 DISCUSSION
Ideally, we would like the intrinsic rewards to vanish in the limit as Pφ → P, because in this case, the agent should have sufficiently explored the state space, and should primarily learn from extrinsic rewards. For the proposed intrinsic reward in (5), this is not the case, and it may result in poor performance in that limit: when Pφ = P, the agent is incentivized to seek out the states with the noisiest transitions. However, we argue that this may not be an issue in practice, because the intrinsic motivation is mostly useful long before the dynamics model is fully learned. As long as the agent is able to find the extrinsic rewards before the intrinsic reward decays to the entropy of P, the pathological noise-seeking behavior should not appear. On the other hand, the intrinsic reward in (8) should not suffer from this pathology: in the limit, as the dynamics model converges, we should have Pφt ≈ Pφt−k, and the intrinsic reward then vanishes as desired. Next, we relate (8) to Bayesian surprise. The Bayesian surprise associated with a transition is the reduction in uncertainty over possible dynamics models from observing it ([1],[8]):
DKL (P (φ|ht, at, st+1)||P (φ|ht)) . Here, P (φ|ht) is meant to represent a distribution over possible dynamics models parametrized by φ given the preceding history of observed states and actions ht (so ht includes st), and P (φ|ht, at, st+1) is the posterior distribution over dynamics models after observing (at, st+1). By Bayes’ rule, the dynamics prior and posterior are related to the model-based transition probabilities by
P(φ|ht, at, st+1) = P(φ|ht) P(st+1|ht, at, φ) / E_{φ∼P(·|ht)}[P(st+1|ht, at, φ)],
so the Bayesian surprise can be expressed as
E_{φ∼Pt+1}[log P(st+1|ht, at, φ)] − log E_{φ∼Pt}[P(st+1|ht, at, φ)], (9)
where Pt+1 = P (·|ht, at, st+1) is the posterior and Pt = P (·|ht) is the prior. In this form, the resemblance between (9) and (8) is clarified. Although the update from φt−k to φt is not Bayesian— and is performed in batch, instead of per transition sample—we can imagine (8) might contain similar information to (9).
3.2 IMPLEMENTATION DETAILS
Our implementation uses L2 regularization in the dynamics model fitting, and we impose an additional constraint to keep model iterates close in the KL-divergence sense. Denoting the average divergence as
D̄KL(Pφ′ ||Pφ) = (1/|D|) ∑_{(s,a)∈D} DKL(Pφ′ ||Pφ)[s, a], (10)
our dynamics model update is
φi+1 = arg min_φ −(1/|D|) ∑_{(s,a,s′)∈D} log Pφ(s′|s, a) + α‖φ‖₂²  :  D̄KL(Pφ||Pφi) ≤ κ. (11)
The constraint value κ is a hyper-parameter of the algorithm. We solve this optimization problem approximately using a single second-order step with a line search, as described by [20]; full details are given in supplementary material. D is a FIFO replay memory, and at each iteration, instead of using the entirety of D for the update step we sub-sample a batch d ⊂ D. Also, similarly to [7], we adjust the bonus coefficient η at each iteration, to keep the average bonus magnitude upper-bounded (and usually fixed). Let η0 denote the desired average bonus, and r+(s, a, s′) denote the intrinsic reward; then, at each iteration, we set
η = η0 / max(1, (1/|B|) |∑_{(s,a,s′)∈B} r+(s, a, s′)|),
where B is the batch of data used for the policy update step. This normalization improves the stability of the algorithm by keeping the scale of the bonuses fixed with respect to the scale of the extrinsic rewards. Also, in environments where the agent can die, we avoid the possibility of the intrinsic rewards becoming a living cost by translating all bonuses so that the mean is nonnegative. The basic outline of the algorithm is given as Algorithm 1. In all experiments, we use fully-factored Gaussian distributions for the dynamics models, where the means and variances are the outputs of neural networks.
Algorithm 1 Reinforcement Learning with Surprise Incentive
Input: Initial policy π0, dynamics model Pφ0
repeat
collect rollouts on current policy πi
add rollout (s, a, s′) tuples to replay memory D
compute reshaped rewards using (5) or (8) with dynamics model Pφi
normalize η by the average intrinsic reward of the current batch of data
update policy to πi+1 using any RL algorithm with the reshaped rewards
update the dynamics model to Pφi+1 according to (11)
until training is completed
4 EXPERIMENTS
We evaluate our proposed surprise incentives on a wide range of benchmarks that are challenging for naive exploration methods, including continuous control and discrete control tasks. Our continuous control tasks include the slate of sparse reward tasks introduced by Houthooft et al. [7]: sparse MountainCar, sparse CartPoleSwingup, and sparse HalfCheetah, as well as a new sparse reward task that we introduce here: sparse Swimmer. (We refer to these environments with the prefix ‘sparse’ to differentiate them from other versions which appear in the literature, where agents receive non-sparse reward signals.) Additionally, we evaluate performance on a highly-challenging hierarchical sparse reward task introduced by Duan et al [5], SwimmerGather. The discrete action tasks are several games from the Atari RAM domain of the OpenAI Gym [4]: Pong, BankHeist, Freeway, and Venture.
Environments with deterministic and stochastic dynamics are represented in our benchmarks: the continuous control domains have deterministic dynamics, while the Gym Atari RAM games have stochastic dynamics. (In the Atari games, actions are repeated for a random number of frames.)
We use Trust Region Policy Optimization (TRPO) [20], a state-of-the-art policy gradient method, as our base reinforcement learning algorithm throughout our experiments, and we use the rllab implementations of TRPO and the continuous control tasks [5]. Full details for the experimental set-up are included in the appendix.
On all tasks, we compare against TRPO without intrinsic rewards, which we refer to as using naive exploration (in contrast to intrinsically motivated exploration). For the continuous control tasks, we also compare against intrinsic motivation using the L2 model prediction error,
r+(s, a, s ′) = ‖s′ − µφ(s, a)‖2, (12)
where µφ is the mean of the learned Gaussian distribution Pφ. The model prediction error was investigated as intrinsic motivation for deep reinforcement learning by Stadie et al [22], although they used a different method for learning the model µφ. This comparison helps us verify whether or not our proposed form of surprise, as a KL-divergence from the true dynamics model, is useful. Additionally, we compare our performance against the performance reported by Houthooft et al. [7] for Variational Information Maximizing Exploration (VIME), a method where the intrinsic reward associated with a transition approximates its Bayesian surprise using variational methods. Currently, VIME has achieved state-of-the-art results on intrinsic motivation for continuous control.
As a final check for the continuous control tasks, we benchmark the tasks themselves, by measuring the performance of the surprisal bonus without any dynamics learning: r+(s, a, s′) = − logPφ0(s′|s, a), where φ0 are the original random parameters of Pφ. This allows us to verify whether our benchmark tasks actually require surprise to solve at all, or if random exploration strategies successfully solve them.
4.1 CONTINUOUS CONTROL RESULTS
Median performance curves are shown in Figure 1 with interquartile ranges shown in shaded areas. Note that TRPO without intrinsic motivation failed on all tasks: the median score and upper quartile range for naive exploration were zero everywhere. Also note that TRPO with random exploration bonuses failed on most tasks, as shown separately in Figure 2. We found that surprise was not needed to solve MountainCar, but was necessary to perform well on the other tasks.
The surprisal bonus was especially robust across tasks, achieving good results in all domains and substantially exceeding the other baselines on the more challenging ones. The learning progress bonus for k = 1 was successful on CartpoleSwingup and HalfCheetah but faltered on the others. Its weak performance in MountainCar was due to premature convergence of the dynamics model, which resulted in the agent receiving intrinsic rewards that were identically zero. (Given the simplicity of the environment, it is not surprising that the dynamics model converged so quickly.) In Swimmer, however, it seems that the learning progress bonuses did not inspire sufficient exploration. Because the Swimmer environment is effectively a stepping stone to the harder SwimmerGather task, where the agent has to learn a motion primitive and collect target pellets, we evaluated on SwimmerGather only the intrinsic rewards that had been successful on Swimmer.
Both surprisal and learning progress (with k = 1) exceeded the reported performance of VIME on HalfCheetah by learning to solve the task more quickly. On CartpoleSwingup, however, both were more susceptible to getting stuck in locally optimal policies, resulting in lower median scores than VIME. Surprisal performed comparably to VIME on SwimmerGather, the hardest task in the slate—in the sense that after 1000 iterations, they both reached approximately the same median score—although with greater variance than VIME.
Our results suggest that surprisal is a viable alternative to VIME in terms of performance, and is highly favorable in terms of computational cost. In VIME, a backwards pass through the dynamics model must be computed for every transition tuple separately to compute the intrinsic rewards, whereas our surprisal bonus only requires forward passes through the dynamics model for intrinsic
reward computation. (Limitations of current deep learning tool kits make it difficult to efficiently compute separate backwards passes, whereas almost all of them support highly parallel forward computations.) Furthermore, our dynamics model is substantially simpler than the Bayesian neural network dynamics model of VIME. To illustrate this point, in Figure 3 we show the results of a speed comparison making use of the open-source VIME code [6], with the settings described in the VIME paper. In our speed test, our bonus had a per-iteration speedup of a factor of 3 over VIME.2 We give a full analysis of the potential speedup in Appendix C.
4.2 ATARI RAM DOMAIN RESULTS
Median performance curves are shown in Figure 4, with tasks arranged from (a) to (d) roughly in order of increasing difficulty.
In Pong, naive exploration naturally succeeds, so we are not surprised to see that intrinsic motivation does not improve performance. However, this serves as a sanity check to verify that our intrinsic rewards do not degrade performance. (As an aside, we note that the performance here falls short of the standard score of 20 for this domain because we truncate play at 5000 timesteps.)
In BankHeist, we find that intrinsic motivation accelerates the learning significantly. The agents with surprisal incentives reached high levels of performance (scores > 1000) 10% sooner than naive exploration, while agents with learning progress incentives reached high levels almost 20% sooner.
In Freeway, the median performance for TRPO without intrinsic motivation was adequate, but the lower quartile range was quite poor—only 6 out of 10 runs ever found rewards. With the learning progress incentives, 8 out of 10 runs found rewards; with the surprisal incentive, all 10 did. Freeway is a game with very sparse rewards, where the agent effectively has to cross a long hallway before it can score a point, so naive exploration tends to exhibit random walk behavior and only rarely reaches the reward state. The intrinsic motivation helps the agent explore more purposefully.
2We compute this by comparing the marginal time cost incurred just by the bonus in each case: that is, if Tvime, Tsurprisal, and Tnobonus denote the times to 15 iterations, we obtain the speedup as
(Tvime − Tnobonus) / (Tsurprisal − Tnobonus).
In Venture, we obtain our strongest results in the Atari domain. Venture is extremely difficult because the agent has to navigate a large map to find very sparse rewards, and the agent can be killed by enemies interspersed throughout. We found that our intrinsic rewards were able to substantially improve performance over naive exploration in this challenging environment. Here, the best performance was again obtained by the surprisal incentive, which usually inspired the agent to reach scores greater than 500.
4.3 COMPARING INCENTIVES
Among our proposed incentives, we found that surprisal worked the best overall, achieving the most consistent performance across tasks. The learning progress-based incentives worked well on some domains, but generally not as well as surprisal. Interestingly, learning progress with k = 10 performed much worse on the continuous control tasks than with k = 1, but we observed virtually no difference in their performance on the Atari games; it is unclear why this should be the case.
Surprisal strongly outperformed the L2 error based incentive on the harder continuous control tasks, learning to solve them more quickly and without forgetting. Because we used fully-factored Gaussians for all of our dynamics models, the surprisal had the form
− log Pφ(s′|s, a) = ∑_{i=1}^{n} [ (s′i − µφ,i(s, a))² / (2σ²φ,i(s, a)) + log σφ,i(s, a) ] + (n/2) log 2π,
which essentially includes the squared L2 error as a sub-expression. The relative difference in performance suggests that the variance terms confer additional useful information about the novelty of a state-action pair.
5 RELATED WORK
Substantial theoretical work has been done on optimal exploration in finite MDPs, resulting in algorithms such as E3 [10], R-max [3], and UCRL [9], which scale polynomially with MDP size. However, these works do not permit obvious generalizations to MDPs with continuous state and action spaces. C-PACE [18] provides a theoretical foundation for PAC-optimal exploration in MDPs with continuous state spaces, but it requires a metric on state spaces. Lopes et al. [11] investigated exploration driven by learning progress and proved theoretical guarantees for their approach in the finite MDP case, but they did not address the question of scaling their approach to continuous or high-dimensional MDPs. Also, although they formulated learning progress in the same way as (8), they formed intrinsic rewards differently. Conceptually and mathematically, our work is closest to prior work on curiosity and surprise [8, 19, 23, 24], although these works focus mainly on small finite MDPs.
Recently, several intrinsic motivation strategies that deal specifically with deep reinforcement learning have been proposed. Stadie et al. [22] learn deterministic dynamics models by minimizing Euclidean loss—whereas in our work, we learn stochastic dynamics with cross entropy loss—and use L2 prediction errors for intrinsic motivation. Houthooft et al. [7] train Bayesian neural networks to approximate posterior distributions over dynamics models given observed data, by maximizing a variational lower bound; they then use second-order approximations of the Bayesian surprise as intrinsic motivation. Bellemare et al. [2] derived pseudo-counts from CTS density models over states and used those to form intrinsic rewards, notably resulting in dramatic performance improvement on Montezuma’s Revenge, one of the hardest games in the Atari domain. Mohamed and Rezende [14] developed a scalable method of approximating empowerment, the mutual information between an agent’s actions and the future state of the environment, using variational methods. Oh et al. [16] estimated state visit frequency using Gaussian kernels to compare against a replay memory, and used these estimates for directed exploration.
6 CONCLUSIONS
In this work, we formulated surprise for intrinsic motivation as the KL-divergence of the true transition probabilities from learned model probabilities, and derived two approximations—surprisal and k-step
learning progress—that are scalable, computationally inexpensive, and suitable for application to high-dimensional and continuous control tasks. We showed that empirically, motivation by surprisal and 1-step learning progress resulted in efficient exploration on several hard deep reinforcement learning benchmarks. In particular, we found that surprisal was a robust and effective intrinsic motivator, outperforming other heuristics on a wide range of tasks, and competitive with the current state-of-the-art for intrinsic motivation in continuous control.
ACKNOWLEDGEMENTS
We thank Rein Houthooft for interesting discussions and for sharing data from the original VIME experiments. We also thank Rocky Duan, Carlos Florensa, Vicenc Rubies-Royo, Dexter Scobee, and Eric Mazumdar for insightful discussions and reviews of the preliminary manuscript.
This work is supported by TRUST (Team for Research in Ubiquitous Secure Technology) which receives support from NSF (award number CCF-0424422).
A SINGLE STEP SECOND-ORDER OPTIMIZATION
In our experiments, we approximately solve several optimization problems by using a single second-order step with a line search. This section will describe the exact methodology, which was originally given by Schulman et al. [20].
We consider the optimization problem
p∗ = max θ L(θ) : D(θ) ≤ δ, (13)
where θ ∈ Rn, and for some θold we have D(θold) = 0, ∇θD(θold) = 0, and ∇²θD(θold) ≻ 0; also, ∀θ, D(θ) ≥ 0. We suppose that δ is small, so the optimal point will be close to θold. We also suppose that the curvature of the constraint is much greater than the curvature of the objective. As a result, we feel justified in approximating the objective to linear order and the constraint to quadratic order:
L(θ) ≈ L(θold) + gᵀ(θ − θold),  where g := ∇θL(θold),
D(θ) ≈ (1/2)(θ − θold)ᵀA(θ − θold),  where A := ∇²θD(θold).
We now consider the approximate optimization problem,
p∗ ≈ max_θ gᵀ(θ − θold) : (1/2)(θ − θold)ᵀA(θ − θold) ≤ δ.
This optimization problem is convex as long as A ≻ 0, which is an assumption that we make. (If this assumption seems to be empirically invalid, then we repair the issue by using the substitution A → A + εI, where I is the identity matrix, and ε > 0 is a small constant chosen so that we usually have A + εI ≻ 0.) This problem can be solved analytically by applying methods of duality, and its optimal point is
θ∗ = θold + √( 2δ / (g^T A^{-1} g) ) A^{-1} g.    (14)
It is possible that the parameter update step given by (14) may not exactly solve the original optimization problem (13)—in fact, it may not even satisfy the constraint—so we perform a line search between θold and θ∗. Our update with the line search included is given by
θ = θold + s^k √( 2δ / (g^T A^{-1} g) ) A^{-1} g,    (15)
where s ∈ (0, 1) is a backtracking coefficient, and k is the smallest integer for which L(θ) ≥ L(θold) and D(θ) ≤ δ. We select k by checking each of k = 1, 2, ...,K, where K is the maximum number of backtracks. If there is no value of k in that range which satisfies the conditions, no update is performed.
Because the optimization problems we solve with this method tend to involve thousands of parameters, inverting A is prohibitively computationally expensive. Thus in the implementation of this algorithm that we use, the search direction x = A−1g is found by using the conjugate gradient method to solve Ax = g; this avoids the need to invert A.
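As a concrete illustration, the following Python sketch combines the conjugate gradient solve with the backtracking line search of (15). The callables loss, constraint, and hvp (a Hessian-vector product of the constraint at θold) are illustrative placeholders, not the original implementation.

```python
import numpy as np

def conjugate_gradient(hvp, g, iters=10, tol=1e-10):
    """Solve A x = g for x, where A is only available through
    Hessian-vector products hvp(v) = A v (avoids forming or inverting A)."""
    x = np.zeros_like(g)
    r = g.copy()          # residual g - A x (x = 0 initially)
    p = r.copy()
    r_dot = r.dot(r)
    for _ in range(iters):
        Ap = hvp(p)
        alpha = r_dot / (p.dot(Ap) + 1e-12)
        x += alpha * p
        r -= alpha * Ap
        new_r_dot = r.dot(r)
        if new_r_dot < tol:
            break
        p = r + (new_r_dot / r_dot) * p
        r_dot = new_r_dot
    return x

def second_order_step(theta_old, g, hvp, loss, constraint, delta,
                      s=0.8, max_backtracks=10):
    """Single second-order step with backtracking line search, Eqs. (14)-(15)."""
    x = conjugate_gradient(hvp, g)                       # search direction A^{-1} g
    step = np.sqrt(2.0 * delta / (g.dot(x) + 1e-12)) * x
    old_loss = loss(theta_old)
    for k in range(1, max_backtracks + 1):
        theta = theta_old + (s ** k) * step
        if loss(theta) >= old_loss and constraint(theta) <= delta:
            return theta
    return theta_old                                     # no acceptable update found
```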
When A and g are sample averages meant to stand in for expectations, we employ an additional trick to reduce the total number of computations necessary to solve Ax = g. The computation of A is more expensive than that of g, and so we use a smaller fraction of the population to estimate it quickly. Concretely, suppose that the original optimization problem’s objective is E_{z∼P}[L(θ, z)] and the constraint is E_{z∼P}[D(θ, z)] ≤ δ, where z is some random variable and P is its distribution; furthermore, suppose that we have a dataset of samples D = {z_i}_{i=1,...,N} drawn from P, and we form an approximate optimization problem using these samples. Defining g(z) := ∇θL(θold, z) and A(z) := ∇²θD(θold, z), we would need to solve

( (1/|D|) Σ_{z∈D} A(z) ) x = (1/|D|) Σ_{z∈D} g(z)

to obtain the search direction x. However, because the computation of the average Hessian is expensive, we sub-sample a batch b ⊂ D to form it. As long as b is a large enough set, the approximation

(1/|b|) Σ_{z∈b} A(z) ≈ (1/|D|) Σ_{z∈D} A(z) ≈ E_{z∼P}[A(z)]

is good, and the search direction we obtain by solving

( (1/|b|) Σ_{z∈b} A(z) ) x = (1/|D|) Σ_{z∈D} g(z)

is reasonable. The sub-sample ratio |b|/|D| is a hyperparameter of the algorithm.
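A minimal sketch of this sub-sampling trick, assuming per-sample callables grad_fn(z) and hvp_fn(z, v) (these names are ours, not the original code):

```python
import numpy as np

def subsampled_direction_inputs(data, grad_fn, hvp_fn, subsample_ratio=0.1, rng=None):
    """Return (g, hvp): g averages per-sample gradients over all of `data`,
    while hvp(v) averages per-sample Hessian-vector products over a random
    subset b with |b| = subsample_ratio * |data|."""
    rng = rng or np.random.default_rng(0)
    g = np.mean([grad_fn(z) for z in data], axis=0)          # full-batch gradient
    n_sub = max(1, int(subsample_ratio * len(data)))
    idx = rng.choice(len(data), size=n_sub, replace=False)
    batch = [data[i] for i in idx]
    def hvp(v):
        return np.mean([hvp_fn(z, v) for z in batch], axis=0)  # sub-sampled A v
    return g, hvp
```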
B EXPERIMENT DETAILS
B.1 ENVIRONMENTS
The environments have the following state and action spaces: for the sparse MountainCar environment, S ⊆ R2, A ⊆ R1; for the sparse CartpoleSwingup task, S ⊆ R4, A ⊆ R1; for the sparse HalfCheetah
task, S ⊂ R20, A ⊆ R6; for the sparse Swimmer task, S ⊆ R13, A ⊆ R2; for the SwimmerGather task, S ⊆ R33, A ⊆ R2; for the Atari RAM domain, S ⊆ R128, A ⊆ {1, ..., 18}. For the sparse MountainCar task, the agent receives a reward of 1 only when it escapes the valley. For the sparse CartpoleSwingup task, the agent receives a reward of 1 only when cos(β) > 0.8, with β the pole angle. For the sparse HalfCheetah task, the agent receives a reward of 1 when xbody ≥ 5. For the sparse Swimmer task, the agent receives a reward of 1 + |vbody| when |xbody| ≥ 2. Atari RAM states, by default, take on values from 0 to 256 in integer intervals. We use a simple preprocessing step to map them onto values in (−1/3, 1/3). Let x denote the raw RAM state, and s the preprocessed RAM state:
s = (1/3) ( x/128 − 1 ).
B.2 POLICY AND VALUE FUNCTIONS
For all continuous control tasks we used fully-factored Gaussian policies, where the means of the action distributions were the outputs of neural networks, and the variances were separate trainable parameters. For the sparse MountainCar and sparse CartpoleSwingup tasks, the policy mean networks had a single hidden layer of 32 units. For sparse HalfCheetah, sparse Swimmer, and SwimmerGather, the policy mean networks were of size (64, 32). For the Atari RAM tasks, we used categorical distributions over actions, produced by neural networks of size (64, 32).
The value functions used for the sparse MountainCar and sparse CartpoleSwingup tasks were neural networks with a single hidden layer of 32 units. For sparse HalfCheetah, sparse Swimmer, and SwimmerGather, time-varying linear value functions were used, as described by Duan et al. [5]. For the Atari RAM tasks, the value functions were neural networks of size (64, 32). The neural network value functions were learned via single second-order step optimization; the linear baselines were obtained by least-squares fit at each iteration.
All neural networks were feed-forward, fully-connected networks with tanh activation units.
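For illustration, a minimal PyTorch-style sketch of such a fully-factored Gaussian policy is given below; the (64, 32) hidden sizes follow the larger tasks, and all class and variable names are ours rather than the original code.

```python
import torch
import torch.nn as nn

class GaussianMLPPolicy(nn.Module):
    """Tanh MLP producing the mean of a factored Gaussian action distribution;
    log-stds are separate trainable parameters (state-independent)."""
    def __init__(self, obs_dim, act_dim, hidden=(64, 32)):
        super().__init__()
        layers, last = [], obs_dim
        for h in hidden:
            layers += [nn.Linear(last, h), nn.Tanh()]
            last = h
        layers += [nn.Linear(last, act_dim)]
        self.mean_net = nn.Sequential(*layers)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        mean = self.mean_net(obs)
        std = self.log_std.exp().expand_as(mean)
        return torch.distributions.Normal(mean, std)
```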
B.3 TRPO HYPERPARAMETERS
For all tasks, the MDP discount factor γ was fixed to 0.995, and generalized advantage estimators (GAE) [21] were used, with the GAE λ parameter fixed to 0.95.
In the table below, we show several other TRPO hyperparameters. Batch size refers to steps of experience collected at each iteration. The sub-sample factor is for the second-order optimization step, as detailed in Appendix A.
B.4 EXPLORATION HYPERPARAMETERS
For all tasks, fully-factored Gaussian distributions were used as dynamics models, where the means and variances of the distributions were the outputs of neural networks.
For the sparse MountainCar and sparse CartpoleSwingup tasks, the means and variances were parametrized by single hidden layer neural networks with 32 units. For all other tasks, the means and variances were parametrized by neural networks with two hidden layers of size 64 units each. All networks used tanh activation functions.
For all continuous control tasks except SwimmerGather, we used replay memories of size 5,000,000 and a KL-divergence step size of κ = 0.001. For SwimmerGather, the replay memory was the same size, but we set the KL-divergence step size to κ = 0.005. For the Atari RAM domain tasks, we used replay memories of size 1,000,000 and a KL-divergence step size of κ = 0.01.
For all tasks except SwimmerGather and Venture, 5,000 time steps of experience were sampled from the replay memory at each iteration of dynamics model learning to take a stochastic step on (11), and a sub-sample factor of 1 was used in the second-order step optimizer. For SwimmerGather and Venture, 10,000 time steps of experience were sampled at each iteration, and a sub-sample factor of 0.5 was used in the optimizer.
For all continuous control tasks, the L2 penalty coefficient was set to α = 1. For the Atari RAM tasks except for Venture, it was set to α = 0.01. For Venture, it was set to α = 0.1.
For all continuous control tasks except SwimmerGather, η0 = 0.001. For SwimmerGather, η0 = 0.0001. For the Atari RAM tasks, η0 = 0.005.
C ANALYSIS OF SPEEDUP COMPARED TO VIME
In this section, we provide an analysis of the time cost incurred by using VIME or our bonuses, and derive the potential magnitude of speedup attained by our bonuses versus VIME.
At each iteration, bonuses based on learned dynamics models incur two primary costs:
• the time cost of fitting the dynamics model, and
• the time cost of computing the rewards.
We denote the dynamics fitting costs for VIME and our methods as T^fit_vime and T^fit_ours. Although the Bayesian neural network dynamics model for VIME is more complex than our model, the fit times can work out to be similar depending on the choice of fitting algorithm. In our speed test, the fit times were nearly equivalent, but used different algorithms.
For the time cost of computing rewards, we first introduce the following quantities:
• n: the number of CPU threads available,
• t_f: the time for a forward pass through the model,
• t_b: the time for a backward pass through the model,
• N: the batch size (number of samples per iteration),
• k: the number of forward passes that can be performed simultaneously.
For our method, the time cost of computing rewards is

T^rew_ours = N t_f / (k n).
For VIME, things are more complex. Each reward requires the computation of a gradient through its model, which necessitates a forward and a backward pass. Because gradient calculations cannot be efficiently parallelized by any deep learning toolkits currently available³, each (s, a, s′) tuple requires its own forward/backward pass. As a result, the time cost of computing rewards for VIME is:
T^rew_vime = N (t_f + t_b) / n.
The speedup of our method over VIME is therefore

( T^fit_vime + N (t_f + t_b) / n ) / ( T^fit_ours + N t_f / (k n) ).
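For concreteness, this ratio can be evaluated directly; the sketch below simply plugs measured timings into the expressions above (all variable names are ours).

```python
def reward_speedup(t_fit_vime, t_fit_ours, N, t_f, t_b, n, k):
    """Per-iteration speedup of our bonus over VIME (dynamics fit + reward cost)."""
    vime_cost = t_fit_vime + N * (t_f + t_b) / n
    ours_cost = t_fit_ours + N * t_f / (k * n)
    return vime_cost / ours_cost
```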
In the limit of large N , and with the approximation that tf ≈ tb, the speedup is a factor of ∼ 2k. 3If this is not correct, please contact the authors so that we can issue a correction! But to the best of our knowledge, this is currently true, at time of publication. | 1. What is the focus of the paper regarding deep reinforcement learning?
2. What are the proposed variants derived from the auxiliary model-learning process?
3. How does the reviewer assess the novelty and potential of the proposed approaches?
4. What are the limitations of the methods, particularly in specific domains like Go?
5. How does the reviewer compare the computation time and performance of the proposed method with VIME? | Review | Review
This paper explores the topic of intrinsic motivation in the context of deep RL. It proposes a couple of variants derived from an auxiliary model-learning process (prediction error, surprise and learning progress), and shows that those can help exploration on a number of continuous control tasks (and the Atari game “venture”, maybe).
Novelty: none of the proposed types of intrinsic motivation are novel, and it’s arguable whether the application to deep RL is novel (see e.g. Kompella et al 2012).
Potential: the idea of seeking out states where a transition model is uncertain is sensible, but also limited -- I would encourage the authors to also discuss the limitations. For example in a game like Go the transition model is trivially learned, so this approach would revert to random exploration. So other forms of learning progress or surprise derived from the agent’s competence instead might be more promising in the long run? See also Srivastava et al 2012 for further thoughts.
Computation time: I find the paper’s claimed superiority over VIME to be overblown: the gain seems to stem almost exclusively from a faster initialization, but have very similar per-step cost? So given that VIME is also performing very competitively, what arguments can you advance for your own method(s)? |
ICLR | Title
Domain Invariant Adversarial Learning
Abstract
The phenomenon of adversarial examples illustrates one of the most basic vulnerabilities of deep neural networks. Among the variety of techniques introduced to surmount this inherent weakness, adversarial training has emerged as the most effective strategy to achieve robustness. Typically, this is achieved by balancing robust and natural objectives. In this work, we aim to further optimize the tradeoff between robust and standard accuracy by enforcing a domain-invariant feature representation. We present a new adversarial training method, Domain Invariant Adversarial Learning (DIAL), which learns a feature representation that is both robust and domain invariant. DIAL uses a variant of Domain Adversarial Neural Network (DANN) on the natural domain and its corresponding adversarial domain. In the case where the source domain consists of natural examples and the target domain is the adversarially perturbed examples, our method learns a feature representation constrained not to discriminate between the natural and adversarial examples, and can therefore achieve a more robust representation. Our experiments indicate that our method improves both robustness and standard accuracy, when compared to other state-of-the-art adversarial training methods.
1 INTRODUCTION
Deep learning models have achieved impressive success on a wide range of challenging tasks. However, their performance was shown to be brittle to adversarial examples: small, imperceptible perturbations in the input that drastically alter the classification (Carlini & Wagner, 2017a;b; Goodfellow et al., 2014; Kurakin et al., 2016b; Moosavi-Dezfooli et al., 2016; Szegedy et al., 2013; Tramèr et al., 2017; Dong et al., 2018; Tabacof & Valle, 2016; Xie et al., 2019b; Rony et al., 2019). Designing reliable robust models has gained significant attention in the arms race against adversarial examples. Adversarial training (Szegedy et al., 2013; Goodfellow et al., 2014; Madry et al., 2017; Zhang et al., 2019b) has been suggested as one of the most effective approaches to defend against such examples, and can be described as solving the following min-max optimization problem:
min_θ E_{(x,y)∼D} [ max_{x′ : ‖x′ − x‖_p ≤ ε} ℓ(x′, y; θ) ],
where x′ is the ε-bounded perturbation in the ℓ_p norm and ℓ is the loss function. Different unrestricted attack methods were also suggested, such as adversarial deformations, rotations, translations and more (Brown et al., 2018; Engstrom et al., 2018; Xiao et al., 2018; Alaifari et al., 2018; Gilmer et al., 2018).
The resulting min-max optimization problem can be hard to solve in general. Nevertheless, in the context of ε-bounded perturbations, the problem is often tractable in practice. The inner maximization is usually approximated by generating adversarial examples using projected gradient descent (PGD) (Kurakin et al., 2016a; Madry et al., 2017). A PGD adversary starts with a randomly initialized perturbation and iteratively adjusts the perturbation while projecting it back into the ε-ball:
x_{t+1} = Π_{B_ε(x_0)} ( x_t + α · sign(∇_{x_t} ℓ(G(x_t), y)) ),
where x_0 is the natural example (with or without random noise), Π_{B_ε(x)} is the projection operator onto the ε-ball around x, G is the network, and α is the perturbation step size. As was shown by Athalye et al. (2018), PGD-based adversarial training was one of the few defenses that were not broken under strong attacks.
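For reference, a compact PyTorch-style sketch of this ℓ∞ PGD adversary is given below; model and loss_fn are illustrative placeholders, and the default budget is the commonly used 8/255 setting rather than any specific experiment in this paper.

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    """l_inf PGD: start from a random point in the eps-ball and take `steps`
    signed-gradient ascent steps, projecting back into the ball each time."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```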
That said, the gap between robust and natural accuracy remains large for many tasks such as CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009). Generally speaking, Tsipras et al. (2018) suggested that robustness may be at odds with natural accuracy, and usually the trade-off is inherent. Nevertheless, a growing body of work aimed to improve the standard PGD-based adversarial training introduced by Madry et al. (2017) in various ways such as improved adversarial loss functions and regularization techniques (Kannan et al., 2018; Wang et al., 2019b; Zhang et al., 2019b), semi-supervised approaches (Carmon et al., 2019; Uesato et al., 2019; Zhai et al., 2019), adversarial perturbations on model weights (Wu et al., 2020), utilizing out of distribution data (Lee et al., 2021) and many others. See related work for more details.
Our contribution. In this work, we propose a novel approach to regulating the tradeoff between robustness and natural accuracy. In contrast to the aforementioned works, our method enhances adversarial training by enforcing a feature representation that is invariant across the natural and adversarial domains. We incorporate the idea of Domain-Adversarial Neural Networks (DANN) (Ganin & Lempitsky, 2015; Ganin et al., 2016) directly into the adversarial training process. DANN is a representation learning approach for domain adaptation, designed to ensure that predictions are made based on an invariant feature representation that cannot discriminate between source and target domains. Intuitively, the tasks of adversarial training and of domain-invariant representation have a similar goal: given a source (natural) domain X and a target (adversarial) domain X′, we hope to achieve g(X) ≈ g(X′), where g is a feature representation function (i.e., a neural network). Achieving such a dual representation intuitively yields a more general feature representation.
In a comprehensive battery of experiments on MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009) datasets, we demonstrate that by enforcing domain-invariant representation learning using DANN simultaneously with adversarial training, we gain a significant and consistent improvement in both robustness and natural accuracy compared to other state-of-the-art adversarial training methods, under AutoAttack (Croce & Hein, 2020) and various strong PGD (Madry et al., 2017) and CW (Carlini & Wagner, 2017b) adversaries in white-box and black-box settings. Additionally, we evaluate our method using unforeseen "natural" corruptions (Hendrycks & Dietterich, 2018), unforeseen adversaries (e.g., ℓ1, ℓ2), and transfer learning, and we perform ablation studies. Finally, we offer a novel score function for quantifying the robust-natural accuracy tradeoff.
2 RELATED WORK
2.1 DEFENSE METHODS
A variety of theoretically principled (Cohen et al., 2019; Raghunathan et al., 2018a; Sinha et al., 2017; Raghunathan et al., 2018b; Wong et al., 2018; Wong & Kolter, 2018; Gowal et al., 2018)
and empirical defense approaches (Bai et al., 2021) were proposed to enhance robustness since the discovery of adversarial examples. Among the empirical defence techniques we can find adversarial regularization (Kurakin et al., 2016a; Madry et al., 2017; Zhang et al., 2019b; Wang et al., 2019b; Kannan et al., 2018), curriculum-based adversarial training (Cai et al., 2018; Zhang et al., 2020; Wang et al., 2019a), ensemble adversarial training (Tramèr et al., 2017; Pang et al., 2019; Yang et al., 2020), adversarial training with adaptive attack budget (Ding et al., 2018; Cheng et al., 2020), semi-supervised and unsupervised adversarial training (Carmon et al., 2019; Uesato et al., 2019; Zhai et al., 2019), robust self/pre-training (Jiang et al., 2020; Chen et al., 2020), efficient adversarial training (Shafahi et al., 2019; Wong et al., 2020; Andriushchenko & Flammarion, 2020; Zhang et al., 2019a), and many other techniques (Zhang & Wang, 2019; Goldblum et al., 2020; Pang et al., 2020b; Lee et al., 2020). In an additional research direction, researchers suggested to add new dedicated building blocks to the network architecture for improved robustness (Xie & Yuille, 2019; Xie et al., 2019a; Liu et al., 2020). Liu et al. (2020) hypothesised that different adversaries belong to different domains, and suggested gated batch normalization which is trained with multiple perturbation types. Others focused on searching robust architectures against adversarial examples (Guo et al., 2020).
Our work belongs to the family of adversarial regularization techniques, for which we elaborate on common and best-performing methods, and highlight the differences compared to our method.
Madry et al. (2017) proposed a technique, commonly referred to as Adversarial Training (AT), to minimize the cross entropy loss on adversarial examples generated by PGD. Zhang et al. (2019b) suggested to decompose the prediction error for adversarial examples as the sum of the natural error and the boundary error, and provided differentiable upper bounds on both terms. Motivated by this decomposition, they suggested a technique called TRADES that uses the Kullback-Leibler (KL) divergence as a regularization term that pushes the decision boundary away from the data. Wang et al. (2019b) suggested that misclassified examples have a significant impact on final robustness, and proposed a technique called MART that differentiates between correctly classified and misclassified examples during training.
Another area of research aims at revealing the connection between the loss weight landscape and adversarial training (Prabhu et al., 2019; Yu et al., 2018; Wu et al., 2020). Specifically, Wu et al. (2020) identified a correlation between the flatness of weight loss landscape and robust generalization gap. They proposed the Adversarial Weight Perturbation (AWP) mechanism that is integrated into existing adversarial training methods. More recently, this approach was formalized from a theoretical standpoint by Tsai et al. (2021). However, this method forms a double-perturbation mechanism that perturbs both inputs and weights, which may incur a significant increase in calculation overhead. Nevertheless, we show that DIAL still improves state-of-the-art results when combined with AWP.
A related approach to ours, called ATDA, was presented by Song et al. (2018). They proposed to add several constraints to the loss function in order to enforce domain adaptation: correlation alignment and maximum mean discrepancy (Borgwardt et al., 2006; Sun & Saenko, 2016). While the objective is similar, using ideas from domain adaptation to learn a better representation, we address it in two different ways. Our method fundamentally differs from Song et al. (2018) since we do not enforce domain adaptation by adding specific constraints to the loss function. Instead, we let the network learn the domain invariant representation directly during the optimization process, as suggested by Ganin & Lempitsky (2015); Ganin et al. (2016). Moreover, Song et al. (2018) focused mainly on FGSM. We empirically demonstrate the superiority of our method in Section 4. In a concurrent work, Qian et al. (2021) utilized the idea of exploiting local and global data information, and suggested to generate the adversarial examples by attacking an additional domain classifier.
2.2 ROBUST GENERALIZATION
Several works investigated the sample complexity required to ensure adversarial generalization compared to its non-adversarial counterpart. Schmidt et al. (2018) has shown that there exists a distribution (a mixture of Gaussians) for which ensuring robust generalization necessarily requires more data than standard learning. This has been further investigated in distribution-free models via the Rademacher complexity and VC-dimension (Yin et al., 2019; Attias et al., 2019; Khim & Loh, 2018; Awasthi et al., 2020; Cullina et al., 2018; Montasser et al., 2019; Tsai et al., 2021) and in additional settings (Diochnos et al., 2018; Carmon et al., 2019).
3 DOMAIN INVARIANT ADVERSARIAL LEARNING APPROACH
In this section, we introduce our Domain Invariant Adversarial Learning (DIAL) approach for adversarial training. The source domain is the natural dataset, and the target domain is generated using an adversarial attack on the natural domain. We aim to learn a model that has low error on the source (natural) task (e.g., classification) while ensuring that the internal representation cannot discriminate between the natural and adversarial domains. In this way, we enforce additional regularization on the feature representation, which enhances the robustness.
3.1 MODEL ARCHITECTURE AND REGULARIZED LOSS FUNCTION
Let us define the notation for our domain invariant robust architecture and loss. Let Gf(·; θf) be the feature extractor neural network with parameters θf. Let Gy(·; θy) be the label classifier with parameters θy, and let Gd(·; θd) be the domain classifier with parameters θd. That is, Gy(Gf(·; θf); θy) is essentially the standard model (e.g., a wide residual network (Zagoruyko & Komodakis, 2016)), while in addition we have a domain classification layer to enforce domain invariance on the feature representation. An illustration of the architecture is presented in Figure 1.
Given a training set {(xi, yi)}ni=1, the natural loss is defined as:
L^y_nat = (1/n) Σ_{i=1}^{n} CE( Gy(Gf(x_i; θf); θy), y_i ).
We consider two basic forms of the robust loss. One is the standard cross-entropy (CE) loss between the predicted probabilities and the actual label, which we refer to later as DIALCE. The second is the Kullback-Leibler (KL) divergence between the adversarial and natural model outputs (logits), as in Zhang et al. (2019b); Wang et al. (2019b), which we refer to as DIALKL.
L^CE_rob = (1/n) Σ_{i=1}^{n} CE( Gy(Gf(x′_i; θf); θy), y_i ),

L^KL_rob = (1/n) Σ_{i=1}^{n} KL( Gf(x′_i; θf) ‖ Gf(x_i; θf) ),
where {(x′_i, y_i)}_{i=1}^{n} are the corresponding generated adversarial examples. Next, we define the source domain label d_i as 0 (for natural examples) and the target domain label d′_i as 1 (for adversarial examples). Then, the natural and adversarial domain losses are defined as:
L^d_nat = (1/n) Σ_{i=1}^{n} CE( Gd(Gf(x_i; θf); θd), d_i ),

L^d_adv = (1/n) Σ_{i=1}^{n} CE( Gd(Gf(x′_i; θf); θd), d′_i ).
We can now define the full domain invariant robust loss:
DIALCE = L^y_nat + λ L^CE_rob − r (L^d_nat + L^d_adv),

DIALKL = L^y_nat + λ L^KL_rob − r (L^d_nat + L^d_adv).
The goal is to minimize the loss on the natural and adversarial classification while maximizing the loss for the domains. The reversal-ratio hyper-parameter r is inserted into the network layers as a gradient reversal layer (Ganin & Lempitsky, 2015; Ganin et al., 2016) that leaves the input unchanged during forward propagation and reverses the gradient by multiplying it with a negative scalar during the back-propagation. The reversal-ratio parameter is initialized to a small value and is gradually increased to r, as the main objective converges. This enforces a domain-invariant representation as the training progress: a larger value enforces a higher fidelity to the domain. A comprehensive algorithm description can be found in Appendix A.
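A minimal PyTorch sketch of the gradient reversal layer and the resulting DIALCE objective is shown below. The module names and the exact way the heads are attached are our own illustrative assumptions; the reversal layer lets a single minimization implement the −r(L^d_nat + L^d_adv) term for the feature extractor while the domain classifier still minimizes its own loss, and r would typically be ramped up from a small value during training as described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -r on the way back."""
    @staticmethod
    def forward(ctx, x, r):
        ctx.r = r
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.r * grad_output, None

def dial_ce_loss(G_f, G_y, G_d, x, x_adv, y, lam=1.0, r=2.0):
    feat_nat, feat_adv = G_f(x), G_f(x_adv)
    # Classification losses on natural and adversarial examples.
    loss_nat = F.cross_entropy(G_y(feat_nat), y)
    loss_rob = F.cross_entropy(G_y(feat_adv), y)
    # Domain losses through the reversal layer (natural = 0, adversarial = 1).
    zeros = torch.zeros(len(x), dtype=torch.long, device=x.device)
    ones = torch.ones(len(x), dtype=torch.long, device=x.device)
    loss_dom = (F.cross_entropy(G_d(GradReverse.apply(feat_nat, r)), zeros)
                + F.cross_entropy(G_d(GradReverse.apply(feat_adv, r)), ones))
    # Minimizing this total loss trains G_d to classify domains while the
    # reversed gradient pushes G_f toward a domain-invariant representation.
    return loss_nat + lam * loss_rob + loss_dom
```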
Modularity and semi-supervised extensions. We note that the domain classifier is a modular component that can be integrated into existing models for further improvements. Moreover, since the domain classifier does not require the class labels, additional unlabeled data can be leveraged in future work for improved results.
3.2 THE BENEFITS OF INVARIANT REPRESENTATION TO ADVERSARIAL EXAMPLES
The motivation behind the proposed method is to enforce an invariant feature representation to adversarial perturbations. Given a natural example x and its adversarial counterpart x′, if the domain classifier manages to distinguish between them, this means that the perturbation has induced a significant difference in the feature representation. We impose an additional loss on the natural and adversarial domains in order to discourage this behavior.
We demonstrate that the feature representation layer does not discriminate between natural and adversarial examples, namely Gf (x; θf ) ≈ Gf (x′; θf ). Figure 2 presents the scaled mean and standard deviation (std) of the absolute differences between the natural examples from test and their corresponding adversarial examples on different features from the feature representation layer. Smaller differences in the mean and std imply a higher domain invariance — and indeed, DIAL achieves near-zero differences almost across the board. Moreover, DIAL’s feature-level invariance almost consistently outperforms the naturally trained model and the model trained using standard adversarial training techniques (Madry et al., 2017). We provide additional features visualizations in Appendix H.
4 EXPERIMENTS
In this section we conduct comprehensive experiments to emphasise the effectiveness of DIAL, including evaluations under white-box and black-box settings, robustness to unforeseen adversaries, robustness to unforeseen corruptions, transfer learning, and ablation studies. Finally, we present a new measurement to test the balance between robustness and natural accuracy, which we named F1-robust score.
4.1 A CASE STUDY ON SVHN AND CIFAR-100
In the first part of our analysis, we conduct a case study experiment on two benchmark datasets: SVHN (Netzer et al., 2011) and CIFAR-100 (Krizhevsky et al., 2009). We follow common experiment settings as in Rice et al. (2020); Wu et al. (2020). We used the PreAct ResNet-18 (He et al., 2016) architecture, on which we integrate a domain classification layer. The adversarial training is done using a 10-step PGD adversary with perturbation size ε = 0.031 and a step size of 0.003 for SVHN and 0.007 for CIFAR-100. The batch size is 128, weight decay is 7e−4, and the model is trained for 100 epochs. For SVHN, the initial learning rate is set to 0.01 and decays by a factor of 10 after epochs 55, 75, and 90. For CIFAR-100, the initial learning rate is set to 0.1 and decays by a factor of 10 after epochs 75 and 90. Results are averaged over 3 restarts while omitting one standard deviation. As can be seen from the results in Tables 1 and 2, DIAL presents a consistent improvement in robustness (e.g., 5.75% improved robustness on SVHN against AA) compared to standard AT while also improving the natural accuracy. More results are presented in Appendix B.
4.2 BENCHMARKING THE STATE-OF-THE-ART ROBUSTNESS
In this part, we evaluate the performance of DIAL compared to other state-of-the-art methods on CIFAR-10. We follow the same experiment setups as in Madry et al. (2017); Wang et al. (2019b);
Zhang et al. (2019b). When experiment settings are not identical between tested methods, we choose the most commonly used settings and apply them to all experiments. This way, we keep the comparison as fair as possible and avoid reporting changes in results which are caused by inconsistent experiment settings (Pang et al., 2020a). To show that our results are not caused by what is referred to as obfuscated gradients (Athalye et al., 2018), we evaluate our method with the same setup as in our defense model, under strong attacks (e.g., PGD1000) in both white-box and black-box settings, AutoAttack (Croce & Hein, 2020), unforeseen "natural" corruptions (Hendrycks & Dietterich, 2018), and unforeseen adversaries. To make sure that the reported improvements are not caused by adversarial overfitting (Rice et al., 2020), we report the best robust results for each method averaged over 3 restarts, while omitting one standard deviation. Additional results for CIFAR-10 as well as a comprehensive evaluation on MNIST can be found in Appendices D and E.
CIFAR-10 setup. We use the wide residual network (WRN-34-10) (Zagoruyko & Komodakis, 2016) architecture. Alongside this architecture, we integrate a domain classification layer. To generate the adversarial domain dataset, we use a perturbation size of ε = 0.031. We apply 10 inner-maximization iterations with a perturbation step size of 0.007. The batch size is set to 128, weight decay is set to 7e−4, and the model is trained for 100 epochs. Similar to the other methods, the initial learning rate was set to 0.1, and decays by a factor of 10 at epochs 75 and 90. We also introduce a version of our method that incorporates the AWP double-perturbation mechanism, named DIAL-AWP. For black-box attacks, we used two types of surrogate models: (1) a surrogate model trained independently without adversarial training, with natural accuracy of 95.61%, and (2) a surrogate model trained using one of the adversarial training methods. Additional training details can be found in Appendix C.
White-box/Black-box robustness. As reported in Table 3, our method achieves better robustness than the other state-of-the-art methods with respect to the different attacks. Specifically, in white-box settings, we see that our method improves robustness over Madry et al. (2017) by more than 2%, and by roughly 2% over TRADES using the common PGD20 attack, while keeping higher natural accuracy. We also observe better natural accuracy of 1.65% over MART while also achieving better robustness over all attacks. Moreover, our method presents a significant improvement of up to 15% compared to the domain invariant method suggested by Song et al. (2018) (ATDA). When incorporating AWP, our method improves the TRADES-AWP variant by almost 2%. Additional results are available in Appendix E. When tested in black-box settings, DIALCE presents a significant improvement of more than 4.4% over the second-best performing method, and up to 13%. In Table 4, we also present the black-box results when the source model is taken from one of the adversarially trained models. In addition to the improvement in black-box robustness, DIALCE also manages to achieve better clean accuracy of more than 4.5% over the second-best performing method. Moreover, based on the
AutoAttack leaderboard¹, our method achieves 1st place among models without additional data using the WRN-34-10 architecture.
4.2.1 ROBUSTNESS TO UNFORESEEN ATTACKS AND CORRUPTIONS
Unforeseen Adversaries. To further demonstrate the effectiveness of our approach, we test our method against various adversaries that were not used during the training process. We attack the model under white-box settings with ℓ2-PGD, ℓ1-PGD, ℓ∞-DeepFool and ℓ2-DeepFool (Moosavi-Dezfooli et al., 2016) adversaries using Foolbox (Rauber et al., 2017). We applied commonly used attack budgets (perturbation size for the PGD adversaries and overshoot for the DeepFool adversaries) with 20 iterations for the PGD adversaries and 50 for the DeepFool adversaries. Results are presented in Table 5. As can be seen from the results, our approach gains an improvement of up to 4.73% over the second-best method under the various attack types and an average improvement of 3.7% over all threat models.
Unforeseen Corruptions. We further demonstrate that our method consistently holds against unforeseen "natural" corruptions, consisting of 18 diverse corruption types proposed by Hendrycks & Dietterich (2018) on CIFAR-10, which we refer to as CIFAR10-C. The CIFAR10-C benchmark covers noise, blur, weather, and digital categories. As shown in Figure 3, our method gains a significant and consistent improvement over all the other methods. Our approach leads to an average improvement of 4.7%, with a minimum improvement of 3.5% and a maximum improvement of 5.9%, compared to the second-best method over all unforeseen corruptions. See Appendix F for the full experiment results.
4.2.2 TRANSFER LEARNING
Recent works (Salman et al., 2020; Utrera et al., 2020) suggested that robust models transfer better on standard downstream classification tasks. In Table 6 we demonstrate the advantage of our method when applied to transfer learning across CIFAR10 and CIFAR100 using the common linear evaluation protocol. See Appendix G for detailed experiment settings.
4.2.3 ABLATION STUDIES
In this part, we conduct ablation studies to further investigate the contribution of the additional domain head component introduced in our method. Experiment configurations are as in Section 4.2, and robust accuracy is reported on white-box PGD20. We use the CIFAR-10 dataset and train WRN-34-10. We remove the domain head from both DIALKL and DIALCE (equivalent to r = 0) and report the natural and robust accuracy. We perform 3 random restarts and omit one standard deviation from the results. Results are presented in Figure 4. Both DIAL variants exhibit stable improvements in both natural accuracy and robust accuracy. DIALCE and DIALKL present improvements of 1.82% and 0.33% in natural accuracy and 2.5% and 1.87% in robust accuracy, respectively.
4.2.4 VISUALIZING DIAL
To further illustrate our method, we visualize the model outputs of the different methods under natural test data and adversarial test data generated using a PGD20 white-box attack with step size 0.003 and ε = 0.031 on CIFAR-10. Figure 5 shows the embedding obtained after applying t-SNE (Van der Maaten & Hinton, 2008) with two components to the model output for our method and for TRADES. DIAL seems to preserve strong separation between classes on both natural test data and adversarial test data. Additional illustrations for the other methods are attached in Appendix H.
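A short sketch of this visualization step using scikit-learn; fitting t-SNE jointly on the concatenated natural and adversarial outputs (our choice here) keeps both sets in a single embedding space.

```python
import numpy as np
from sklearn.manifold import TSNE

def tsne_embed(logits_nat: np.ndarray, logits_adv: np.ndarray, seed: int = 0):
    """Embed natural and adversarial model outputs with one 2-component t-SNE fit."""
    joint = np.concatenate([logits_nat, logits_adv], axis=0)
    emb = TSNE(n_components=2, random_state=seed).fit_transform(joint)
    n = len(logits_nat)
    return emb[:n], emb[n:]   # embeddings for natural / adversarial data
```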
4.3 BALANCED MEASUREMENT FOR ROBUST AND NATURAL ACCURACY
One of the goals of our method is to better balance between robust and natural accuracy under a given model. For a balanced metric, we adopt the idea of F1-score, which is the harmonic mean between the precision and recall. However, rather than using precision and recall, we measure the F1-score between robustness and natural accuracy, using a measure we call the F1-robust score.
F1-robust = true_robust / ( true_robust + (1/2)(false_robust + false_natural) ),
where true_robust are the adversarial examples that were correctly classified, false_robust are the adversarial examples that were misclassified, and false_natural are the natural examples that were misclassified. We tested the proposed F1-robust score using PGD20 on the CIFAR-10 dataset in white-box and black-box settings. Results are presented in Table 7 and show that our method achieves the best F1-robust score in both settings, which supports our findings from previous sections.
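A direct sketch of this score, assuming boolean correctness masks over the natural and adversarial test sets (variable names are ours):

```python
import numpy as np

def f1_robust(nat_correct: np.ndarray, adv_correct: np.ndarray) -> float:
    """nat_correct / adv_correct: boolean arrays marking which natural /
    adversarial test examples the model classified correctly."""
    true_robust = adv_correct.sum()
    false_robust = (~adv_correct).sum()
    false_natural = (~nat_correct).sum()
    return float(true_robust / (true_robust + 0.5 * (false_robust + false_natural)))
```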
5 CONCLUSION
In this paper, we investigated the hypothesis that a domain invariant representation can be beneficial for robust learning. With this idea in mind, we proposed a new adversarial learning method, called Domain Invariant Adversarial Learning (DIAL), that incorporates a Domain Adversarial Neural Network into the adversarial training process. The proposed method is generic and can be combined with any network architecture in a wide range of tasks. Our evaluation process included strong adversaries, unforeseen adversaries, unforeseen corruptions, transfer learning tasks, and ablation studies. Using this extensive empirical analysis, we demonstrate the significant and consistent improvement obtained by DIAL in both robustness and natural accuracy compared to other defence methods on benchmark datasets.
ETHICS STATEMENT
We proposed DIAL to improve models’ robustness against adversarial attacks. We hope that it will help in building more secure models for real-world applications. DIAL is comparable to the state-of-the-art methods we tested in terms of training times and other resources. That said, this work is not without limitations: adversarial training is still a computationally expensive procedure that requires extra computations compared to standard training, with the concomitant environmental costs. Even though our method introduced better standard accuracy, adversarial training still degrades the standard accuracy. Moreover, models are trained to be robust using well known threat models such as the bounded ℓ_p norms. However, once a model is deployed, we cannot control the type of attacks it faces from sophisticated adversaries. Thus, the general problem is still very far from being fully solved.
REPRODUCIBILITY STATEMENT
In this paper, great efforts were made to ensure that comparison is fair, and all necessary information for reproducibility is present. Section 4 and Appendix B and C contains all experiment settings for SVHN, CIFAR-10 and CIFAR-100 experiments. Appendix D contains all experiment details and results for MNIST experiments. Appendix G contains experiment settings for the transfer learning experiment. In the supplementary material, we provided the source code to train and evaluate DIAL.
A DOMAIN INVARIANT ADVERSARIAL LEARNING ALGORITHM
Algorithm 1 describes the pseudo-code of our proposed DIALCE variant. As can be seen, a target domain batch is not given in advance as in a standard domain-adaptation task. Instead, for each natural batch we generate a target batch using an adversarial attack. The loss function is composed of natural and adversarial losses with respect to the main task (e.g., classification), and of natural and adversarial domain losses. By maximizing the losses on the domains we aim to learn a feature representation which is invariant to the natural and adversarial domains, and is therefore more robust.
Algorithm 1: Domain Invariant Adversarial Learning
Input: Source data S = {(x_i, y_i)}_{i=1}^{n} and network architecture G_f, G_y, G_d
Parameters: Batch size m, perturbation size ε, PGD attack step size τ, adversarial trade-off λ, initial reversal ratio r, and step size α
Initialize: Y_0 and Y_1, source and target domain vectors filled with 0 and 1, respectively
Output: Robust network G = (G_f, G_y, G_d) parameterized by θ̂ = (θ_f, θ_y, θ_d), respectively
B ADDITIONAL RESULTS ON CIFAR-100 AND SVHN
Defense Model | Natural | White-box: PGD20 | PGD100 | PGD1000 | CW∞ | Black-box: PGD20 | PGD100 | PGD1000 | CW∞ | AA
TRADES | 90.35 | 57.10 | 54.13 | 54.08 | 52.19 | 86.89 | 86.73 | 86.57 | 86.70 | 49.5
DIALKL (Ours) | 90.66 | 58.91 | 55.30 | 55.11 | 53.67 | 87.62 | 87.52 | 87.41 | 87.63 | 51.00
DIALCE (Ours) | 92.88 | 55.26 | 50.82 | 50.54 | 49.66 | 89.12 | 89.01 | 88.74 | 89.10 | 46.52
C CIFAR-10 ADDITIONAL EXPERIMENTAL SETUP DETAILS
Additional defence setup. To be consistent with other methods, the natural images are padded with 4 pixels, followed by a 32×32 random crop and a random horizontal flip. Furthermore, all methods are trained using SGD with momentum 0.9. For DIALKL, we balance the robust loss with λ = 6 and the domain losses with r = 4. For DIALCE, we balance the robust loss with λ = 1 and the domain losses with r = 2. For DIAL-AWP, we used the same learning rate schedule as in Wu et al. (2020), where the initial 0.1 learning rate decays by a factor of 10 after 100 and 150 iterations.
D BENCHMARKING THE STATE-OF-THE-ART ON MNIST
Defence setup. We use the same CNN architecture as used in Zhang et al. (2019b), which consists of four convolutional layers and three fully-connected layers. Alongside this architecture, we integrate a domain classification layer. To generate the adversarial domain dataset, we use a perturbation size of ε = 0.3. We apply 40 iterations of inner maximization with a perturbation step size of 0.01. The batch size is set to 128 and the model is trained for 100 epochs. Similar to the other methods, the initial learning rate was set to 0.01, and decays by a factor of 10 after epochs 55, 75, and 90. All the models in the experiment are trained using SGD with momentum 0.9. For our method, we balance the robust loss with λ = 6 and the domain losses with r = 0.1.
White-box/Black-box robustness. We evaluate all defense models using PGD40, PGD100, PGD1000 and CW∞ (the ℓ∞ version of the Carlini & Wagner (2017b) attack optimized by PGD-100) with step size 0.01. We constrain all attacks to the same perturbation ε = 0.3. For our black-box setting, we use a naturally trained surrogate model with natural accuracy of 99.51%. As reported in Table 10, our method achieves improved robustness over the other methods under the different attack types, while preserving the same level of natural accuracy, and even surpassing the naturally trained model. We should note that, in general, the improvement margin on MNIST is more moderate compared to CIFAR-10, since MNIST is an easier task than CIFAR-10 and the robustness range is already high to begin with. Additional results are available in Appendix E.
E ADDITIONAL RESULTS ON MNIST AND CIFAR-10
In Table 11 we present additional results using the PGD1000 threat model. We use a step size of 0.003 and constrain the attacks to the same perturbation ε = 0.031. Table 12 presents a comparison of our method combined with AWP to the other variants of AWP that were presented in Wu et al. (2020). In addition, in Table 13 we add the F1-robust scores for different variants of AWP.
F EXTENED RESULTS ON UNFORESEEN CORRUPTIONS
We present full accuracy results against unforeseen corruptions in Tables 14 and 15. We also visualize them in Figure 6.
Table 14: Accuracy (%) against unforeseen corruptions.
Defense Model | brightness | defocus blur | fog | glass blur | jpeg compression | motion blur | saturate | snow | speckle noise
TRADES | 82.63 | 80.04 | 60.19 | 78.00 | 82.81 | 76.49 | 81.53 | 80.68 | 80.14
MART | 80.76 | 78.62 | 56.78 | 76.60 | 81.26 | 74.58 | 80.74 | 78.22 | 79.42
AT | 83.30 | 80.42 | 60.22 | 77.90 | 82.73 | 76.64 | 82.31 | 80.37 | 80.74
ATDA | 72.67 | 69.36 | 45.52 | 64.88 | 73.22 | 63.47 | 72.07 | 68.76 | 72.27
DIAL (Ours) | 87.14 | 84.84 | 66.08 | 81.82 | 87.07 | 81.20 | 86.45 | 84.18 | 84.94
Table 15: Accuracy (%) against unforeseen corruptions.
G TRANSFER LEARNING SETTINGS
The models used are the same models from the previous experiments. We follow the common “fixed-feature” setting, where only a linear layer on top of the pre-trained network is trained. We train a linear classifier on CIFAR-100 on top of the pre-trained network which was trained on CIFAR-10. We also train a linear classifier on CIFAR-10 on top of the pre-trained network which was trained on CIFAR-100. We train the linear classifier for 100 epochs, with an initial learning rate of 0.1 which is decayed by a factor of 10 at epochs 50 and 75. We used the SGD optimizer with momentum 0.9.
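A condensed PyTorch-style sketch of this fixed-feature linear evaluation (all names are ours):

```python
import torch
import torch.nn as nn

def linear_probe(G_f, feat_dim, num_classes, loader, epochs=100, lr=0.1):
    """Freeze the pre-trained feature extractor and train only a linear head."""
    for p in G_f.parameters():
        p.requires_grad = False
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[50, 75], gamma=0.1)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = ce(head(G_f(x)), y)
            loss.backward()
            opt.step()
        sched.step()
    return head
```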
H EXTENDED VISUALIZATIONS
In Figure 8, we provide additional visualizations of the different adversarial training methods presented above. We visualize the models' outputs using t-SNE with two components on the natural test data and the corresponding adversarial test data generated using a PGD20 white-box attack with step size 0.003 and ε = 0.031 on CIFAR-10.
In Figure 7 we visualize statistical differences between natural and adversarial examples in the feature representation layer. Specifically, we show the differences in mean and std over thirty random feature values from the feature representation layer as we pass the natural test examples and their corresponding adversarial examples through a network. We present the results on the same network architecture (WRN-34-10), trained using three different procedures: a naturally trained network, a network trained using standard adversarial training (AT) (Madry et al., 2017), and DIAL, on the CIFAR-10 dataset. When the statistical characteristics of each feature differ from each other, it implies that the feature layer is less domain invariant. That is, smaller differences in mean/std yield better invariance to adversarial examples. One can observe that for DIAL, there are almost no differences between the mean/std of natural examples and their corresponding adversarial examples. Moreover, for the vast majority of the features, DIAL presents smaller differences compared to the naturally trained model and the model trained with standard adversarial training. Best viewed in colors. | 1. What is the main contribution of the paper in terms of adversarial training?
2. How does the proposed method differ from other domain adaptation approaches?
3. Can you explain the idea behind requiring the network to extract similar representation distributions for clean and attacked data?
4. How effective is the proposed approach in improving the robustness of the network against unforeseen adversaries and corruptions?
5. What are the limitations of the paper regarding the experimental setup and the choice of datasets?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper describes an adversarial training approach that, in addition to the commonly used robustness loss, requires the network to extract similar representation distributions for clean and attacked data. The proposed method is inspired by domain adaptation approaches that require a model to extract domain invariant/agnostic features from two domains. In the context of this paper, the two domains are the clean and adversarially perturbed images, and the network is required to extract a domain invariant representation. To achieve domain invariance, the authors propose a domain classifier (i.e., an adversarial network) that discriminates the representations from clean and attacked images. The feature extractor is then required to generate features that fool the domain classifier. The authors then provide extensive experiments on small-scale benchmark datasets (SVHN, CIFAR10, CIFAR100, and MNIST in the supplementary material) to show the robustness of their proposed approach against state-of-the-art robustness methods under white-box and black-box attacks. The authors show that their proposed method provides: 1) higher accuracy on attacked data (more robustness), and 2) higher accuracy on clean data, closing the gap between the performance on clean and attacked data. In addition, the paper provides insightful experiments on robustness to unforeseen adversaries, robustness to unforeseen corruptions, transfer learning, and ablation studies.
Review
Strengths:
The idea is simple, yet it leads to significantly more robust networks
The paper is well written, and it is easy to follow
While the experiments are only carried out on smaller scale datasets, they are thorough, and they support the claims of the authors
Weaknesses:
I don't see major weaknesses in the paper. Below are some minor points.
DIAL-AWP comes out of the blue in Table 3. For the sake of consistency, I suggest adding it to Tables 1 and 2 as well and providing the formulation (for self-sufficiency).
The TSNE plots in Figure 5 for clean and perturbed distributions seem to have been calculated separately, which means that we are effectively looking at two different embedding spaces when we look at (a) and (b). I suggest that the author append the clean and perturbed representations, calculate the TSNE embedding jointly, and then plot them into their corresponding plots.
Additional Comments/Questions:
In your KL robustness loss you have,
L^KL_rob = (1/n) Σ_i KL( G_f(x′_i; θ_f) ‖ G_f(x_i; θ_f) )

My understanding is that G_f is your feature extractor, and G_f(x′_i), G_f(x_i) ∈ R^d are not probability vectors, while KL(·‖·) is a dissimilarity measure defined only for probability distributions. Could you comment on this? Also, wouldn't a simple MSE work fine here?
This might be a matter of style, but it could be helpful to add equation numbers to your equations.
Typos:
Page 3 second paragraph: "belongs to the the family"
Page 5 second to the last paragraph: "the initial learinnig" |
ICLR | Title
Domain Invariant Adversarial Learning
Abstract
The phenomenon of adversarial examples illustrates one of the most basic vulnerabilities of deep neural networks. Among the variety of techniques introduced to surmount this inherent weakness, adversarial training has emerged as the most effective strategy to achieve robustness. Typically, this is achieved by balancing robust and natural objectives. In this work, we aim to further optimize the tradeoff between robust and standard accuracy by enforcing a domain-invariant feature representation. We present a new adversarial training method, Domain Invariant Adversarial Learning (DIAL), which learns a feature representation that is both robust and domain invariant. DIAL uses a variant of Domain Adversarial Neural Network (DANN) on the natural domain and its corresponding adversarial domain. In the case where the source domain consists of natural examples and the target domain is the adversarially perturbed examples, our method learns a feature representation constrained not to discriminate between the natural and adversarial examples, and can therefore achieve a more robust representation. Our experiments indicate that our method improves both robustness and standard accuracy, when compared to other state-of-the-art adversarial training methods.
1 INTRODUCTION
Deep learning models have achieved impressive success on a wide range of challenging tasks. However, their performance was shown to be brittle to adversarial examples: small, imperceptible perturbations in the input that drastically alter the classification (Carlini & Wagner, 2017a;b; Goodfellow et al., 2014; Kurakin et al., 2016b; Moosavi-Dezfooli et al., 2016; Szegedy et al., 2013; Tramèr et al., 2017; Dong et al., 2018; Tabacof & Valle, 2016; Xie et al., 2019b; Rony et al., 2019). Designing reliable robust models has gained significant attention in the arms race against adversarial examples. Adversarial training (Szegedy et al., 2013; Goodfellow et al., 2014; Madry et al., 2017; Zhang et al., 2019b) has been suggested as one of the most effective approaches to defend against such examples, and can be described as solving the following min-max optimization problem:
min θ
E(x,y)∼D [
max x′:‖x′−x‖p≤
` (x′, y; θ) ] ,
where x′ is the -bounded perturbation in the `p norm and ` is the loss function. Different unrestricted attacks methods were also suggested, such as adversarial deformation, rotations, translation and more (Brown et al., 2018; Engstrom et al., 2018; Xiao et al., 2018; Alaifari et al., 2018; Gilmer et al., 2018).
The resulting min-max optimization problem can be hard to solve in general. Nevertheless, in the context of -bounded perturbations, the problem is often tractable in practice. The inner maximization is usually approximated by generating adversarial examples using projected gradient descent (PGD) (Kurakin et al., 2016a; Madry et al., 2017). A PGD adversary starts with randomly initialized perturbation and iteratively adjust the perturbation while projecting it back into the -ball:
xt+1 = ΠB (x0) (xt + α · sign(∇xt`(G(xt), y))),
where x0 is the natural example (with or without random noise), and ΠB (x) is the projection operator onto the -ball, G is the network, and α is the perturbation step size. As was shown by Athalye et al. (2018), PGD-based adversarial training was one of the few defenses that were not broken under strong attacks.
That said, the gap between robust and natural accuracy remains large for many tasks such as CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009). Generally speaking, Tsipras et al. (2018) suggested that robustness may be at odds with natural accuracy, and usually the trade-off is inherent. Nevertheless, a growing body of work aimed to improve the standard PGD-based adversarial training introduced by Madry et al. (2017) in various ways such as improved adversarial loss functions and regularization techniques (Kannan et al., 2018; Wang et al., 2019b; Zhang et al., 2019b), semi-supervised approaches(Carmon et al., 2019; Uesato et al., 2019; Zhai et al., 2019), adversarial perturbations on model weights (Wu et al., 2020), utilizing out of distribution data (Lee et al., 2021) and many others. See related work for more details.
Our contribution. In this work, we propose a novel approach to regulating the tradeoff between robustness and natural accuracy. In contrast to the aforementioned works, our method enhances adversarial training by enforcing a feature representation that is invariant across the natural and adversarial domains. We incorporate the idea of Domain-Adversarial Neural Networks (DANN) (Ganin & Lempitsky, 2015; Ganin et al., 2016) directly into the adversarial training process. DANN is a representation learning approach for domain adaptation, designed to ensure that predictions are made based on invariant feature representation that cannot discriminate between source and target domains. Intuitively, the tasks of adversarial training and of domain-invariant representation have a similar goal: given a source (natural) domainX and a the target (adversarial) domainX ′, we hope to achieve g(X) ≈ g(X ′), where g a feature representation function (i.e., neural network). Achieving such a dual representation intuitively yields a more general feature representation.
In a comprehensive battery of experiments on MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009) datasets, we demonstrate that by enforcing domain-invariant representation learning using DANN simultaneously with adversarial training, we gain a significant and consistent improvement in both robustness and natural accuracy compared to other state-of-the-art adversarial training methods, under AutoAttack (Croce & Hein, 2020) and various strong PGD (Madry et al., 2017), CW (Carlini & Wagner, 2017b) adversaries in white-box and black-box settings. Additionally, we evaluate our method using unforeseen "natural" corruptions (Hendrycks & Dietterich, 2018), unforeseen adversaries (e.g., `1, `2), transfer learning, and perform ablation studies. Finally, we offer a novel score function for quantifying the robust-natural accuracy tradeoff.
2 RELATED WORK
2.1 DEFENSE METHODS
A variety of theoretically principled (Cohen et al., 2019; Raghunathan et al., 2018a; Sinha et al., 2017; Raghunathan et al., 2018b; Wong et al., 2018; Wong & Kolter, 2018; Gowal et al., 2018)
and empirical defense approaches (Bai et al., 2021) were proposed to enhance robustness since the discovery of adversarial examples. Among the empirical defence techniques we can find adversarial regularization (Kurakin et al., 2016a; Madry et al., 2017; Zhang et al., 2019b; Wang et al., 2019b; Kannan et al., 2018), curriculum-based adversarial training (Cai et al., 2018; Zhang et al., 2020; Wang et al., 2019a), ensemble adversarial training (Tramèr et al., 2017; Pang et al., 2019; Yang et al., 2020), adversarial training with adaptive attack budget (Ding et al., 2018; Cheng et al., 2020), semi-supervised and unsupervised adversarial training (Carmon et al., 2019; Uesato et al., 2019; Zhai et al., 2019), robust self/pre-training (Jiang et al., 2020; Chen et al., 2020), efficient adversarial training (Shafahi et al., 2019; Wong et al., 2020; Andriushchenko & Flammarion, 2020; Zhang et al., 2019a), and many other techniques (Zhang & Wang, 2019; Goldblum et al., 2020; Pang et al., 2020b; Lee et al., 2020). In an additional research direction, researchers suggested to add new dedicated building blocks to the network architecture for improved robustness (Xie & Yuille, 2019; Xie et al., 2019a; Liu et al., 2020). Liu et al. (2020) hypothesised that different adversaries belong to different domains, and suggested gated batch normalization which is trained with multiple perturbation types. Others focused on searching robust architectures against adversarial examples (Guo et al., 2020).
Our work belongs to the family of adversarial regularization techniques; below we elaborate on the most common and best-performing methods and highlight the differences compared to our method.
Madry et al. (2017) proposed a technique, commonly referred to as Adversarial Training (AT), to minimize the cross-entropy loss on adversarial examples generated by PGD. Zhang et al. (2019b) suggested decomposing the prediction error for adversarial examples as the sum of the natural error and the boundary error, and provided differentiable upper bounds on both terms. Motivated by this decomposition, they suggested a technique called TRADES that uses the Kullback-Leibler (KL) divergence as a regularization term that pushes the decision boundary away from the data. Wang et al. (2019b) suggested that misclassified examples have a significant impact on final robustness, and proposed a technique called MART that differentiates between correctly classified and misclassified examples during training.
Another area of research aims at revealing the connection between the weight loss landscape and adversarial training (Prabhu et al., 2019; Yu et al., 2018; Wu et al., 2020). Specifically, Wu et al. (2020) identified a correlation between the flatness of the weight loss landscape and the robust generalization gap. They proposed the Adversarial Weight Perturbation (AWP) mechanism, which is integrated into existing adversarial training methods. More recently, this approach was formalized from a theoretical standpoint by Tsai et al. (2021). However, this method forms a double-perturbation mechanism that perturbs both inputs and weights, which may incur a significant increase in computational overhead. Nevertheless, we show that DIAL still improves state-of-the-art results when combined with AWP.
A related approach to ours, called ATDA, was presented by Song et al. (2018). They proposed adding several constraints to the loss function in order to enforce domain adaptation: correlation alignment and maximum mean discrepancy (Borgwardt et al., 2006; Sun & Saenko, 2016). While the objective is similar, namely using ideas from domain adaptation to learn a better representation, our approach differs in two ways. First, we do not enforce domain adaptation by adding specific constraints to the loss function; instead, we let the network learn the domain-invariant representation directly during the optimization process, as suggested by Ganin & Lempitsky (2015); Ganin et al. (2016). Second, Song et al. (2018) focused mainly on FGSM. We empirically demonstrate the superiority of our method in Section 4. In a concurrent work, Qian et al. (2021) utilized the idea of exploiting local and global data information, and suggested generating the adversarial examples by attacking an additional domain classifier.
2.2 ROBUST GENERALIZATION
Several works investigated the sample complexity required to ensure adversarial generalization compared to the non-adversarial counterpart. Schmidt et al. (2018) showed that there exists a distribution (a mixture of Gaussians) for which ensuring robust generalization necessarily requires more data than standard learning. This has been further investigated in distribution-free models via Rademacher complexity and VC-dimension (Yin et al., 2019; Attias et al., 2019; Khim & Loh, 2018; Awasthi et al., 2020; Cullina et al., 2018; Montasser et al., 2019; Tsai et al., 2021) and in additional settings (Diochnos et al., 2018; Carmon et al., 2019).
3 DOMAIN INVARIANT ADVERSARIAL LEARNING APPROACH
In this section, we introduce our Domain Invariant Adversarial Learning (DIAL) approach for adversarial training. The source domain is the natural dataset, and the target domain is generated using an adversarial attack on the natural domain. We aim to learn a model that has low error on the source (natural) task (e.g., classification) while ensuring that the internal representation cannot discriminate between the natural and adversarial domains. In this way, we enforce additional regularization on the feature representation, which enhances the robustness.
3.1 MODEL ARCHITECTURE AND REGULARIZED LOSS FUNCTION
Let us define the notation for our domain invariant robust architecture and loss. Let Gf(·; θf) be the feature extractor neural network with parameters θf. Let Gy(·; θy) be the label classifier with parameters θy, and let Gd(·; θd) be the domain classifier with parameters θd. That is, Gy(Gf(·; θf); θy) is essentially the standard model (e.g., a wide residual network (Zagoruyko & Komodakis, 2016)), while in addition, we have a domain classification layer that enforces domain invariance on the feature representation. An illustration of the architecture is presented in Figure 1.
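To make the decomposition concrete, the following is a minimal PyTorch-style sketch of how the three components might be wired together. The backbone, feature dimension, and module names are illustrative assumptions rather than the exact implementation used in our experiments.

```python
import torch.nn as nn

class DIALNet(nn.Module):
    """Three-headed model: feature extractor G_f, label classifier G_y, domain classifier G_d."""
    def __init__(self, feature_extractor: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.G_f = feature_extractor                 # e.g., a WRN-34-10 body without its final layer
        self.G_y = nn.Linear(feat_dim, num_classes)  # label classifier head
        self.G_d = nn.Linear(feat_dim, 2)            # natural-vs-adversarial domain classifier head

    def forward(self, x):
        feats = self.G_f(x)
        # during training, a gradient reversal layer (Section 3.1) is placed between
        # feats and G_d; it is omitted here to keep the architectural sketch minimal
        return self.G_y(feats), self.G_d(feats), feats
```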
Given a training set $\{(x_i, y_i)\}_{i=1}^{n}$, the natural loss is defined as:
$\mathcal{L}^{y}_{nat} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{CE}\big(G_y(G_f(x_i;\theta_f);\theta_y),\, y_i\big).$
We consider two basic forms of the robust loss. One is the standard cross-entropy (CE) loss between the predicted probabilities and the actual label, which we refer to later as DIALCE. The second is the Kullback-Leibler (KL) divergence between the adversarial and natural model outputs (logits), as in Zhang et al. (2019b); Wang et al. (2019b), which we refer to as DIALKL.
$\mathcal{L}^{CE}_{rob} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{CE}\big(G_y(G_f(x'_i;\theta_f);\theta_y),\, y_i\big),$
$\mathcal{L}^{KL}_{rob} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{KL}\big(G_f(x'_i;\theta_f) \,\|\, G_f(x_i;\theta_f)\big),$
where $\{(x'_i, y_i)\}_{i=1}^{n}$ are the corresponding generated adversarial examples. Next, we define the source domain label $d_i$ as 0 (for natural examples) and the target domain label $d'_i$ as 1 (for adversarial examples). Then, the natural and adversarial domain losses are defined as:
$\mathcal{L}^{d}_{nat} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{CE}\big(G_d(G_f(x_i;\theta_f);\theta_d),\, d_i\big),$
$\mathcal{L}^{d}_{adv} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{CE}\big(G_d(G_f(x'_i;\theta_f);\theta_d),\, d'_i\big).$
We can now define the full domain invariant robust loss:
$\mathrm{DIAL}_{CE} = \mathcal{L}^{y}_{nat} + \lambda\,\mathcal{L}^{CE}_{rob} - r\,(\mathcal{L}^{d}_{nat} + \mathcal{L}^{d}_{adv}),$
$\mathrm{DIAL}_{KL} = \mathcal{L}^{y}_{nat} + \lambda\,\mathcal{L}^{KL}_{rob} - r\,(\mathcal{L}^{d}_{nat} + \mathcal{L}^{d}_{adv}).$
The goal is to minimize the loss on the natural and adversarial classification while maximizing the loss for the domains. The reversal-ratio hyper-parameter r is inserted into the network layers as a gradient reversal layer (Ganin & Lempitsky, 2015; Ganin et al., 2016) that leaves the input unchanged during forward propagation and reverses the gradient by multiplying it with a negative scalar during back-propagation. The reversal-ratio parameter is initialized to a small value and is gradually increased to r as the main objective converges. This enforces a domain-invariant representation as training progresses: a larger value enforces stronger invariance between the natural and adversarial domains. A comprehensive algorithm description can be found in Appendix A.
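As a concrete illustration, a gradient reversal layer can be implemented in a few lines of PyTorch; the sketch below follows the standard DANN recipe, and the ramp-up schedule shown in the comment is the one proposed by Ganin et al., given here only as an example.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient by -r in the backward pass."""
    @staticmethod
    def forward(ctx, x, r):
        ctx.r = r
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.r * grad_output, None   # no gradient w.r.t. the scalar r

def grad_reverse(x, r=1.0):
    return GradReverse.apply(x, r)

# With the reversal layer placed between G_f and G_d, the domain losses are simply *added*
# to the objective in the implementation; the minus sign in DIAL_CE / DIAL_KL is realized by
# the reversed gradient flowing back into the feature extractor. A common ramp-up schedule is
#   r_t = r * (2.0 / (1.0 + exp(-10 * p)) - 1.0),  with p the training progress in [0, 1].
```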
Modularity and semi-supervised extensions. We note that the domain classifier is a modular component that can be integrated into existing models for further improvements. Moreover, since the domain classifier does not require the class labels, additional unlabeled data can be leveraged in future work for improved results.
3.2 THE BENEFITS OF INVARIANT REPRESENTATION TO ADVERSARIAL EXAMPLES
The motivation behind the proposed method is to enforce an invariant feature representation to adversarial perturbations. Given a natural example x and its adversarial counterpart x′, if the domain classifier manages to distinguish between them, this means that the perturbation has induced a significant difference in the feature representation. We impose an additional loss on the natural and adversarial domains in order to discourage this behavior.
We demonstrate that the feature representation layer does not discriminate between natural and adversarial examples, namely Gf(x; θf) ≈ Gf(x′; θf). Figure 2 presents the scaled mean and standard deviation (std) of the absolute differences between the natural test examples and their corresponding adversarial examples for different features of the feature representation layer. Smaller differences in the mean and std imply higher domain invariance, and indeed DIAL achieves near-zero differences almost across the board. Moreover, DIAL’s feature-level invariance almost consistently outperforms the naturally trained model and the model trained using standard adversarial training techniques (Madry et al., 2017). We provide additional feature visualizations in Appendix H.
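The per-feature statistics behind Figure 2 can be computed along the following lines; the tensor names, and the choice to rescale by the maximum for plotting, are assumptions made for this sketch.

```python
import torch

def feature_shift_stats(feats_nat: torch.Tensor, feats_adv: torch.Tensor):
    """feats_nat, feats_adv: (num_examples, num_features) activations of G_f on natural
    test examples and on their corresponding adversarial examples."""
    diff = (feats_nat - feats_adv).abs()   # per-example, per-feature absolute difference
    mean_shift = diff.mean(dim=0)          # mean shift of each feature
    std_shift = diff.std(dim=0)            # spread of the shift for each feature
    # rescale to [0, 1] so different models can share one axis (an assumed scaling choice)
    return mean_shift / mean_shift.max(), std_shift / std_shift.max()
```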
4 EXPERIMENTS
In this section we conduct comprehensive experiments to demonstrate the effectiveness of DIAL, including evaluations under white-box and black-box settings, robustness to unforeseen adversaries, robustness to unforeseen corruptions, transfer learning, and ablation studies. Finally, we present a new measurement of the balance between robustness and natural accuracy, which we name the F1-robust score.
4.1 A CASE STUDY ON SVHN AND CIFAR-100
In the first part of our analysis, we conduct a case study experiment on two benchmark datasets: SVHN (Netzer et al., 2011) and CIFAR-100 (Krizhevsky et al., 2009). We follow common experiment settings as in Rice et al. (2020); Wu et al. (2020). We use the PreAct ResNet-18 (He et al., 2016) architecture, on which we integrate a domain classification layer. The adversarial training is done using a 10-step PGD adversary with perturbation size ε = 0.031 and a step size of 0.003 for SVHN and 0.007 for CIFAR-100. The batch size is 128, weight decay is 7e−4, and the model is trained for 100 epochs. For SVHN, the initial learning rate is set to 0.01 and decays by a factor of 10 after epochs 55, 75 and 90. For CIFAR-100, the initial learning rate is set to 0.1 and decays by a factor of 10 after epochs 75 and 90. Results are averaged over 3 restarts while omitting one standard deviation. As can be seen from the results in Tables 1 and 2, DIAL presents a consistent improvement in robustness (e.g., 5.75% improved robustness on SVHN against AA) compared to standard AT while also improving the natural accuracy. More results are presented in Appendix B.
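For completeness, a minimal sketch of the ℓ∞ PGD adversary used for the inner maximization is given below (random start, sign-of-gradient steps, projection back into the ε-ball). It is a generic illustration using the hyperparameters from the setup above, not the exact attack code used in our experiments; `model` is assumed to map inputs to class logits.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.031, step_size=0.007, steps=10):
    """10-step l_inf PGD with a random start, projected back into the eps-ball around x."""
    x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-eps, eps), 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                   # keep valid pixel range
    return x_adv.detach()
```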
4.2 BENCHMARKING THE STATE-OF-THE-ART ROBUSTNESS
In this part, we evaluate the performance of DIAL compared to other state-of-the-art methods on CIFAR-10. We follow the same experiment setups as in Madry et al. (2017); Wang et al. (2019b);
Zhang et al. (2019b). When experiment settings are not identical between tested methods, we choose the most commonly used settings and apply them to all experiments. This way, we keep the comparison as fair as possible and avoid reporting changes in results that are caused by inconsistent experiment settings (Pang et al., 2020a). To show that our results are not caused by what is referred to as obfuscated gradients (Athalye et al., 2018), we evaluate our method with the same setup as our defense model, under strong attacks (e.g., PGD1000) in both white-box and black-box settings, AutoAttack (Croce & Hein, 2020), unforeseen "natural" corruptions (Hendrycks & Dietterich, 2018), and unforeseen adversaries. To make sure that the reported improvements are not caused by adversarial overfitting (Rice et al., 2020), we report the best robust results for each method averaged over 3 restarts, while omitting one standard deviation. Additional results for CIFAR-10 as well as a comprehensive evaluation on MNIST can be found in Appendix D and E.
CIFAR-10 setup. We use the wide residual network (WRN-34-10) (Zagoruyko & Komodakis, 2016) architecture. Alongside this architecture, we integrate a domain classification layer. To generate the adversarial domain dataset, we use a perturbation size of ε = 0.031. We apply 10 inner maximization iterations with a perturbation step size of 0.007. Batch size is set to 128, weight decay is set to 7e−4, and the model is trained for 100 epochs. Similar to the other methods, the initial learning rate was set to 0.1 and decays by a factor of 10 at epochs 75 and 90. We also introduce a version of our method that incorporates the AWP double-perturbation mechanism, named DIAL-AWP. For black-box attacks, we used two types of surrogate models: (1) a surrogate model trained independently without adversarial training, with natural accuracy of 95.61%, and (2) a surrogate model trained using one of the adversarial training methods. Additional training details can be found in Appendix C.
White-box/Black-box robustness. As reported in Table 3, our method achieves better robustness than the other state-of-the-art methods with respect to the different attacks. Specifically, in white-box settings, we see that our method improves robustness over Madry et al. (2017) by more than 2%, and by roughly 2% over TRADES using the common PGD20 attack, while keeping higher natural accuracy. We also observe better natural accuracy, by 1.65%, over MART while also achieving better robustness across all attacks. Moreover, our method presents a significant improvement of up to 15% compared to the domain invariant method suggested by Song et al. (2018) (ATDA). When incorporating AWP, our method improves the TRADES-AWP variant by almost 2%. Additional results are available in Appendix E. When tested in black-box settings, DIALCE presents a significant improvement of more than 4.4% over the second-best performing method, and up to 13%. In Table 4, we also present the black-box results when the source model is taken from one of the adversarially trained models. In addition to the improvement in black-box robustness, DIALCE also manages to achieve better clean accuracy, by more than 4.5%, over the second-best performing method. Moreover, based on the
AutoAttack leaderboard, our method achieves first place among models without additional data using the WRN-34-10 architecture.
4.2.1 ROBUSTNESS TO UNFORESEEN ATTACKS AND CORRUPTIONS
Unforeseen Adversaries. To further demonstrate the effectiveness of our approach, we test our method against various adversaries that were not used during the training process. We attack the model under the white-box settings with ℓ2-PGD, ℓ1-PGD, ℓ∞-DeepFool and ℓ2-DeepFool (Moosavi-Dezfooli et al., 2016) adversaries using Foolbox (Rauber et al., 2017). We applied commonly used attack budgets (perturbation ε for the PGD adversaries and overshoot for the DeepFool adversaries) with 20 iterations for the PGD adversaries and 50 for the DeepFool adversaries. Results are presented in Table 5. As can be seen from the results, our approach gains an improvement of up to 4.73% over the second-best method under the various attack types and an average improvement of 3.7% over all threat models.
Unforeseen Corruptions. We further demonstrate that our method consistently holds against unforeseen "natural" corruptions, consisting of 18 diverse corruption types proposed by Hendrycks & Dietterich (2018) on CIFAR-10, which we refer to as CIFAR10-C. The CIFAR10-C benchmark covers noise, blur, weather, and digital categories. As shown in Figure 3, our method gains a significant and consistent improvement over all the other methods. Our approach leads to an average improvement of 4.7%, with a minimum improvement of 3.5% and a maximum improvement of 5.9% compared to the second-best method over all unforeseen corruptions. See Appendix F for the full experiment results.
4.2.2 TRANSFER LEARNING
Recent works (Salman et al., 2020; Utrera et al., 2020) suggested that robust models transfer better to standard downstream classification tasks. In Table 6 we demonstrate the advantage of our method when applied for transfer learning between CIFAR-10 and CIFAR-100 using the common linear evaluation protocol. See Appendix G for detailed experiment settings.
4.2.3 ABLATION STUDIES
In this part, we conduct ablation studies to further investigate the contribution of the additional domain head component introduced in our method. Experiment configurations are as in Section 4.2, and robust accuracy is reported on white-box PGD20. We use the CIFAR-10 dataset and train WRN-34-10. We remove the domain head from both DIALKL and DIALCE (equivalent to r = 0) and report the natural and robust accuracy. We perform 3 random restarts and omit one standard deviation from the results. Results are presented in Figure 4. Both DIAL variants exhibit stable improvements in both natural accuracy and robust accuracy. DIALCE and DIALKL present an improvement of 1.82% and 0.33% in natural accuracy and 2.5% and 1.87% in robust accuracy, respectively.
4.2.4 VISUALIZING DIAL
To further illustrate our method, we visualize the model outputs under the different methods on natural test data and on adversarial test data generated using a PGD20 white-box attack with step size 0.003 and ε = 0.031 on CIFAR-10. Figure 5 shows the embedding obtained after applying t-SNE (Van der Maaten & Hinton, 2008) with two components to the model output for our method and for TRADES. DIAL seems to preserve strong separation between classes on both natural test data and adversarial test data. Additional illustrations for the other methods are attached in Appendix H.
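A sketch of how such an embedding can be produced with scikit-learn is shown below; the random arrays stand in for the collected model outputs and labels and should be replaced with the actual test-set logits.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

outputs = np.random.randn(1000, 10)           # placeholder for (num_test, num_classes) model logits
labels = np.random.randint(0, 10, size=1000)  # placeholder for ground-truth classes

emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(outputs)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=3, cmap="tab10")
plt.title("t-SNE of model outputs")
plt.show()
```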
4.3 BALANCED MEASUREMENT FOR ROBUST AND NATURAL ACCURACY
One of the goals of our method is to better balance between robust and natural accuracy under a given model. For a balanced metric, we adopt the idea of F1-score, which is the harmonic mean between the precision and recall. However, rather than using precision and recall, we measure the F1-score between robustness and natural accuracy, using a measure we call the F1-robust score.
$\text{F1-robust} = \dfrac{\text{true\_robust}}{\text{true\_robust} + \tfrac{1}{2}\,(\text{false\_robust} + \text{false\_natural})},$
where true_robust are the adversarial examples that were correctly classified, false_robust are the adversarial examples that were misclassified, and false_natural are the natural examples that were misclassified. We tested the proposed F1-robust score using PGD20 on the CIFAR-10 dataset in white-box and black-box settings. Results are presented in Table 7 and show that our method achieves the best F1-robust score in both settings, which supports our findings from previous sections.
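To make the definition concrete, the score can be computed directly from the three counts; the numbers below are made up purely for illustration.

```python
def f1_robust(true_robust: int, false_robust: int, false_natural: int) -> float:
    """Harmonic-mean-style balance between robust and natural accuracy (Section 4.3)."""
    return true_robust / (true_robust + 0.5 * (false_robust + false_natural))

# Illustrative (made-up) counts on a 10,000-example test set:
# 5,500 adversarial examples correctly classified, 4,500 misclassified adversarial examples,
# and 1,500 misclassified natural examples.
print(round(f1_robust(5500, 4500, 1500), 3))   # -> 0.647
```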
5 CONCLUSION
In this paper, we investigated the hypothesis that a domain-invariant representation can be beneficial for robust learning. With this idea in mind, we proposed a new adversarial learning method, called Domain Invariant Adversarial Learning (DIAL), that incorporates a Domain Adversarial Neural Network into the adversarial training process. The proposed method is generic and can be combined with any network architecture in a wide range of tasks. Our evaluation process included strong adversaries, unforeseen adversaries, unforeseen corruptions, transfer learning tasks, and ablation studies. Through this extensive empirical analysis, we demonstrate the significant and consistent improvement obtained by DIAL in both robustness and natural accuracy compared to other defense methods on benchmark datasets.
ETHICS STATEMENT
We proposed DIAL to improve models’ robustness against adversarial attacks. We hope that it will help in building more secure models for real-world applications. DIAL is comparable to the state-of-the-art methods we tested in terms of training times and other resources. That said, this work is not without limitations: adversarial training is still a computationally expensive procedure that requires extra computations compared to standard training, with the concomitant environmental costs. Even though our method introduced better standard accuracy, adversarial training still degrades the standard accuracy. Moreover, models are trained to be robust using well-known threat models such as the bounded ℓp norms. However, once a model is deployed, we cannot control the type of attacks it faces from sophisticated adversaries. Thus, the general problem is still very far from being fully solved.
REPRODUCIBILITY STATEMENT
In this paper, great efforts were made to ensure that the comparison is fair and that all necessary information for reproducibility is present. Section 4 and Appendices B and C contain all experiment settings for the SVHN, CIFAR-10 and CIFAR-100 experiments. Appendix D contains all experiment details and results for the MNIST experiments. Appendix G contains the experiment settings for the transfer learning experiment. In the supplementary material, we provide the source code to train and evaluate DIAL.
A DOMAIN INVARIANT ADVERSARIAL LEARNING ALGORITHM
Algorithm 1 gives pseudo-code for our proposed DIALCE variant. As can be seen, a target domain batch is not given in advance as in a standard domain-adaptation task. Instead, for each natural batch we generate a target batch using an adversarial attack. The loss function is composed of natural and adversarial losses with respect to the main task (e.g., classification), and of natural and adversarial domain losses. By maximizing the domain losses we aim to learn a feature representation which is invariant to the natural and adversarial domains, and therefore more robust.
Algorithm 1: Domain Invariant Adversarial Learning
Input: Source data $S = \{(x_i, y_i)\}_{i=1}^{n}$ and network architecture $G_f, G_y, G_d$
Parameters: Batch size $m$, perturbation size $\epsilon$, PGD attack step size $\tau$, adversarial trade-off $\lambda$, initial reversal ratio $r$, and step size $\alpha$
Initialize: $Y_0$ and $Y_1$, source and target domain vectors filled with 0 and 1 respectively
Output: Robust network $G = (G_f, G_y, G_d)$ parameterized by $\hat{\theta} = (\theta_f, \theta_y, \theta_d)$ respectively
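Since the body of Algorithm 1 does not reproduce well in text form, the sketch below outlines one DIALCE training step. It assumes the pgd_linf and grad_reverse helpers sketched earlier in the paper are in scope, and it is an illustration of the procedure rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def dial_ce_step(G_f, G_y, G_d, optimizer, x, y, eps, tau, attack_steps, lam, r):
    """One DIAL_CE update on a natural batch (x, y); argument names follow Algorithm 1."""
    classifier = lambda inp: G_y(G_f(inp))
    x_adv = pgd_linf(classifier, x, y, eps=eps, step_size=tau, steps=attack_steps)  # target batch

    d_nat = torch.zeros(x.size(0), dtype=torch.long, device=x.device)  # Y0: natural domain
    d_adv = torch.ones(x.size(0), dtype=torch.long, device=x.device)   # Y1: adversarial domain

    feats_nat, feats_adv = G_f(x), G_f(x_adv)
    loss_nat = F.cross_entropy(G_y(feats_nat), y)                      # L^y_nat
    loss_rob = F.cross_entropy(G_y(feats_adv), y)                      # L^CE_rob
    # the gradient reversal layer turns the subtraction in DIAL_CE into a plain sum here
    loss_dom = (F.cross_entropy(G_d(grad_reverse(feats_nat, r)), d_nat)
                + F.cross_entropy(G_d(grad_reverse(feats_adv, r)), d_adv))

    loss = loss_nat + lam * loss_rob + loss_dom
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```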
B ADDITIONAL RESULTS ON CIFAR-100 AND SVHN
Defense Model | Natural | White-box: PGD20 / PGD100 / PGD1000 / CW∞ | Black-box: PGD20 / PGD100 / PGD1000 / CW∞ | AA
TRADES | 90.35 | 57.10 / 54.13 / 54.08 / 52.19 | 86.89 / 86.73 / 86.57 / 86.70 | 49.5
DIALKL (Ours) | 90.66 | 58.91 / 55.30 / 55.11 / 53.67 | 87.62 / 87.52 / 87.41 / 87.63 | 51.00
DIALCE (Ours) | 92.88 | 55.26 / 50.82 / 50.54 / 49.66 | 89.12 / 89.01 / 88.74 / 89.10 | 46.52
C CIFAR-10 ADDITIONAL EXPERIMENTAL SETUP DETAILS
Additional defense setup. To be consistent with other methods, the natural images are padded with 4-pixel padding, followed by a 32×32 random crop and random horizontal flip. Furthermore, all methods are trained using SGD with momentum 0.9. For DIALKL, we balance the robust loss with λ = 6 and the domain loss with r = 4. For DIALCE, we balance the robust loss with λ = 1 and the domain loss with r = 2. For DIAL-AWP, we used the same learning rate schedule as in Wu et al. (2020), where the initial 0.1 learning rate decays by a factor of 10 after epochs 100 and 150.
D BENCHMARKING THE STATE-OF-THE-ART ON MNIST
Defense setup. We use the same CNN architecture as in Zhang et al. (2019b), which consists of four convolutional layers and three fully-connected layers. Alongside this architecture, we integrate a domain classification layer. To generate the adversarial domain dataset, we use a perturbation size of ε = 0.3. We apply 40 iterations of inner maximization with a perturbation step size of 0.01. Batch size is set to 128 and the model is trained for 100 epochs. Similar to the other methods, the initial learning rate was set to 0.01 and decays by a factor of 10 after epochs 55, 75 and 90. All the models in the experiment are trained using SGD with momentum 0.9. For our method, we balance the robust loss with λ = 6 and the domain loss with r = 0.1.
White-box/Black-box robustness. We evaluate all defense models using PGD40, PGD100, PGD1000 and CW∞ (the ℓ∞ version of the Carlini & Wagner (2017b) attack optimized by PGD-100) with step size 0.01. We constrain all attacks by the same perturbation ε = 0.3. For our black-box setting, we use a naturally trained surrogate model with natural accuracy of 99.51%. As reported in Table 10, our method achieves improved robustness over the other methods under the different attack types, while preserving the same level of natural accuracy, and even surpassing the naturally trained model. We should note that in general, the improvement margin on MNIST is more moderate compared to CIFAR-10, since MNIST is an easier task than CIFAR-10 and the robustness range is already high to begin with. Additional results are available in Appendix E.
E ADDITIONAL RESULTS ON MNIST AND CIFAR-10
In Table 11 we present additional results using the PGD1000 threat model. We use a step size of 0.003 and constrain the attacks by the same perturbation ε = 0.031. Table 12 presents a comparison of our method combined with AWP to the other variants of AWP presented in Wu et al. (2020). In addition, in Table 13 we add the F1-robust scores for different variants of AWP.
F EXTENDED RESULTS ON UNFORESEEN CORRUPTIONS
We present full accuracy results against unforeseen corruptions in Tables 14 and 15. We also visualize them in Figure 6.
Table 14: Accuracy (%) against unforeseen corruptions.
Defense Model | brightness | defocus blur | fog | glass blur | jpeg compression | motion blur | saturate | snow | speckle noise
TRADES | 82.63 | 80.04 | 60.19 | 78.00 | 82.81 | 76.49 | 81.53 | 80.68 | 80.14
MART | 80.76 | 78.62 | 56.78 | 76.60 | 81.26 | 74.58 | 80.74 | 78.22 | 79.42
AT | 83.30 | 80.42 | 60.22 | 77.90 | 82.73 | 76.64 | 82.31 | 80.37 | 80.74
ATDA | 72.67 | 69.36 | 45.52 | 64.88 | 73.22 | 63.47 | 72.07 | 68.76 | 72.27
DIAL (Ours) | 87.14 | 84.84 | 66.08 | 81.82 | 87.07 | 81.20 | 86.45 | 84.18 | 84.94
Table 15: Accuracy (%) against unforeseen corruptions.
G TRANSFER LEARNING SETTINGS
The models used are the same models from previous experiments. We follow the common “fixed-feature” setting, where only a linear layer on top of the pre-trained network is trained. We train a linear classifier on CIFAR-100 on top of the pre-trained network which was trained on CIFAR-10. We also train a linear classifier on CIFAR-10 on top of the pre-trained network which was trained on CIFAR-100. We train the linear classifier for 100 epochs with an initial learning rate of 0.1, which is decayed by a factor of 10 at epochs 50 and 75. We used an SGD optimizer with momentum 0.9.
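A condensed sketch of this fixed-feature protocol is shown below; data loading is omitted and the function signature is an assumption, but the optimizer and schedule mirror the settings above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe(G_f, feat_dim, num_classes, train_loader, epochs=100, device="cuda"):
    """Freeze the pre-trained feature extractor and train only a linear classifier on top."""
    for p in G_f.parameters():
        p.requires_grad = False
    G_f.eval()

    head = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[50, 75], gamma=0.1)

    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = G_f(x)                     # frozen features
            loss = F.cross_entropy(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()
    return head
```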
H EXTENDED VISUALIZATIONS
In Figure 8, we provide additional visualizations of the different adversarial training methods presented above. We visualize the model outputs using t-SNE with two components on the natural test data and the corresponding adversarial test data generated using a PGD20 white-box attack with step size 0.003 and ε = 0.031 on CIFAR-10.
In Figure 7 we visualize statistical differences between natural and adversarial examples in the feature representation layer. Specifically, we show the differences in mean and std of thirty random feature values from the feature representation layer as we pass the natural test examples and their corresponding adversarial examples through the network. We present the results on the same network architecture (WRN-34-10), trained using three different procedures: a naturally trained network, a network trained using standard adversarial training (AT) (Madry et al., 2017), and DIAL, on the CIFAR-10 dataset. When the statistical characteristics of each feature differ from each other, this implies that the feature layer is less domain invariant. That is, smaller differences in mean/std yield better invariance to adversarial examples. One can observe that for DIAL, there are almost no differences between the mean/std of natural examples and their corresponding adversarial examples. Moreover, for the vast majority of the features, DIAL presents smaller differences compared to the naturally trained model and the model trained with standard adversarial training. Best viewed in colors.
1. What is the focus of the paper in terms of feature representation?
2. What is the approach used in the paper to achieve domain invariance and robustness?
3. What are the strengths of the proposed method, particularly in its performance in adversarial examples?
4. What are the weaknesses of the paper, especially in the experimental section?
5. Are there any recent works or state-of-the-art methods that could be included in the comparison to further validate the proposed approach?
Summary Of The Paper
In this paper, DANN is leveraged to generate domain invariant and robust feature representation. The authors claim that the proposed method outperforms other methods when the target domain is the adversarial examples.
Review
The paper is easy to follow and the idea is straightforward.
The experiment section is not comprehensive. Only a few methods are included in the comparison. More recent SOTA methods are missing. |
ICLR | Title
Domain Invariant Adversarial Learning
Abstract
The phenomenon of adversarial examples illustrates one of the most basic vulnerabilities of deep neural networks. Among the variety of techniques introduced to surmount this inherent weakness, adversarial training has emerged as the most effective strategy to achieve robustness. Typically, this is achieved by balancing robust and natural objectives. In this work, we aim to further optimize the tradeoff between robust and standard accuracy by enforcing a domain-invariant feature representation. We present a new adversarial training method, Domain Invariant Adversarial Learning (DIAL), which learns a feature representation that is both robust and domain invariant. DIAL uses a variant of Domain Adversarial Neural Network (DANN) on the natural domain and its corresponding adversarial domain. In the case where the source domain consists of natural examples and the target domain is the adversarially perturbed examples, our method learns a feature representation constrained not to discriminate between the natural and adversarial examples, and can therefore achieve a more robust representation. Our experiments indicate that our method improves both robustness and standard accuracy, when compared to other state-of-the-art adversarial training methods.
1 INTRODUCTION
Deep learning models have achieved impressive success on a wide range of challenging tasks. However, their performance was shown to be brittle to adversarial examples: small, imperceptible perturbations in the input that drastically alter the classification (Carlini & Wagner, 2017a;b; Goodfellow et al., 2014; Kurakin et al., 2016b; Moosavi-Dezfooli et al., 2016; Szegedy et al., 2013; Tramèr et al., 2017; Dong et al., 2018; Tabacof & Valle, 2016; Xie et al., 2019b; Rony et al., 2019). Designing reliable robust models has gained significant attention in the arms race against adversarial examples. Adversarial training (Szegedy et al., 2013; Goodfellow et al., 2014; Madry et al., 2017; Zhang et al., 2019b) has been suggested as one of the most effective approaches to defend against such examples, and can be described as solving the following min-max optimization problem:
min θ
E(x,y)∼D [
max x′:‖x′−x‖p≤
` (x′, y; θ) ] ,
where x′ is the -bounded perturbation in the `p norm and ` is the loss function. Different unrestricted attacks methods were also suggested, such as adversarial deformation, rotations, translation and more (Brown et al., 2018; Engstrom et al., 2018; Xiao et al., 2018; Alaifari et al., 2018; Gilmer et al., 2018).
The resulting min-max optimization problem can be hard to solve in general. Nevertheless, in the context of -bounded perturbations, the problem is often tractable in practice. The inner maximization is usually approximated by generating adversarial examples using projected gradient descent (PGD) (Kurakin et al., 2016a; Madry et al., 2017). A PGD adversary starts with randomly initialized perturbation and iteratively adjust the perturbation while projecting it back into the -ball:
xt+1 = ΠB (x0) (xt + α · sign(∇xt`(G(xt), y))),
where x0 is the natural example (with or without random noise), and ΠB (x) is the projection operator onto the -ball, G is the network, and α is the perturbation step size. As was shown by Athalye et al. (2018), PGD-based adversarial training was one of the few defenses that were not broken under strong attacks.
That said, the gap between robust and natural accuracy remains large for many tasks such as CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009). Generally speaking, Tsipras et al. (2018) suggested that robustness may be at odds with natural accuracy, and usually the trade-off is inherent. Nevertheless, a growing body of work aimed to improve the standard PGD-based adversarial training introduced by Madry et al. (2017) in various ways such as improved adversarial loss functions and regularization techniques (Kannan et al., 2018; Wang et al., 2019b; Zhang et al., 2019b), semi-supervised approaches(Carmon et al., 2019; Uesato et al., 2019; Zhai et al., 2019), adversarial perturbations on model weights (Wu et al., 2020), utilizing out of distribution data (Lee et al., 2021) and many others. See related work for more details.
Our contribution. In this work, we propose a novel approach to regulating the tradeoff between robustness and natural accuracy. In contrast to the aforementioned works, our method enhances adversarial training by enforcing a feature representation that is invariant across the natural and adversarial domains. We incorporate the idea of Domain-Adversarial Neural Networks (DANN) (Ganin & Lempitsky, 2015; Ganin et al., 2016) directly into the adversarial training process. DANN is a representation learning approach for domain adaptation, designed to ensure that predictions are made based on invariant feature representation that cannot discriminate between source and target domains. Intuitively, the tasks of adversarial training and of domain-invariant representation have a similar goal: given a source (natural) domainX and a the target (adversarial) domainX ′, we hope to achieve g(X) ≈ g(X ′), where g a feature representation function (i.e., neural network). Achieving such a dual representation intuitively yields a more general feature representation.
In a comprehensive battery of experiments on MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009) datasets, we demonstrate that by enforcing domain-invariant representation learning using DANN simultaneously with adversarial training, we gain a significant and consistent improvement in both robustness and natural accuracy compared to other state-of-the-art adversarial training methods, under AutoAttack (Croce & Hein, 2020) and various strong PGD (Madry et al., 2017), CW (Carlini & Wagner, 2017b) adversaries in white-box and black-box settings. Additionally, we evaluate our method using unforeseen "natural" corruptions (Hendrycks & Dietterich, 2018), unforeseen adversaries (e.g., `1, `2), transfer learning, and perform ablation studies. Finally, we offer a novel score function for quantifying the robust-natural accuracy tradeoff.
2 RELATED WORK
2.1 DEFENSE METHODS
A variety of theoretically principled (Cohen et al., 2019; Raghunathan et al., 2018a; Sinha et al., 2017; Raghunathan et al., 2018b; Wong et al., 2018; Wong & Kolter, 2018; Gowal et al., 2018)
and empirical defense approaches (Bai et al., 2021) were proposed to enhance robustness since the discovery of adversarial examples. Among the empirical defence techniques we can find adversarial regularization (Kurakin et al., 2016a; Madry et al., 2017; Zhang et al., 2019b; Wang et al., 2019b; Kannan et al., 2018), curriculum-based adversarial training (Cai et al., 2018; Zhang et al., 2020; Wang et al., 2019a), ensemble adversarial training (Tramèr et al., 2017; Pang et al., 2019; Yang et al., 2020), adversarial training with adaptive attack budget (Ding et al., 2018; Cheng et al., 2020), semi-supervised and unsupervised adversarial training (Carmon et al., 2019; Uesato et al., 2019; Zhai et al., 2019), robust self/pre-training (Jiang et al., 2020; Chen et al., 2020), efficient adversarial training (Shafahi et al., 2019; Wong et al., 2020; Andriushchenko & Flammarion, 2020; Zhang et al., 2019a), and many other techniques (Zhang & Wang, 2019; Goldblum et al., 2020; Pang et al., 2020b; Lee et al., 2020). In an additional research direction, researchers suggested to add new dedicated building blocks to the network architecture for improved robustness (Xie & Yuille, 2019; Xie et al., 2019a; Liu et al., 2020). Liu et al. (2020) hypothesised that different adversaries belong to different domains, and suggested gated batch normalization which is trained with multiple perturbation types. Others focused on searching robust architectures against adversarial examples (Guo et al., 2020).
Our work belongs to the the family of adversarial regularization techniques, for which we elaborate on common and best performing methods, and highlight the differences compared to our method.
Madry et al. (2017) proposed a technique, commonly referred to as Adversarial Training (AT), to minimize the cross entropy loss on adversarial examples generated by PGD. Zhang et al. (2019b) suggested to decompose the prediction error for adversarial examples as the sum of the natural error and boundary error, and provided a differentiable upper bounds on both terms. Motivated by this decomposition, they suggested a technique called TRADES that uses the Kullback-Leibler (KL) divergence as a regularization term that will push the decision boundary away from the data. Wang et al. (2019b) suggested that misclassified examples have a significant impact on final robustness, and proposed a technique called MART that differentiate between correctly classified and missclassified examples during training.
Another area of research aims at revealing the connection between the loss weight landscape and adversarial training (Prabhu et al., 2019; Yu et al., 2018; Wu et al., 2020). Specifically, Wu et al. (2020) identified a correlation between the flatness of weight loss landscape and robust generalization gap. They proposed the Adversarial Weight Perturbation (AWP) mechanism that is integrated into existing adversarial training methods. More recently, this approach was formalized from a theoretical standpoint by Tsai et al. (2021). However, this method forms a double-perturbation mechanism that perturbs both inputs and weights, which may incur a significant increase in calculation overhead. Nevertheless, we show that DIAL still improves state-of-the-art results when combined with AWP.
A related approach to ours, called ATDA, was presented by Song et al. (2018). They proposed to add several constrains to the loss function in order to enforce domain adaptation: correlation alignment and maximum mean discrepancy (Borgwardt et al., 2006; Sun & Saenko, 2016). While the objective is similar, using ideas from domain adaptation for learning better representation, we address it in two different ways. Our method fundamentally differs from Song et al. (2018) since we do not enforce domain adaptation by adding specific constrains to the loss function. Instead, we let the network learn the domain invariant representation directly during the optimization process, as suggested by Ganin & Lempitsky (2015); Ganin et al. (2016). Moreover, Song et al. (2018) focused mainly of FGSM. We empirically demonstrate the superiority of our method in Section 4. In a concurrent work, Qian et al. (2021) utilized the idea of exploiting local and global data information, and suggested to generate the adversarial examples by attacking an additional domain classifier.
2.2 ROBUST GENERALIZATION
Several works investigated the sample complexity requires the ensure adversarial generalization compared to the non-adversarial counterpart. Schmidt et al. (2018) has shown that there exists a distribution (mixture of Gaussians) where ensuring robust generalization necessarily requires more data than standard learning. This has been furthered investigated in a distribution-free models via the Rademacher Complexity, VC-dimension (Yin et al., 2019; Attias et al., 2019; Khim & Loh, 2018; Awasthi et al., 2020; Cullina et al., 2018; Montasser et al., 2019; Tsai et al., 2021) and additional settings (Diochnos et al., 2018; Carmon et al., 2019).
3 DOMAIN INVARIANT ADVERSARIAL LEARNING APPROACH
In this section, we introduce our Domain Invariant Adversarial Learning (DIAL) approach for adversarial training. The source domain is the natural dataset, and the target domain is generated using an adversarial attack on the natural domain. We aim to learn a model that has low error on the source (natural) task (e.g., classification) while ensuring that the internal representation cannot discriminate between the natural and adversarial domains. In this way, we enforce additional regularization on the feature representation, which enhances the robustness.
3.1 MODEL ARCHITECTURE AND REGULARIZED LOSS FUNCTION
Let us define the notation for our domain invariant robust architecture and loss. Let Gf (·; θf ) be the feature extractor neural network with parameters θf . Let Gy(·; θy) be the label classifier with parameters θy , and letGd(·; θd) the domain classifier with parameters θd. That is,Gy(Gf (·; θf ); θy) is essentially the standard model (e.g., wide residual network (Zagoruyko & Komodakis, 2016)), while in addition, we have a domain classification layer to enforce a domain invariant on the feature representation. An illustration of the architecture is presented in Figure 1.
Given a training set {(xi, yi)}ni=1, the natural loss is defined as:
Lynat = 1n ∑n i=1 CE(Gy(Gf (xi; θf ); θy), yi).
We consider two basic forms of the robust loss. One is the standard cross-entropy (CE) loss between the predicted probabilities and the actual label, which we refer to later as DIALCE. The second is the Kullback-Leibler (KL) divergence between the adversarial and natural model outputs (logits), as in Zhang et al. (2019b); Wang et al. (2019b), which we refer to as DIALKL.
LCErob = 1n ∑n i=1 CE(Gy(Gf (x ′ i; θf ); θy), yi),
LKLrob = 1n ∑n i=1 KL(Gf (x ′ i; θf ) ‖ Gf (xi; θf )),
where {(x′i, yi)}ni=1 are the generated corresponding adversarial examples. Next, we define source domain label di as 0 (for natural examples) and target domain label d ′
i as 1 (for adversarial examples). Then, the natural and adversarial domain losses are defined as:
Ldnat = 1n ∑n i=1 CE(Gd(Gf (xi; θf ); θd), di),
Ldadv = 1n ∑n i=1 CE(Gd(Gf (x ′ i; θf ); θd), d ′ i).
We can now define the full domain invariant robust loss:
DIALCE = Lynat + λLCErob − r(Ldnat + Ldadv),
DIALKL = Lynat + λLKLrob − r(Ldnat + Ldadv).
The goal is to minimize the loss on the natural and adversarial classification while maximizing the loss for the domains. The reversal-ratio hyper-parameter r is inserted into the network layers as a gradient reversal layer (Ganin & Lempitsky, 2015; Ganin et al., 2016) that leaves the input unchanged during forward propagation and reverses the gradient by multiplying it with a negative scalar during the back-propagation. The reversal-ratio parameter is initialized to a small value and is gradually increased to r, as the main objective converges. This enforces a domain-invariant representation as the training progress: a larger value enforces a higher fidelity to the domain. A comprehensive algorithm description can be found in Appendix A.
Modularity and semi-supervised extensions. We note that the domain classifier is a modular component that can be integrated into existing models for further improvements. Moreover, since the domain classifier does not require the class labels, additional unlabeled data can be leveraged in future work for improved results.
3.2 THE BENEFITS OF INVARIANT REPRESENTATION TO ADVERSARIAL EXAMPLES
The motivation behind the proposed method is to enforce an invariant feature representation to adversarial perturbations. Given a natural example x and its adversarial counterpart x′, if the domain classifier manages to distinguish between them, this means that the perturbation has induced a significant difference in the feature representation. We impose an additional loss on the natural and adversarial domains in order to discourage this behavior.
We demonstrate that the feature representation layer does not discriminate between natural and adversarial examples, namely Gf (x; θf ) ≈ Gf (x′; θf ). Figure 2 presents the scaled mean and standard deviation (std) of the absolute differences between the natural examples from test and their corresponding adversarial examples on different features from the feature representation layer. Smaller differences in the mean and std imply a higher domain invariance — and indeed, DIAL achieves near-zero differences almost across the board. Moreover, DIAL’s feature-level invariance almost consistently outperforms the naturally trained model and the model trained using standard adversarial training techniques (Madry et al., 2017). We provide additional features visualizations in Appendix H.
4 EXPERIMENTS
In this section we conduct comprehensive experiments to emphasise the effectiveness of DIAL, including evaluations under white-box and black-box settings, robustness to unforeseen adversaries, robustness to unforeseen corruptions, transfer learning, and ablation studies. Finally, we present a new measurement to test the balance between robustness and natural accuracy, which we named F1-robust score.
4.1 A CASE STUDY ON SVHN AND CIFAR-100
In the first part of our analysis, we conduct a case study experiment on two benchmark datasets: SVHN (Netzer et al., 2011) and CIFAR-100 Krizhevsky et al. (2009). We follow common experiment settings as in Rice et al. (2020); Wu et al. (2020). We used the PreAct ResNet-18 (He et al., 2016) architecture on which we integrate a domain classification layer. The adversarial training is done using 10-step PGD adversary with perturbation size of = 0.031 and a step size of 0.003 for SVHN and 0.007 for CIFAR-100. The batch size is 128, weight decay is 7e−4 and the model is trained for 100 epochs. For SVHN, the initial learinnig rate is set to 0.01 and decays by a factor of 10 after 55, 75 and 90 iteration. For CIFAR-100, the initial learning rate is set to 0.1 and decays by a factor of 10 after 75 and 90 iterations. Results are averaged over 3 restarts while omitting one standard deviation. As can be seen by the results in Tables 1 and 2, DIAL presents consistent improvement in robustness (e.g., 5.75% improved robustness on SVHN against AA) compared to the standard AT while also improving the natural accuracy. More results are presented in Appendix B.
4.2 BENCHMARKING THE STATE-OF-THE-ART ROBUSTNESS
In this part, we evaluate the performance of DIAL compared to other state-of-the-art methods on CIFAR-10. We follow the same experiment setups as in Madry et al. (2017); Wang et al. (2019b);
Zhang et al. (2019b). When experiment settings are not identical between tested methods, we choose the most commonly used settings, and apply it to all experiments. This way, we keep the comparison as fair as possible and avoid reporting changes in results which are caused by inconsistent experiment settings (Pang et al., 2020a). To show that our results are not caused because of what is referred to as obfuscated gradients (Athalye et al., 2018), we evaluate our method with same setup as in our defense model, under strong attacks (e.g., PGD1000) in both white-box, black-box settings, AutoAttack (Croce & Hein, 2020), unforeseen "natural" corruptions (Hendrycks & Dietterich, 2018), and unforeseen adversaries. To make sure that the reported improvements are not caused by adversarial overfitting (Rice et al., 2020), we report best robust results for each method on average of 3 restarts, while omitting one standard deviation. Additional results for CIFAR-10 as well as comprehansive evaluation on MNIST can be found in Appendix D and E.
CIFAR-10 setup. We use the wide residual network (WRN-34-10) (Zagoruyko & Komodakis, 2016) architecture. Sidelong this architecture, we integrate a domain classification layer. To generate the adversarial domain dataset, we use a perturbation size of = 0.031. We apply 10 of inner maximization iterations with perturbation step size of 0.007. Batch size is set to 128, weight decay is set to 7e−4, and the model is trained for 100 epochs. Similar to the other methods, the initial learning rate was set to 0.1, and decays by a factor of 10 at iteration 75 and 90. We also introduce a version of our method that incorporates the AWP double-perturbation mechanism, named DIAL-AWP. For black-box attacks, we used two types of surrogate models (1) surrogate model trained independently without adversarial training, with natural accuracy of 95.61% and (2) surrogate model trained using one of the adversarial training methods. Additional training details can be found in Appendix C.
White-box/Black-box robustness. As reported in Table 3, our method achieves better robustness over the other state-of-the-art methods with respect to the different attacks. Specifically, in white-box settings, we see that our method improves robustness over Madry et al. (2017) by more than 2%, and roughly 2% over TRADES using the common PGD20 attack while keeping higher natural accuracy. We also observe better natural accuracy of 1.65% over MART while also achieving better robustness over all attacks. Moreover, our method presents significant improvement of up to 15% compared to the the domain invariant method suggested by Song et al. (2018) (ATDA). When incorporating AWP, our method improves the TRADES-AWP variant by almost 2% Additional results are available in Appendix E. When tested on black-box settings, DIALCE presents a significant improvement of more than 4.4% over the second-best performing method, and up to 13%. In Table 4, we also present the black-box results when the source model is taken from one of the adversarially trained models. In addition to the improvement in black-box robustness, DIALCE also manages to achieve better clean accuracy of more than 4.5% over the second-best performing method. Moreover, based on the
auto-attack leader-board 1, our method achieves the 1st place among models without additional data using the WRN-34-10 architecture.
4.2.1 ROBUSTNESS TO UNFORESEEN ATTACKS AND CORRUPTIONS
Unforeseen Adversaries. To further demonstrate the effectiveness of our approach, we test our method against various adversaries that were not used during the training process. We attack the model under the white-box settings with `2-PGD, `1-PGD, `∞-DeepFool and `2-DeepFool (Moosavi-Dezfooli et al., 2016) adversaries using Foolbox (Rauber et al., 2017). We applied commonly used attack budget (perturbation for PGD adversaries and overshot for DeepFool adversaries) with 20 iterations for the PGD adversaries and 50 for the DeepFool adversaries. Results are presented in Table 5. As can be seen by the results, our approach gains an improvement of up to 4.73% over the second best method under the various attack types and average improvement of 3.7% over all threat models.
Unforeseen Corruptions. We further demonstrate that our method consistently holds against unforeseen "natural" corruptions, consists of 18 unforeseen diverse corruption types proposed by Hendrycks & Dietterich (2018) on CIFAR-10, which we refer to as CIFAR10-C. The CIFAR10-C benchmark covers noise, blur, weather, and digital categories. As can be shown in Figure 3, our method gains a significant and consistent improvement over all the other methods. Our approach leads to an average improvement of 4.7% with minimum improvement of 3.5% and maximum improvement of 5.9% compared to the second best method over all unforeseen attacks. See Appendix F for the full experiment results.
4.2.2 TRANSFER LEARNING
Recent works (Salman et al., 2020; Utrera et al., 2020) suggested that robust models transfer better on standard downstream classification tasks. In Table 6 we demonstrate the advantage of our method when applied for transfer learning across CIFAR10 and CIFAR100 using the common linear evaluation protocol. see Appendix G for detailed experiment settings.
4.2.3 ABLATION STUDIES
In this part, we conduct ablation studies to further investigate the contribution of the additional domain head component introduced in our method. Experiment configuration are as in 4.2, and robust accuracy is reported on white-box PGD20. We use the CIFAR-10 dataset and train WRN-3410. We remove the domain head from both DIALKL and DIALCE (equivalent to r = 0) and report the natural and robust accuracy. We perform 3 random restarts and omit one standard deviation from the results. Results are presented in Figure 4. Both DIAL variants exhibits stable improvements on both natural accuracy and robust accuracy. DIALCE and DIALKL present an improvement of 1.82% and 0.33% on natural accuracy and 2.5% and 1.87% on robust accuracy, respectively.
4.2.4 VISUALIZING DIAL
To further illustrate our method, we visualize the model outputs using the different methods under natural test data and adversarial test data generated using PGD20 white-box attack with step size 0.003 and = 0.031 on CIFAR-10. Figure 5 shows the embedding received after applying t-SNE (Van der Maaten & Hinton, 2008) with two components on the model output for our method and for TRADES. DIAL seems to preserve strong separation between classes on both natural test data and adversarial test data. Additional illustrations for the other methods are attached in Appendix H.
4.3 BALANCED MEASUREMENT FOR ROBUST AND NATURAL ACCURACY
One of the goals of our method is to better balance between robust and natural accuracy under a given model. For a balanced metric, we adopt the idea of F1-score, which is the harmonic mean between the precision and recall. However, rather than using precision and recall, we measure the F1-score between robustness and natural accuracy, using a measure we call the F1-robust score.
F1-robust = true_robust
true_robust + 12 (false_robust + false_natural) ,
where true_robust are the adversarial examples that were correctly classified, false_robust are the adversarial examples that where miss-classified, and false_natural are the natural examples that were miss-classified. We tested the proposed F1-robust score using PGD20 on CIFAR-10 dataset in whitebox and black-box settings. Results are presented in Table 7 and show that our method achieves the best F1-robust score in both settings, which supports our findings from previous sections.
5 CONCLUSION
In this paper, we investigated the hypothesis that domain invariant representation can be beneficial for robust learning. With this idea in mind, we proposed a new adversarial learning method, called Domain Invariant Adversarial Learning (DIAL) that incorporates Domain Adversarial Neural Network into the adversarial training process. The proposed method is generic and can be combined with any network architecture in a wide range of tasks. Our evaluation process included strong adversaries , unforeseen adversaries , unforeseen corruptions, transfer learning tasks, and ablation studies. Using the extensive empirical analysis, we demonstrate the significant and consistent improvement obtained by DIAL in both robustness and natural accuracy compared to other defence methods on benchmark datasets.
ETHICS STATEMENT
We proposed DIAL to improve models’ robustness against adversarial attacks. We hope that it will help in building more secure models for real-world applications. DIAL is comparable to the stateof-the-art methods we tested in terms of training times and other resources. That said, this work is not without limitations: adversarial training is still a computationally expensive procedure that requires extra computations compared to standard training, with the concomitant environmental costs. Even though our method introduced better standard accuracy, adversarial training still degrades the standard accuracy. Moreover, models are trained to be robust using well known threat models such as the bounded `p norms. However, once a model is deployed, we cannot control the type of attacks it faces from sophisticated adversaries. Thus, the general problem is still very far from being fully solved.
REPRODUCIBILITY STATEMENT
In this paper, great efforts were made to ensure that comparison is fair, and all necessary information for reproducibility is present. Section 4 and Appendix B and C contains all experiment settings for SVHN, CIFAR-10 and CIFAR-100 experiments. Appendix D contains all experiment details and results for MNIST experiments. Appendix G contains experiment settings for the transfer learning experiment. In the supplementary material, we provided the source code to train and evaluate DIAL.
A DOMAIN INVARIANT ADVERSARIAL LEARNING ALGORITHM
Algorithm 1 describes a pseudo-code of our proposed DIALCE variant. As can be seen, a target domain batch is not given in advance as with standard domain-adaptation task. Instead, for each natural batch we generate a target batch using adversarial training. The loss function is composed of natural and adversarial losses with respect to the main task (e.g., classification), and from natural and adversarial domain losses. By maximizing the losses on the domain we aim to learn a feature representation which is invariant to the natural and adversarial domain, and therefore more robust.
Algorithm 1: Domain Invariant Adversarial Learning
Input: Source data S = {(x_i, y_i)}_{i=1}^{n} and network architecture Gf, Gy, Gd
Parameters: Batch size m, perturbation size ε, PGD attack step size τ, adversarial trade-off λ, initial reversal ratio r, and step size α
Initialize: Y_0 and Y_1, source and target domain vectors filled with 0 and 1 respectively
Output: Robust network G = (Gf, Gy, Gd) parameterized by θ̂ = (θf, θy, θd) respectively
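Since only the header of the algorithm survives here, the following is a minimal, hypothetical sketch of one DIALCE training step consistent with the description above. The PGD routine, the reversal-ratio schedule, and all names are our own assumptions rather than the authors' released implementation; Gd is assumed to begin with a gradient reversal layer scaled by r, so the −r(L^d_nat + L^d_adv) term is realized implicitly.

```python
import torch
import torch.nn.functional as F

def dial_ce_step(Gf, Gy, Gd, x, y, optimizer, lam, pgd_attack):
    # Generate the target (adversarial) batch from the current natural batch.
    x_adv = pgd_attack(lambda inp: Gy(Gf(inp)), x, y)
    d_nat = torch.zeros(len(x), dtype=torch.long, device=x.device)   # source domain label 0
    d_adv = torch.ones(len(x), dtype=torch.long, device=x.device)    # target domain label 1

    feat_nat, feat_adv = Gf(x), Gf(x_adv)
    loss_nat = F.cross_entropy(Gy(feat_nat), y)                      # natural task loss
    loss_rob = F.cross_entropy(Gy(feat_adv), y)                      # adversarial task loss
    # Because of the reversal layer inside Gd, minimizing these terms maximizes the
    # domain losses with respect to the feature extractor Gf.
    loss_dom = F.cross_entropy(Gd(feat_nat), d_nat) + F.cross_entropy(Gd(feat_adv), d_adv)

    loss = loss_nat + lam * loss_rob + loss_dom
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```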
B ADDITIONAL RESULTS ON CIFAR-100 AND SVHN
Defense Model    Natural | White-box: PGD20  PGD100  PGD1000  CW∞ | Black-Box: PGD20  PGD100  PGD1000  CW∞ | AA
TRADES           90.35   | 57.10  54.13  54.08  52.19             | 86.89  86.73  86.57  86.70             | 49.5
DIALKL (Ours)    90.66   | 58.91  55.30  55.11  53.67             | 87.62  87.52  87.41  87.63             | 51.00
DIALCE (Ours)    92.88   | 55.26  50.82  50.54  49.66             | 89.12  89.01  88.74  89.10             | 46.52
C CIFAR-10 ADDITIONAL EXPERIMENTAL SETUP DETAILS
Additional defence setup. To be consistent with other methods, the natural images are padded with 4-pixel padding with 32-random crop and random horizontal flip. Furthermore, all methods are trained using SGD with momentum 0.9. For DIALKL, we balance the robust loss with λ = 6 and the domain loss with r = 4. For DIALCE, we balance the robust loss with λ = 1 and the domain loss with r = 2. For DIAL-AWP, we used the same learning rate schedule used in Wu et al. (2020), where the initial 0.1 learning rate decays by a factor of 10 after 100 and 150 iterations.
D BENCHMARKING THE STATE-OF-THE-ART ON MNIST
Defence setup. We use the same CNN architecture as used in Zhang et al. (2019b), which consists of four convolutional layers and three fully-connected layers. Alongside this architecture, we integrate a domain classification layer. To generate the adversarial domain dataset, we use a perturbation size of ε = 0.3. We apply 40 iterations of inner maximization with a perturbation step size of 0.01. Batch size is set to 128 and the model is trained for 100 epochs. Similar to the other methods, the initial learning rate was set to 0.01, and decays by a factor of 10 after 55, 75 and 90 iterations. All the models in the experiment are trained using SGD with momentum 0.9. For our method, we balance the robust loss with λ = 6 and the domain loss with r = 0.1.
White-box/Black-box robustness. We evaluate all defense models using PGD40, PGD100, PGD1000 and CW∞ (the ℓ∞ version of the Carlini & Wagner (2017b) attack optimized by PGD-100) with step size 0.01. We constrain all attacks by the same perturbation ε = 0.3. For our black-box setting, we use a naturally trained surrogate model with natural accuracy of 99.51%. As reported in Table 10, our method achieves improved robustness over the other methods under the different attack types, while preserving the same level of natural accuracy, and even surpassing the naturally trained model. We should note that in general, the improvement margin on MNIST is more moderate compared to CIFAR-10, since MNIST is an easier task than CIFAR-10 and the robustness range is already high to begin with. Additional results are available in Appendix E.
E ADDITIONAL RESULTS ON MNIST AND CIFAR-10
In Table 11 we present additional results using the PGD1000 threat model. We use a step size of 0.003 and constrain the attacks by the same perturbation ε = 0.031. Table 12 presents a comparison of our method combined with AWP to the other AWP variants presented in Wu et al. (2020). In addition, in Table 13 we add the F1-robust scores for different variants of AWP.
F EXTENDED RESULTS ON UNFORESEEN CORRUPTIONS
We present full accuracy results against unforeseen corruptions in Tables 14 and 15. We also visualize it in Figure 6.
Table 14: Accuracy (%) against unforeseen corruptions.
Defense Model  brightness  defocus blur  fog    glass blur  jpeg compression  motion blur  saturate  snow   speckle noise
TRADES         82.63       80.04         60.19  78.00       82.81             76.49        81.53     80.68  80.14
MART           80.76       78.62         56.78  76.60       81.26             74.58        80.74     78.22  79.42
AT             83.30       80.42         60.22  77.90       82.73             76.64        82.31     80.37  80.74
ATDA           72.67       69.36         45.52  64.88       73.22             63.47        72.07     68.76  72.27
DIAL (Ours)    87.14       84.84         66.08  81.82       87.07             81.20        86.45     84.18  84.94
Table 15: Accuracy (%) against unforeseen corruptions.
G TRANSFER LEARNING SETTINGS
The models used are the same models from previous experiments. We follow the common procedure of the “fixed-feature” setting, where only a linear layer on top of the pre-trained network is trained. We train a linear classifier on CIFAR-100 on top of the pre-trained network which was trained on CIFAR-10. We also train a linear classifier on CIFAR-10 on top of the pre-trained network which was trained on CIFAR-100. We train the linear classifier for 100 epochs with an initial learning rate of 0.1, which is decayed by a factor of 10 at epochs 50 and 75. We used the SGD optimizer with momentum 0.9.
H EXTENDED VISUALIZATIONS
In Figure 8, we provide additional visualizations of the different adversarial training methods presented above. We visualize the model outputs using t-SNE with two components on the natural test data and the corresponding adversarial test data generated using a PGD20 white-box attack with step size 0.003 and ε = 0.031 on CIFAR-10.
In Figure 7 we visualize statistical differences between natural and adversarial examples in the feature representation layer. Specifically, we show the differences in mean and std on thirty random feature values from the feature representation layer as we pass the natural test examples and their corresponding adversarial examples through the network. We present the results on the same network architecture (WRN-34-10), trained using three different training procedures: a naturally trained network, a network trained using standard adversarial training (AT) (Madry et al., 2017), and DIAL, on the CIFAR-10 dataset. When the statistical characteristics of each feature differ from each other, it implies that the feature layer is less domain invariant. That is, smaller differences in mean/std yield a better invariance to adversarial examples. One can observe that for DIAL, there are almost no differences between the mean/std of natural examples and their corresponding adversarial examples. Moreover, for the vast majority of the features, DIAL presents smaller differences compared to the naturally trained model and the model trained with standard adversarial training. Best viewed in colors. | 1. What is the focus of the paper regarding domain adaptation?
2. What are the strengths of the proposed method, particularly its intuitive motivation and comprehensive experiments?
3. What are the weaknesses of the paper, especially concerning its novelty compared to prior works in domain adaptation?
4. Do you have any concerns or questions about the paper's content or approach? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes DIAL to learn domain-invariant representations for clean and adversarial examples to improve model robustness and clean accuracy. The main idea is to treat the problem as a domain adaptation problem by considering the data shift between adversarial and clean distributions, and then use the generative adversarial network (GAN) principle to tackle this data shift.
Review
Pros:
(1) The paper is clearly written and easy to follow.
(2) The motivation behind is very intuitive.
(3) The paper conducts extensive experiments including multiple ℓp-norm adversarial perturbations and unseen corruptions.
Cons:
(1) My biggest concern is the novelty of this paper. Though showing promising performance, the idea of learning a feature extractor to minimize the distance between adversarial and clean distributions/domains has been widely studied and adopted before in the domain adaptation (DA) literature. In this paper, the authors simply introduced several DA loss terms and used the GAN framework to learn a more robust model. The experimental results are persuasive; however, the approach is too simple and not novel enough.
(2) Some minor problems. I cannot find your paper in the AutoAttack leaderboard as you mentioned in the first line of page 7. |
ICLR | Title
Domain Invariant Adversarial Learning
Abstract
The phenomenon of adversarial examples illustrates one of the most basic vulnerabilities of deep neural networks. Among the variety of techniques introduced to surmount this inherent weakness, adversarial training has emerged as the most effective strategy to achieve robustness. Typically, this is achieved by balancing robust and natural objectives. In this work, we aim to further optimize the tradeoff between robust and standard accuracy by enforcing a domain-invariant feature representation. We present a new adversarial training method, Domain Invariant Adversarial Learning (DIAL), which learns a feature representation that is both robust and domain invariant. DIAL uses a variant of Domain Adversarial Neural Network (DANN) on the natural domain and its corresponding adversarial domain. In the case where the source domain consists of natural examples and the target domain is the adversarially perturbed examples, our method learns a feature representation constrained not to discriminate between the natural and adversarial examples, and can therefore achieve a more robust representation. Our experiments indicate that our method improves both robustness and standard accuracy, when compared to other state-of-the-art adversarial training methods.
1 INTRODUCTION
Deep learning models have achieved impressive success on a wide range of challenging tasks. However, their performance was shown to be brittle to adversarial examples: small, imperceptible perturbations in the input that drastically alter the classification (Carlini & Wagner, 2017a;b; Goodfellow et al., 2014; Kurakin et al., 2016b; Moosavi-Dezfooli et al., 2016; Szegedy et al., 2013; Tramèr et al., 2017; Dong et al., 2018; Tabacof & Valle, 2016; Xie et al., 2019b; Rony et al., 2019). Designing reliable robust models has gained significant attention in the arms race against adversarial examples. Adversarial training (Szegedy et al., 2013; Goodfellow et al., 2014; Madry et al., 2017; Zhang et al., 2019b) has been suggested as one of the most effective approaches to defend against such examples, and can be described as solving the following min-max optimization problem:
min_θ E_{(x,y)∼D} [ max_{x′: ‖x′−x‖_p ≤ ε} ℓ(x′, y; θ) ],
where x′ is the ε-bounded perturbation in the ℓp norm and ℓ is the loss function. Different unrestricted attack methods were also suggested, such as adversarial deformations, rotations, translations and more (Brown et al., 2018; Engstrom et al., 2018; Xiao et al., 2018; Alaifari et al., 2018; Gilmer et al., 2018).
The resulting min-max optimization problem can be hard to solve in general. Nevertheless, in the context of ε-bounded perturbations, the problem is often tractable in practice. The inner maximization is usually approximated by generating adversarial examples using projected gradient descent (PGD) (Kurakin et al., 2016a; Madry et al., 2017). A PGD adversary starts with a randomly initialized perturbation and iteratively adjusts the perturbation while projecting it back onto the ε-ball:
x_{t+1} = Π_{B_ε(x_0)} ( x_t + α · sign(∇_{x_t} ℓ(G(x_t), y)) ),
where x_0 is the natural example (with or without random noise), Π_{B_ε(x)} is the projection operator onto the ε-ball, G is the network, and α is the perturbation step size. As was shown by Athalye et al. (2018), PGD-based adversarial training was one of the few defenses that were not broken under strong attacks.
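For reference, a minimal ℓ∞ PGD sketch of the update above follows; this is our own illustration, not the authors' code. The default hyperparameters match the common CIFAR-10 setting (ε = 0.031, step size 0.007) used later in the paper.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.031, step=0.007, iters=20):
    # Random start inside the eps-ball, clipped to valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()      # gradient-sign ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)         # project back onto the eps-ball around x
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```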
That said, the gap between robust and natural accuracy remains large for many tasks such as CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009). Generally speaking, Tsipras et al. (2018) suggested that robustness may be at odds with natural accuracy, and usually the trade-off is inherent. Nevertheless, a growing body of work aimed to improve the standard PGD-based adversarial training introduced by Madry et al. (2017) in various ways such as improved adversarial loss functions and regularization techniques (Kannan et al., 2018; Wang et al., 2019b; Zhang et al., 2019b), semi-supervised approaches(Carmon et al., 2019; Uesato et al., 2019; Zhai et al., 2019), adversarial perturbations on model weights (Wu et al., 2020), utilizing out of distribution data (Lee et al., 2021) and many others. See related work for more details.
Our contribution. In this work, we propose a novel approach to regulating the tradeoff between robustness and natural accuracy. In contrast to the aforementioned works, our method enhances adversarial training by enforcing a feature representation that is invariant across the natural and adversarial domains. We incorporate the idea of Domain-Adversarial Neural Networks (DANN) (Ganin & Lempitsky, 2015; Ganin et al., 2016) directly into the adversarial training process. DANN is a representation learning approach for domain adaptation, designed to ensure that predictions are made based on an invariant feature representation that cannot discriminate between source and target domains. Intuitively, the tasks of adversarial training and of domain-invariant representation have a similar goal: given a source (natural) domain X and a target (adversarial) domain X′, we hope to achieve g(X) ≈ g(X′), where g is a feature representation function (i.e., a neural network). Achieving such a dual representation intuitively yields a more general feature representation.
In a comprehensive battery of experiments on MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009) datasets, we demonstrate that by enforcing domain-invariant representation learning using DANN simultaneously with adversarial training, we gain a significant and consistent improvement in both robustness and natural accuracy compared to other state-of-the-art adversarial training methods, under AutoAttack (Croce & Hein, 2020) and various strong PGD (Madry et al., 2017) and CW (Carlini & Wagner, 2017b) adversaries in white-box and black-box settings. Additionally, we evaluate our method using unforeseen "natural" corruptions (Hendrycks & Dietterich, 2018), unforeseen adversaries (e.g., ℓ1, ℓ2), transfer learning, and perform ablation studies. Finally, we offer a novel score function for quantifying the robust-natural accuracy tradeoff.
2 RELATED WORK
2.1 DEFENSE METHODS
A variety of theoretically principled (Cohen et al., 2019; Raghunathan et al., 2018a; Sinha et al., 2017; Raghunathan et al., 2018b; Wong et al., 2018; Wong & Kolter, 2018; Gowal et al., 2018)
and empirical defense approaches (Bai et al., 2021) were proposed to enhance robustness since the discovery of adversarial examples. Among the empirical defence techniques we can find adversarial regularization (Kurakin et al., 2016a; Madry et al., 2017; Zhang et al., 2019b; Wang et al., 2019b; Kannan et al., 2018), curriculum-based adversarial training (Cai et al., 2018; Zhang et al., 2020; Wang et al., 2019a), ensemble adversarial training (Tramèr et al., 2017; Pang et al., 2019; Yang et al., 2020), adversarial training with adaptive attack budget (Ding et al., 2018; Cheng et al., 2020), semi-supervised and unsupervised adversarial training (Carmon et al., 2019; Uesato et al., 2019; Zhai et al., 2019), robust self/pre-training (Jiang et al., 2020; Chen et al., 2020), efficient adversarial training (Shafahi et al., 2019; Wong et al., 2020; Andriushchenko & Flammarion, 2020; Zhang et al., 2019a), and many other techniques (Zhang & Wang, 2019; Goldblum et al., 2020; Pang et al., 2020b; Lee et al., 2020). In an additional research direction, researchers suggested to add new dedicated building blocks to the network architecture for improved robustness (Xie & Yuille, 2019; Xie et al., 2019a; Liu et al., 2020). Liu et al. (2020) hypothesised that different adversaries belong to different domains, and suggested gated batch normalization which is trained with multiple perturbation types. Others focused on searching robust architectures against adversarial examples (Guo et al., 2020).
Our work belongs to the family of adversarial regularization techniques, for which we elaborate on the common and best performing methods, and highlight the differences compared to our method.
Madry et al. (2017) proposed a technique, commonly referred to as Adversarial Training (AT), to minimize the cross entropy loss on adversarial examples generated by PGD. Zhang et al. (2019b) suggested decomposing the prediction error for adversarial examples as the sum of the natural error and boundary error, and provided differentiable upper bounds on both terms. Motivated by this decomposition, they suggested a technique called TRADES that uses the Kullback-Leibler (KL) divergence as a regularization term that pushes the decision boundary away from the data. Wang et al. (2019b) suggested that misclassified examples have a significant impact on final robustness, and proposed a technique called MART that differentiates between correctly classified and misclassified examples during training.
Another area of research aims at revealing the connection between the loss weight landscape and adversarial training (Prabhu et al., 2019; Yu et al., 2018; Wu et al., 2020). Specifically, Wu et al. (2020) identified a correlation between the flatness of weight loss landscape and robust generalization gap. They proposed the Adversarial Weight Perturbation (AWP) mechanism that is integrated into existing adversarial training methods. More recently, this approach was formalized from a theoretical standpoint by Tsai et al. (2021). However, this method forms a double-perturbation mechanism that perturbs both inputs and weights, which may incur a significant increase in calculation overhead. Nevertheless, we show that DIAL still improves state-of-the-art results when combined with AWP.
A related approach to ours, called ATDA, was presented by Song et al. (2018). They proposed to add several constraints to the loss function in order to enforce domain adaptation: correlation alignment and maximum mean discrepancy (Borgwardt et al., 2006; Sun & Saenko, 2016). While the objective is similar, using ideas from domain adaptation for learning a better representation, we address it in two different ways. Our method fundamentally differs from Song et al. (2018) since we do not enforce domain adaptation by adding specific constraints to the loss function. Instead, we let the network learn the domain invariant representation directly during the optimization process, as suggested by Ganin & Lempitsky (2015); Ganin et al. (2016). Moreover, Song et al. (2018) focused mainly on FGSM. We empirically demonstrate the superiority of our method in Section 4. In a concurrent work, Qian et al. (2021) utilized the idea of exploiting local and global data information, and suggested to generate the adversarial examples by attacking an additional domain classifier.
2.2 ROBUST GENERALIZATION
Several works investigated the sample complexity required to ensure adversarial generalization compared to the non-adversarial counterpart. Schmidt et al. (2018) has shown that there exists a distribution (mixture of Gaussians) where ensuring robust generalization necessarily requires more data than standard learning. This has been further investigated in distribution-free models via the Rademacher Complexity, VC-dimension (Yin et al., 2019; Attias et al., 2019; Khim & Loh, 2018; Awasthi et al., 2020; Cullina et al., 2018; Montasser et al., 2019; Tsai et al., 2021) and additional settings (Diochnos et al., 2018; Carmon et al., 2019).
3 DOMAIN INVARIANT ADVERSARIAL LEARNING APPROACH
In this section, we introduce our Domain Invariant Adversarial Learning (DIAL) approach for adversarial training. The source domain is the natural dataset, and the target domain is generated using an adversarial attack on the natural domain. We aim to learn a model that has low error on the source (natural) task (e.g., classification) while ensuring that the internal representation cannot discriminate between the natural and adversarial domains. In this way, we enforce additional regularization on the feature representation, which enhances the robustness.
3.1 MODEL ARCHITECTURE AND REGULARIZED LOSS FUNCTION
Let us define the notation for our domain invariant robust architecture and loss. Let Gf(·; θf) be the feature extractor neural network with parameters θf. Let Gy(·; θy) be the label classifier with parameters θy, and let Gd(·; θd) be the domain classifier with parameters θd. That is, Gy(Gf(·; θf); θy) is essentially the standard model (e.g., a wide residual network (Zagoruyko & Komodakis, 2016)), while in addition, we have a domain classification layer to enforce domain invariance on the feature representation. An illustration of the architecture is presented in Figure 1.
Given a training set {(x_i, y_i)}_{i=1}^{n}, the natural loss is defined as:
L^y_nat = (1/n) ∑_{i=1}^{n} CE(Gy(Gf(x_i; θf); θy), y_i).
We consider two basic forms of the robust loss. One is the standard cross-entropy (CE) loss between the predicted probabilities and the actual label, which we refer to later as DIALCE. The second is the Kullback-Leibler (KL) divergence between the adversarial and natural model outputs (logits), as in Zhang et al. (2019b); Wang et al. (2019b), which we refer to as DIALKL.
L^CE_rob = (1/n) ∑_{i=1}^{n} CE(Gy(Gf(x′_i; θf); θy), y_i),
L^KL_rob = (1/n) ∑_{i=1}^{n} KL(Gf(x′_i; θf) ‖ Gf(x_i; θf)),
where {(x′_i, y_i)}_{i=1}^{n} are the corresponding generated adversarial examples. Next, we define the source domain label d_i as 0 (for natural examples) and the target domain label d′_i as 1 (for adversarial examples). Then, the natural and adversarial domain losses are defined as:
L^d_nat = (1/n) ∑_{i=1}^{n} CE(Gd(Gf(x_i; θf); θd), d_i),
L^d_adv = (1/n) ∑_{i=1}^{n} CE(Gd(Gf(x′_i; θf); θd), d′_i).
We can now define the full domain invariant robust loss:
DIAL_CE = L^y_nat + λ·L^CE_rob − r·(L^d_nat + L^d_adv),
DIAL_KL = L^y_nat + λ·L^KL_rob − r·(L^d_nat + L^d_adv).
The goal is to minimize the loss on the natural and adversarial classification while maximizing the loss for the domains. The reversal-ratio hyper-parameter r is inserted into the network layers as a gradient reversal layer (Ganin & Lempitsky, 2015; Ganin et al., 2016) that leaves the input unchanged during forward propagation and reverses the gradient by multiplying it with a negative scalar during the back-propagation. The reversal-ratio parameter is initialized to a small value and is gradually increased to r as the main objective converges. This enforces a domain-invariant representation as the training progresses: a larger value enforces a higher fidelity to the domain. A comprehensive algorithm description can be found in Appendix A.
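A gradient reversal layer in the spirit of Ganin & Lempitsky (2015) can be written in a few lines; the sketch below is a generic PyTorch implementation for illustration and is not taken from the released DIAL code.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, r):
        ctx.r = r
        return x.view_as(x)                    # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.r * grad_output, None      # reversed, scaled gradient on the way back

def grad_reverse(x, r=1.0):
    """Apply the reversal with ratio r before the domain classifier."""
    return GradReverse.apply(x, r)
```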
Modularity and semi-supervised extensions. We note that the domain classifier is a modular component that can be integrated into existing models for further improvements. Moreover, since the domain classifier does not require the class labels, additional unlabeled data can be leveraged in future work for improved results.
3.2 THE BENEFITS OF INVARIANT REPRESENTATION TO ADVERSARIAL EXAMPLES
The motivation behind the proposed method is to enforce an invariant feature representation to adversarial perturbations. Given a natural example x and its adversarial counterpart x′, if the domain classifier manages to distinguish between them, this means that the perturbation has induced a significant difference in the feature representation. We impose an additional loss on the natural and adversarial domains in order to discourage this behavior.
We demonstrate that the feature representation layer does not discriminate between natural and adversarial examples, namely Gf(x; θf) ≈ Gf(x′; θf). Figure 2 presents the scaled mean and standard deviation (std) of the absolute differences between the natural test examples and their corresponding adversarial examples on different features from the feature representation layer. Smaller differences in the mean and std imply a higher domain invariance, and indeed DIAL achieves near-zero differences almost across the board. Moreover, DIAL’s feature-level invariance almost consistently outperforms the naturally trained model and the model trained using standard adversarial training techniques (Madry et al., 2017). We provide additional feature visualizations in Appendix H.
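The per-feature statistics behind Figure 2 can be gathered with a short routine of the following form; this is an illustrative sketch, and the names Gf, x_nat, and x_adv are our own assumptions.

```python
import torch

def feature_shift_stats(Gf, x_nat, x_adv):
    """Per-feature mean and std of |Gf(x) - Gf(x')| over a batch of natural/adversarial pairs."""
    with torch.no_grad():
        diffs = (Gf(x_nat) - Gf(x_adv)).abs()   # shape: (num_examples, num_features)
    return diffs.mean(dim=0), diffs.std(dim=0)
```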
4 EXPERIMENTS
In this section we conduct comprehensive experiments to emphasise the effectiveness of DIAL, including evaluations under white-box and black-box settings, robustness to unforeseen adversaries, robustness to unforeseen corruptions, transfer learning, and ablation studies. Finally, we present a new measurement to test the balance between robustness and natural accuracy, which we named F1-robust score.
4.1 A CASE STUDY ON SVHN AND CIFAR-100
In the first part of our analysis, we conduct a case study experiment on two benchmark datasets: SVHN (Netzer et al., 2011) and CIFAR-100 (Krizhevsky et al., 2009). We follow common experiment settings as in Rice et al. (2020); Wu et al. (2020). We used the PreAct ResNet-18 (He et al., 2016) architecture, on which we integrate a domain classification layer. The adversarial training is done using a 10-step PGD adversary with perturbation size ε = 0.031 and a step size of 0.003 for SVHN and 0.007 for CIFAR-100. The batch size is 128, weight decay is 7e−4 and the model is trained for 100 epochs. For SVHN, the initial learning rate is set to 0.01 and decays by a factor of 10 after 55, 75 and 90 iterations. For CIFAR-100, the initial learning rate is set to 0.1 and decays by a factor of 10 after 75 and 90 iterations. Results are averaged over 3 restarts while omitting one standard deviation. As can be seen from the results in Tables 1 and 2, DIAL presents a consistent improvement in robustness (e.g., 5.75% improved robustness on SVHN against AA) compared to the standard AT while also improving the natural accuracy. More results are presented in Appendix B.
4.2 BENCHMARKING THE STATE-OF-THE-ART ROBUSTNESS
In this part, we evaluate the performance of DIAL compared to other state-of-the-art methods on CIFAR-10. We follow the same experiment setups as in Madry et al. (2017); Wang et al. (2019b);
Zhang et al. (2019b). When experiment settings are not identical between tested methods, we choose the most commonly used settings and apply them to all experiments. This way, we keep the comparison as fair as possible and avoid reporting changes in results which are caused by inconsistent experiment settings (Pang et al., 2020a). To show that our results are not caused by what is referred to as obfuscated gradients (Athalye et al., 2018), we evaluate our method, with the same setup as in our defense model, under strong attacks (e.g., PGD1000) in both white-box and black-box settings, AutoAttack (Croce & Hein, 2020), unforeseen "natural" corruptions (Hendrycks & Dietterich, 2018), and unforeseen adversaries. To make sure that the reported improvements are not caused by adversarial overfitting (Rice et al., 2020), we report the best robust results for each method averaged over 3 restarts, while omitting one standard deviation. Additional results for CIFAR-10 as well as a comprehensive evaluation on MNIST can be found in Appendix D and E.
CIFAR-10 setup. We use the wide residual network (WRN-34-10) (Zagoruyko & Komodakis, 2016) architecture. Alongside this architecture, we integrate a domain classification layer. To generate the adversarial domain dataset, we use a perturbation size of ε = 0.031. We apply 10 inner maximization iterations with a perturbation step size of 0.007. Batch size is set to 128, weight decay is set to 7e−4, and the model is trained for 100 epochs. Similar to the other methods, the initial learning rate was set to 0.1, and decays by a factor of 10 at iterations 75 and 90. We also introduce a version of our method that incorporates the AWP double-perturbation mechanism, named DIAL-AWP. For black-box attacks, we used two types of surrogate models: (1) a surrogate model trained independently without adversarial training, with natural accuracy of 95.61%, and (2) a surrogate model trained using one of the adversarial training methods. Additional training details can be found in Appendix C.
White-box/Black-box robustness. As reported in Table 3, our method achieves better robustness over the other state-of-the-art methods with respect to the different attacks. Specifically, in the white-box setting, we see that our method improves robustness over Madry et al. (2017) by more than 2%, and by roughly 2% over TRADES using the common PGD20 attack, while keeping higher natural accuracy. We also observe better natural accuracy of 1.65% over MART while also achieving better robustness over all attacks. Moreover, our method presents a significant improvement of up to 15% compared to the domain invariant method suggested by Song et al. (2018) (ATDA). When incorporating AWP, our method improves the TRADES-AWP variant by almost 2%. Additional results are available in Appendix E. When tested in black-box settings, DIALCE presents a significant improvement of more than 4.4% over the second-best performing method, and up to 13%. In Table 4, we also present the black-box results when the source model is taken from one of the adversarially trained models. In addition to the improvement in black-box robustness, DIALCE also manages to achieve better clean accuracy of more than 4.5% over the second-best performing method. Moreover, based on the
auto-attack leader-board 1, our method achieves the 1st place among models without additional data using the WRN-34-10 architecture.
4.2.1 ROBUSTNESS TO UNFORESEEN ATTACKS AND CORRUPTIONS
Unforeseen Adversaries. To further demonstrate the effectiveness of our approach, we test our method against various adversaries that were not used during the training process. We attack the model under the white-box settings with ℓ2-PGD, ℓ1-PGD, ℓ∞-DeepFool and ℓ2-DeepFool (Moosavi-Dezfooli et al., 2016) adversaries using Foolbox (Rauber et al., 2017). We applied commonly used attack budgets (perturbation ε for PGD adversaries and overshoot for DeepFool adversaries) with 20 iterations for the PGD adversaries and 50 for the DeepFool adversaries. Results are presented in Table 5. As can be seen from the results, our approach gains an improvement of up to 4.73% over the second best method under the various attack types and an average improvement of 3.7% over all threat models.
Unforeseen Corruptions. We further demonstrate that our method consistently holds against unforeseen "natural" corruptions, consisting of 18 diverse corruption types proposed by Hendrycks & Dietterich (2018) on CIFAR-10, which we refer to as CIFAR10-C. The CIFAR10-C benchmark covers noise, blur, weather, and digital categories. As shown in Figure 3, our method gains a significant and consistent improvement over all the other methods. Our approach leads to an average improvement of 4.7%, with a minimum improvement of 3.5% and a maximum improvement of 5.9% compared to the second best method over all unforeseen attacks. See Appendix F for the full experiment results.
4.2.2 TRANSFER LEARNING
Recent works (Salman et al., 2020; Utrera et al., 2020) suggested that robust models transfer better on standard downstream classification tasks. In Table 6 we demonstrate the advantage of our method when applied to transfer learning across CIFAR10 and CIFAR100 using the common linear evaluation protocol. See Appendix G for detailed experiment settings.
4.2.3 ABLATION STUDIES
In this part, we conduct ablation studies to further investigate the contribution of the additional domain head component introduced in our method. Experiment configurations are as in Section 4.2, and robust accuracy is reported on white-box PGD20. We use the CIFAR-10 dataset and train WRN-34-10. We remove the domain head from both DIALKL and DIALCE (equivalent to r = 0) and report the natural and robust accuracy. We perform 3 random restarts and omit one standard deviation from the results. Results are presented in Figure 4. Both DIAL variants exhibit stable improvements on both natural accuracy and robust accuracy. DIALCE and DIALKL present an improvement of 1.82% and 0.33% on natural accuracy and 2.5% and 1.87% on robust accuracy, respectively.
4.2.4 VISUALIZING DIAL
To further illustrate our method, we visualize the model outputs of the different methods under natural test data and adversarial test data generated using a PGD20 white-box attack with step size 0.003 and ε = 0.031 on CIFAR-10. Figure 5 shows the embedding received after applying t-SNE (Van der Maaten & Hinton, 2008) with two components on the model output for our method and for TRADES. DIAL seems to preserve a strong separation between classes on both natural test data and adversarial test data. Additional illustrations for the other methods are attached in Appendix H.
4.3 BALANCED MEASUREMENT FOR ROBUST AND NATURAL ACCURACY
One of the goals of our method is to better balance between robust and natural accuracy under a given model. For a balanced metric, we adopt the idea of F1-score, which is the harmonic mean between the precision and recall. However, rather than using precision and recall, we measure the F1-score between robustness and natural accuracy, using a measure we call the F1-robust score.
F1-robust = true_robust / (true_robust + (1/2)·(false_robust + false_natural)),
where true_robust are the adversarial examples that were correctly classified, false_robust are the adversarial examples that were misclassified, and false_natural are the natural examples that were misclassified. We tested the proposed F1-robust score using PGD20 on the CIFAR-10 dataset in white-box and black-box settings. Results are presented in Table 7 and show that our method achieves the best F1-robust score in both settings, which supports our findings from previous sections.
5 CONCLUSION
In this paper, we investigated the hypothesis that domain invariant representation can be beneficial for robust learning. With this idea in mind, we proposed a new adversarial learning method, called Domain Invariant Adversarial Learning (DIAL) that incorporates Domain Adversarial Neural Network into the adversarial training process. The proposed method is generic and can be combined with any network architecture in a wide range of tasks. Our evaluation process included strong adversaries , unforeseen adversaries , unforeseen corruptions, transfer learning tasks, and ablation studies. Using the extensive empirical analysis, we demonstrate the significant and consistent improvement obtained by DIAL in both robustness and natural accuracy compared to other defence methods on benchmark datasets.
ETHICS STATEMENT
We proposed DIAL to improve models’ robustness against adversarial attacks. We hope that it will help in building more secure models for real-world applications. DIAL is comparable to the state-of-the-art methods we tested in terms of training times and other resources. That said, this work is not without limitations: adversarial training is still a computationally expensive procedure that requires extra computations compared to standard training, with the concomitant environmental costs. Even though our method introduced better standard accuracy, adversarial training still degrades the standard accuracy. Moreover, models are trained to be robust using well known threat models such as the bounded ℓp norms. However, once a model is deployed, we cannot control the type of attacks it faces from sophisticated adversaries. Thus, the general problem is still very far from being fully solved.
REPRODUCIBILITY STATEMENT
In this paper, great efforts were made to ensure that the comparison is fair and that all necessary information for reproducibility is present. Section 4 and Appendices B and C contain all experiment settings for the SVHN, CIFAR-10 and CIFAR-100 experiments. Appendix D contains all experiment details and results for the MNIST experiments. Appendix G contains the experiment settings for the transfer learning experiment. In the supplementary material, we provided the source code to train and evaluate DIAL.
A DOMAIN INVARIANT ADVERSARIAL LEARNING ALGORITHM
Algorithm 1 describes the pseudo-code of our proposed DIALCE variant. As can be seen, a target domain batch is not given in advance as in a standard domain-adaptation task. Instead, for each natural batch we generate a target batch using adversarial training. The loss function is composed of natural and adversarial losses with respect to the main task (e.g., classification), and of natural and adversarial domain losses. By maximizing the domain losses we aim to learn a feature representation which is invariant across the natural and adversarial domains, and is therefore more robust.
Algorithm 1: Domain Invariant Adversarial Learning
Input: Source data S = {(x_i, y_i)}_{i=1}^{n} and network architecture Gf, Gy, Gd
Parameters: Batch size m, perturbation size ε, PGD attack step size τ, adversarial trade-off λ, initial reversal ratio r, and step size α
Initialize: Y_0 and Y_1, source and target domain vectors filled with 0 and 1 respectively
Output: Robust network G = (Gf, Gy, Gd) parameterized by θ̂ = (θf, θy, θd) respectively
B ADDITIONAL RESULTS ON CIFAR-100 AND SVHN
Defense Model    Natural | White-box: PGD20  PGD100  PGD1000  CW∞ | Black-Box: PGD20  PGD100  PGD1000  CW∞ | AA
TRADES           90.35   | 57.10  54.13  54.08  52.19             | 86.89  86.73  86.57  86.70             | 49.5
DIALKL (Ours)    90.66   | 58.91  55.30  55.11  53.67             | 87.62  87.52  87.41  87.63             | 51.00
DIALCE (Ours)    92.88   | 55.26  50.82  50.54  49.66             | 89.12  89.01  88.74  89.10             | 46.52
C CIFAR-10 ADDITIONAL EXPERIMENTAL SETUP DETAILS
Additional defence setup. To be consistent with other methods, the natural images are padded with 4-pixel padding with 32-random crop and random horizontal flip. Furthermore, all methods are trained using SGD with momentum 0.9. For DIALKL, we balance the robust loss with λ = 6 and the domain loss with r = 4. For DIALCE, we balance the robust loss with λ = 1 and the domain loss with r = 2. For DIAL-AWP, we used the same learning rate schedule used in Wu et al. (2020), where the initial 0.1 learning rate decays by a factor of 10 after 100 and 150 iterations.
D BENCHMARKING THE STATE-OF-THE-ART ON MNIST
Defence setup. We use the same CNN architecture as used in Zhang et al. (2019b), which consists of four convolutional layers and three fully-connected layers. Alongside this architecture, we integrate a domain classification layer. To generate the adversarial domain dataset, we use a perturbation size of ε = 0.3. We apply 40 iterations of inner maximization with a perturbation step size of 0.01. Batch size is set to 128 and the model is trained for 100 epochs. Similar to the other methods, the initial learning rate was set to 0.01, and decays by a factor of 10 after 55, 75 and 90 iterations. All the models in the experiment are trained using SGD with momentum 0.9. For our method, we balance the robust loss with λ = 6 and the domain loss with r = 0.1.
White-box/Black-box robustness. We evaluate all defense models using PGD40, PGD100, PGD1000 and CW∞ (the ℓ∞ version of the Carlini & Wagner (2017b) attack optimized by PGD-100) with step size 0.01. We constrain all attacks by the same perturbation ε = 0.3. For our black-box setting, we use a naturally trained surrogate model with natural accuracy of 99.51%. As reported in Table 10, our method achieves improved robustness over the other methods under the different attack types, while preserving the same level of natural accuracy, and even surpassing the naturally trained model. We should note that in general, the improvement margin on MNIST is more moderate compared to CIFAR-10, since MNIST is an easier task than CIFAR-10 and the robustness range is already high to begin with. Additional results are available in Appendix E.
E ADDITIONAL RESULTS ON MNIST AND CIFAR-10
In Table 11 we present additional results using the PGD1000 threat model. We use a step size of 0.003 and constrain the attacks by the same perturbation ε = 0.031. Table 12 presents a comparison of our method combined with AWP to the other AWP variants presented in Wu et al. (2020). In addition, in Table 13 we add the F1-robust scores for different variants of AWP.
F EXTENDED RESULTS ON UNFORESEEN CORRUPTIONS
We present full accuracy results against unforeseen corruptions in Tables 14 and 15. We also visualize it in Figure 6.
Table 14: Accuracy (%) against unforeseen corruptions.
Defense Model  brightness  defocus blur  fog    glass blur  jpeg compression  motion blur  saturate  snow   speckle noise
TRADES         82.63       80.04         60.19  78.00       82.81             76.49        81.53     80.68  80.14
MART           80.76       78.62         56.78  76.60       81.26             74.58        80.74     78.22  79.42
AT             83.30       80.42         60.22  77.90       82.73             76.64        82.31     80.37  80.74
ATDA           72.67       69.36         45.52  64.88       73.22             63.47        72.07     68.76  72.27
DIAL (Ours)    87.14       84.84         66.08  81.82       87.07             81.20        86.45     84.18  84.94
Table 15: Accuracy (%) against unforeseen corruptions.
G TRANSFER LEARNING SETTINGS
The models used are the same models from previous experiments. We follow the common procedure of the “fixed-feature” setting, where only a linear layer on top of the pre-trained network is trained. We train a linear classifier on CIFAR-100 on top of the pre-trained network which was trained on CIFAR-10. We also train a linear classifier on CIFAR-10 on top of the pre-trained network which was trained on CIFAR-100. We train the linear classifier for 100 epochs with an initial learning rate of 0.1, which is decayed by a factor of 10 at epochs 50 and 75. We used the SGD optimizer with momentum 0.9.
H EXTENDED VISUALIZATIONS
In Figure 8, we provide additional visualizations of the different adversarial training methods presented above. We visualize the model outputs using t-SNE with two components on the natural test data and the corresponding adversarial test data generated using a PGD20 white-box attack with step size 0.003 and ε = 0.031 on CIFAR-10.
In Figure 7 we visualize statistical differences between natural and adversarial examples in the feature representation layer. Specifically, we show the differences in mean and std on thirty random feature values from the feature representation layer as we pass the natural test examples and their corresponding adversarial examples through the network. We present the results on the same network architecture (WRN-34-10), trained using three different training procedures: a naturally trained network, a network trained using standard adversarial training (AT) (Madry et al., 2017), and DIAL, on the CIFAR-10 dataset. When the statistical characteristics of each feature differ from each other, it implies that the feature layer is less domain invariant. That is, smaller differences in mean/std yield a better invariance to adversarial examples. One can observe that for DIAL, there are almost no differences between the mean/std of natural examples and their corresponding adversarial examples. Moreover, for the vast majority of the features, DIAL presents smaller differences compared to the naturally trained model and the model trained with standard adversarial training. Best viewed in colors. | 1. What is the focus of the paper regarding feature representation?
2. What is the main contribution of the proposed DIAL method?
3. What are the strengths of the paper, particularly in terms of its experiments?
4. Do you have any concerns about the novelty of the methodology?
5. How does the reviewer suggest improving the understanding of the reversal-ratio hyperparameter? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a domain invariant adversarial training (DIAL) method, which learns the feature representation that is both robust and domain invariant. Apart from the label classifier, the model is equipped with a domain classifier that constrains the model not to discriminate between natural examples and adversarial examples, thus achieving a more robust feature representation. Extensive experiments on image classification benchmark the robustness compared to other state-of-the-art methods.
Review
This paper proposes a simple and effective adversarial learning method DIAL, which brings the idea from domain adaptation for robust representation.
Strengths:
This paper is well-written and easy to follow.
It conducts various experiments to demonstrate the effectiveness of the proposed method ranging from robustness to white-box attacks, black-box attacks, unforeseen adversaries, unforeseen corruptions and transfer learning. The experimental results are solid and technically sound.
Weaknesses:
From my point of view, the novelty of the methodology is not enough, as the domain classifier and the gradient reversal layer are the same as those used in domain adaptation methods such as [1].
To better understand the reversal-ratio hyper-parameter r, can the authors provide the robustness under different values of r?
[1] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pp. 1180–1189. PMLR, 2015. |
ICLR | Title
AdaGCN: Adaboosting Graph Convolutional Networks into Deep Models
Abstract
The design of deep graph models still remains to be investigated, and the crucial part is how to explore and exploit the knowledge from different hops of neighbors in an efficient way. In this paper, we propose a novel RNN-like deep graph neural network architecture by incorporating AdaBoost into the computation of the network; the proposed graph convolutional network, called AdaGCN (Adaboosting Graph Convolutional Network), has the ability to efficiently extract knowledge from high-order neighbors of current nodes and then integrates knowledge from different hops of neighbors into the network in an AdaBoost way. Different from other graph neural networks that directly stack many graph convolution layers, AdaGCN shares the same base neural network architecture among all “layers” and is recursively optimized, which is similar to an RNN. Besides, we also theoretically establish the connection between AdaGCN and existing graph convolutional methods, presenting the benefits of our proposal. Finally, extensive experiments demonstrate the consistent state-of-the-art prediction performance on graphs across different label rates and the computational advantage of our approach AdaGCN 1.
1 INTRODUCTION
Recently, research related to learning on graph structural data has gained considerable attention in the machine learning community. Graph neural networks (Gori et al., 2005; Hamilton et al., 2017; Veličković et al., 2018), particularly graph convolutional networks (Kipf & Welling, 2017; Defferrard et al., 2016; Bruna et al., 2014), have demonstrated their remarkable ability on node classification (Kipf & Welling, 2017), link prediction (Zhu et al., 2016) and clustering tasks (Fortunato, 2010). Despite their enormous success, almost all of these models have shallow model architectures with only two or three layers. The shallow design of GCN appears counterintuitive as deep versions of these models, in principle, have access to more information, but perform worse. Oversmoothing (Li et al., 2018) has been proposed to explain why deep GCN fails, showing that by repeatedly applying Laplacian smoothing, GCN may mix the node features from different clusters and make them indistinguishable. This also indicates that by stacking too many graph convolutional layers, the embedding of each node in GCN is inclined to converge to a certain value (Li et al., 2018), making it harder for classification. These shallow model architectures, restricted by the oversmoothing issue, limit their ability to extract knowledge from high-order neighbors, i.e., features from remote hops of neighbors for current nodes. Therefore, it is crucial to design deep graph models such that high-order information can be aggregated in an effective way for better predictions.
∗Corresponding author. 1Code is available at https://github.com/datake/AdaGCN.
There are some works (Xu et al., 2018b; Liao et al., 2019; Klicpera et al., 2018; Li et al., 2019; Liu et al., 2020) that tried to address this issue partially, and a discussion of them can be found in Appendix A.1. By contrast, we argue that a key direction of constructing deep graph models lies in the efficient exploration and effective combination of information from different orders of neighbors. Due to the apparent sequential relationship between different orders of neighbors, it is a natural choice to incorporate a boosting algorithm into the design of deep graph models. As an important realization of boosting theory, AdaBoost (Freund et al., 1999) is extremely easy to implement and remains competitive in terms of both practical performance and computational cost (Hastie et al., 2009). Moreover, boosting theory has been used to analyze the success of ResNets in computer vision (Huang et al., 2018), and AdaGAN (Tolstikhin et al., 2017) has already successfully incorporated a boosting algorithm into the training of GANs (Goodfellow et al., 2014).
In this work, we focus on incorporating AdaBoost into the design of deep graph convolutional networks in a non-trivial way. Firstly, in pursuit of the introduction of the AdaBoost framework, we refine the type of graph convolutions and thus obtain a novel RNN-like GCN architecture called AdaGCN. Our approach can efficiently extract knowledge from different orders of neighbors and then combine this information in an AdaBoost manner with iterative updating of the node weights. Also, we compare our AdaGCN with existing methods from the perspective of both architectural differences and feature representation power to show the benefits of our method. Finally, we conduct extensive experiments to demonstrate the consistent state-of-the-art performance of our approach across different label rates and its computational advantage over other alternatives.
2 OUR APPROACH: ADAGCN
2.1 ESTABLISHMENT OF ADAGCN
Consider an undirected graph G = (V, E) with N nodes v_i ∈ V and edges (v_i, v_j) ∈ E. A ∈ R^{N×N} is the adjacency matrix with corresponding degree matrix D_ii = ∑_j A_ij. In the vanilla GCN model (Kipf & Welling, 2017) for semi-supervised node classification, the graph embedding of nodes with two convolutional layers is formulated as:
Z = Â ReLU(Â X W^(0)) W^(1)    (1)
where Z ∈ R^{N×K} is the final embedding matrix (output logits) of nodes before softmax and K is the number of classes. X ∈ R^{N×C} denotes the feature matrix, where C is the input dimension. Â = D̃^{−1/2} Ã D̃^{−1/2}, where Ã = A + I and D̃ is the degree matrix of Ã. In addition, W^(0) ∈ R^{C×H} is the input-to-hidden weight matrix for a hidden layer with H feature maps and W^(1) ∈ R^{H×K} is the hidden-to-output weight matrix.
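For illustration, Eq. (1) together with the symmetric normalization above can be written as a dense NumPy computation. This sketch is ours, not the reference GCN code, and real implementations use sparse matrices.

```python
import numpy as np

def normalized_adj(A):
    A_tilde = A + np.eye(A.shape[0])                              # add self-loops: Ã = A + I
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]    # Â = D̃^{-1/2} Ã D̃^{-1/2}

def gcn_two_layer(A, X, W0, W1):
    A_hat = normalized_adj(A)
    H = np.maximum(A_hat @ X @ W0, 0.0)                           # first layer with ReLU
    return A_hat @ H @ W1                                         # output logits Z, Eq. (1)
```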
Our key motivation for constructing deep graph models is to efficiently explore the information of high-order neighbors and then combine these messages from different orders of neighbors in an AdaBoost way. Nevertheless, if we naively extract information from high-order neighbors based on GCN, we are faced with stacking l layers' parameter matrices W^(i), i = 0, ..., l − 1, which is definitely costly in computation. Besides, Multi-Scale Deep Graph Convolutional Networks (Luan et al., 2019) also theoretically demonstrated that if we simply deepen GCN, the output can only contain the stationary information of the graph structure and loses all the local information in nodes due to being smoothed. Intuitively, the desirable representation of node features does not necessarily need too many nonlinear transformations f applied on them. This is simply due to the fact that the feature of each node is normally a one-dimensional sparse vector rather than a multi-dimensional data structure, e.g., images, that intuitively needs a deep convolutional network to extract high-level representations for vision tasks. This insight has been empirically demonstrated in many recent works (Wu et al., 2019; Klicpera et al., 2018; Xu et al., 2018a), showing that a two-layer fully-connected neural network is a better choice in the implementation. Similarly, our AdaGCN also follows this direction by choosing an appropriate f in each layer rather than directly deepening GCN layers.
Thus, we propose to remove ReLU to avoid the expensive joint optimization of multiple parameter matrices. Similarly, Simplified Graph Convolution (SGC) (Wu et al., 2019) also adopted this practice, arguing that the nonlinearity between GCN layers is not crucial and that the majority of the benefits arises from the local weighting of neighboring features. Then the simplified graph convolution is:
Z = Â^l X W^(0) W^(1) ··· W^(l−1) = Â^l X W̃,    (2)
where we collapse W^(0) W^(1) ··· W^(l−1) into W̃ and Â^l denotes  raised to the l-th power. In particular, one crucial impact of ReLU in GCN is to accelerate the convergence of the matrix multiplication, since ReLU is intuitively a contraction mapping. Thus, the removal of the ReLU operation could also alleviate the oversmoothing issue, i.e., slowing the convergence of node embeddings to indistinguishable ones (Li et al., 2018). Additionally, without ReLU this simplified graph convolution is also able to avoid the aforementioned joint optimization over multiple parameter matrices, resulting in computational benefits. Nevertheless, we find that this type of stacked linear transformation from graph convolution has insufficient power in representing the information of high-order neighbors, which is revealed in our experiment described in Appendix A.2. Therefore, we propose to utilize an appropriate nonlinear function fθ, e.g., a two-layer fully-connected neural network, to replace the linear transformation W̃ in Eq. 2 and enhance the representation ability of each base classifier in AdaGCN as follows:
Z^(l) = fθ(Â^l X),    (3)
where Z^(l) represents the final embedding matrix (output logits before softmax) after the l-th base classifier in AdaGCN. This formulation also implies that the l-th base classifier in AdaGCN extracts knowledge from the features of current nodes and their l-th hop of neighbors. Due to the fact that the function of the l-th base classifier in AdaGCN is similar to that of the l-th layer in other traditional GCN-based methods that directly stack many graph convolutional layers, we regard the whole l-th base classifier as the l-th layer of AdaGCN. As for the realization of multi-class AdaBoost, we apply the SAMME (Stagewise Additive Modeling using a Multi-class Exponential loss function) algorithm (Hastie et al., 2009), a natural and clean multi-class extension of the two-class AdaBoost that adaptively combines weak classifiers.
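A minimal sketch of Eq. (3) follows: the aggregated features Â^l X are obtained outside the classifier by one extra (sparse) multiplication per layer, and each base classifier is a small two-layer network as suggested above. The module and function names here are our own assumptions, not the released AdaGCN code.

```python
import torch
import torch.nn as nn

class BaseClassifier(nn.Module):
    """f_theta: a two-layer fully-connected network applied to pre-aggregated features."""
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, AlX):                 # AlX = Â^l X, precomputed outside the network
        return self.net(AlX)                # Z^(l) = f_theta(Â^l X), Eq. (3)

def next_hop_features(A_hat, AlX):
    """Compute Â^{l+1} X from Â^l X with a single (sparse) matrix multiplication."""
    return torch.sparse.mm(A_hat, AlX) if A_hat.is_sparse else A_hat @ AlX
```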
As illustrated in Figure 1, we apply the base classifier f^(l)_θ to extract knowledge from the current node features and the l-th hop of neighbors by minimizing the current weighted loss. Then we directly compute the weighted error rate err^(l) and the corresponding weight α^(l) of the current base classifier f^(l)_θ as follows:
err^(l) = ∑_{i=1}^{n} w_i · I(c_i ≠ f^(l)_θ(x_i)) / ∑_{i=1}^{n} w_i,
α^(l) = log((1 − err^(l)) / err^(l)) + log(K − 1),    (4)
where w_i denotes the weight of the i-th node and c_i represents the category of the i-th node. To attain a positive α^(l), we only need (1 − err^(l)) > 1/K, i.e., the accuracy of each weak classifier should be better than random guessing (Hastie et al., 2009). This can be met easily, guaranteeing that the weights are updated in the right direction. Then we adjust the nodes' weights by increasing the weights of incorrectly classified ones:
w_i ← w_i · exp(α^(l) · I(c_i ≠ f^(l)_θ(x_i))), i = 1, . . . , n    (5)
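Eqs. (4)-(5), together with the re-normalization mentioned below, amount to the standard SAMME bookkeeping on the labeled nodes; the NumPy sketch below is ours, not the released AdaGCN code.

```python
import numpy as np

def samme_update(w, y_true, y_pred, K):
    miss = (y_pred != y_true).astype(float)                  # indicator I(c_i != f(x_i))
    err = np.sum(w * miss) / np.sum(w)                       # weighted error rate, Eq. (4)
    alpha = np.log((1.0 - err) / err) + np.log(K - 1.0)      # classifier weight, Eq. (4)
    w = w * np.exp(alpha * miss)                             # emphasize misclassified nodes, Eq. (5)
    return w / np.sum(w), alpha                              # re-normalized node weights and alpha
```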
After re-normalizing the weights, we then compute Â^{l+1} X = Â · (Â^l X) to sequentially extract knowledge from the (l+1)-th hop of neighbors in the following base classifier f^(l+1)_θ. One crucial point of AdaGCN is that, different from traditional AdaBoost, we only define one fθ, e.g., a two-layer fully-connected neural network, which in practice is recursively optimized in each base classifier, similar to a recurrent neural network. This also indicates that the parameters from the last base classifier are leveraged as the initialization of the next base classifier, which coincides with our intuition that the (l+1)-th hop of neighbors is directly connected to the l-th hop of neighbors. The efficacy of this kind of layer-wise training has been similarly verified in (Belilovsky et al., 2018) recently. Further, we combine the predictions from different orders of neighbors in an AdaBoost way to obtain the final prediction C(A, X):
C(A, X) = argmax_k ∑_{l=0}^{L} α^(l) f^(l)_θ(Â^l X)    (6)
Finally, we obtain the concise form of AdaGCN in the following:
Â^l X = Â · (Â^{l−1} X)
Z^(l) = f^(l)_θ(Â^l X)
Z = AdaBoost(Z^(l))    (7)
Note that fθ is non-linear, rather than linear as in SGC (Wu et al., 2019), to guarantee the representation power. As shown in Figure 1, the architecture of AdaGCN is a variant of an RNN with synchronous sequence input and output. Although the same classifier architecture is adopted for each f^(l)_θ, their parameters are different, which differs from a vanilla RNN. We provide a detailed description of our algorithm in Section 3.
2.2 COMPARISON WITH EXISTING METHODS
Architectural Difference. As illustrated in Figures 1 and 2, there is an apparent difference among the architectures of GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), Jumping Knowledge (JK) (Xu et al., 2018b) and AdaGCN. Compared with these existing graph convolutional approaches, which sequentially convey the intermediate result Z^{(l)} to compute the final prediction, AdaGCN transmits the node weights w_i and the aggregated features Â^l X of different hops of neighbors. More importantly, in AdaGCN the embedding Z^{(l)} is independent of the flow of computation in the network, and the sparse adjacency matrix Â is not directly involved in the computation of the individual network, because we compute Â^{l+1} X in advance and then feed it, instead of Â, into the classifier f_θ^{(l+1)}, thus yielding a significant reduction in computation, which is discussed further in Section 3.
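This precomputation can be sketched as follows with SciPy sparse matrices; the random graph and sizes are assumptions for illustration. The key point is that the sparse matrix Â multiplies the features exactly once per layer, outside of any training loop.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(2)
n, C, L = 1000, 16, 4
A = sp.random(n, n, density=0.01, random_state=2, format="csr")
A = A + A.T                                        # symmetrize: undirected graph
A.data[:] = 1.0                                    # binary edge weights
A_tilde = A + sp.eye(n, format="csr")              # add self-loops
d = np.asarray(A_tilde.sum(axis=1)).ravel()
d_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt          # D^-1/2 (A + I) D^-1/2

X = rng.standard_normal((n, C))
propagated = [X]                                   # \hat{A}^0 X, \hat{A}^1 X, ...
for _ in range(L):
    propagated.append(A_hat @ propagated[-1])      # only L sparse products in total

# Each classifier f_theta^(l) is then trained only on the dense matrix
# propagated[l]; A_hat never appears in its forward or backward passes.
```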
Connection with PPNP and APPNP. We also establish a strong connection between AdaGCN and the previous state-of-the-art Personalized Propagation of Neural Predictions (PPNP) and Approximate PPNP (APPNP) (Klicpera et al., 2018) methods, which leverage personalized PageRank to reconstruct graph convolutions in order to use information from a large and adjustable neighborhood. The analysis is summarized in the following Proposition 1; the proof is given in Appendix A.3.
Proposition 1. Suppose that γ is the teleport factor. Let the matrix sequence {Z^{(l)}} be the outputs of each layer l in AdaGCN. Then PPNP is equivalent to the Exponential Moving Average (EMA) with exponentially decreasing factor γ applied to {Z^{(l)}} in a parameter-sharing version, and its approximate version APPNP can be viewed as the approximated form of this EMA with a limited number of terms.
Proposition 1 illustrates that AdaGCN can be viewed as an adaptive form of APPNP, formulated as:
Z = Σ_{l=0}^{L} α^{(l)} f_θ^{(l)}(Â^l X)   (8)
Specifically, the first discrepancy between AdaGCN and APPNP lies in the adaptive coefficients α^{(l)} of AdaGCN, which are determined by the error of the l-th base classifier f_θ^{(l)}, rather than the fixed, exponentially decreasing weights of APPNP. In addition, AdaGCN employs classifiers f_θ^{(l)} with different parameters to learn the embeddings of different orders of neighbors, whereas APPNP shares these parameters. We verify this benefit of our approach in the experiments of Section 4.2.
Comparison with MixHop. MixHop (Abu-El-Haija et al., 2019) applies a similar style of graph convolution by repeatedly mixing feature representations of neighbors at various distances. Proposition 2 proves that both AdaGCN and MixHop are able to represent feature differences among neighbors while previous GCN-based methods cannot; the proof is given in Appendix A.4. We recap the definition of general layer-wise neighborhood mixing (Abu-El-Haija et al., 2019) as follows: Definition 1. General layer-wise Neighborhood Mixing: A graph convolution network has the ability to represent layer-wise neighborhood mixing if for any b_0, b_1, ..., b_L, there exists an injective mapping f with a setting of its parameters such that the output of this graph convolution network can express the following formula:
f( Σ_{l=0}^{L} b_l σ(Â^l X) )   (9)
Proposition 2. AdaGCNs defined by our proposed approach (Eq. 7) are capable of representing general layer-wise neighborhood mixing, i.e., they meet Definition 1.
Despite the similarity, AdaGCN differs from MixHop in many aspects. First, MixHop concatenates all outputs from each order of neighbors, while we combine these predictions in an AdaBoost way, which has a theoretical generalization guarantee based on boosting theory (Hastie et al., 2009). Oono & Suzuki (2020) have recently derived optimization and generalization guarantees for multi-scale GNNs, serving as a theoretical backbone of AdaGCN. Meanwhile, MixHop allows full linear mixing of different orders of neighboring features, while AdaGCN utilizes a different nonlinear transformation f_θ^{(l)} in each layer, enjoying stronger expressive power.
3 ALGORITHM
In practice, we employ SAMME.R (Hastie et al., 2009), the soft version of SAMME, in AdaGCN. The SAMME.R (R for Real) algorithm (Hastie et al., 2009) leverages real-valued confidence-rated predictions, i.e., weighted probability estimates, rather than the predicted hard labels of SAMME, in the prediction combination, and has demonstrated better generalization and faster convergence than SAMME. We elaborate the final version of AdaGCN in Algorithm 1. We provide the analysis on the choice of model depth L in Appendix A.7, and we elaborate the computational advantage of AdaGCN in the following.
Analysis of Computational Advantage. Because its graph convolution is similar to that of MixHop (Abu-El-Haija et al., 2019), AdaGCN requires no additional memory or computational complexity compared with previous GCN models. Meanwhile, our approach enjoys a huge computational advantage over GCN-based models, e.g., PPNP and APPNP, because it excludes the additional computation involving sparse tensors, such as the sparse tensor multiplication between Â and other dense tensors, from the forward and backward propagation of the neural network. Specifically, there are only L sparse tensor operations in total for an AdaGCN model with L layers, i.e., Â^l X = Â · (Â^{l−1} X) for each layer l. This operation in each layer yields a dense tensor
Algorithm 1 AdaGCN based on the SAMME.R Algorithm
Input: feature matrix X, normalized adjacency matrix Â, a two-layer fully connected network f_θ, number of layers L, and number of classes K.
Output: final combined prediction C(A, X).
1: Initialize the node weights w_i = 1/n, i = 1, 2, ..., n on the training set, the neighbor feature matrix X̂^{(0)} = X, and the classifier f_θ^{(-1)}.
2: for l = 0 to L do
3:   Fit the graph convolutional classifier f_θ^{(l)} on the neighbor feature matrix X̂^{(l)}, initialized from f_θ^{(l−1)}, by minimizing the current weighted loss.
4:   Obtain the weighted probability estimates p^{(l)}(X̂^{(l)}) for f_θ^{(l)}: p_k^{(l)}(X̂^{(l)}) = Softmax(f_θ^{(l)}(c = k | X̂^{(l)})), k = 1, ..., K.
5:   Compute the individual prediction h_k^{(l)} for the current graph convolutional classifier f_θ^{(l)}: h_k^{(l)}(X̂^{(l)}) ← (K − 1) ( log p_k^{(l)}(X̂^{(l)}) − (1/K) Σ_{k'} log p_{k'}^{(l)}(X̂^{(l)}) ), k = 1, ..., K.
6:   Adjust the node weights w_i for each node x_i with label y_i on the training set: w_i ← w_i · exp( −((K − 1)/K) y_i^T log p^{(l)}(x_i) ), i = 1, ..., n.
7:   Re-normalize all weights w_i.
8:   Update the (l+1)-hop neighbor feature matrix X̂^{(l+1)}: X̂^{(l+1)} = Â X̂^{(l)}.
9: end for
10: Combine all predictions h_k^{(l)}(X̂^{(l)}) for l = 0, ..., L: C(A, X) = argmax_k Σ_{l=0}^{L} h_k^{(l)}(X̂^{(l)}).
11: return the final combined prediction C(A, X).
B^l = Â^l X for the l-th layer, which is then fed into a two-layer fully-connected network, i.e., f_θ^{(l)}(B^l) = ReLU(B^l W^{(0)}) W^{(1)}. Because the dense tensor B^l has been computed in advance, no other computation involving sparse tensors occurs during the many forward and backward passes used to train the neural network. By contrast, the repeated sparse-tensor computation in GCN-based models, e.g., GCN's Â ReLU(Â X W^{(0)}) W^{(1)}, is highly expensive. AdaGCN avoids these additional sparse tensor operations in the neural network and thus attains a large gain in computational efficiency. We demonstrate this point in Section 4.3.
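A compact, runnable sketch of Algorithm 1 is shown below. As before, LogisticRegression is only a hedged stand-in for the two-layer network f_θ (the parameter reuse across layers is omitted), and the synthetic graph, features, and labels are placeholders. Steps 3-8 of the algorithm map onto the commented lines.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, C, K, L = 60, 10, 4, 3
A = np.maximum((rng.random((n, n)) < 0.1).astype(float), np.eye(n))
A_tilde = np.maximum(A, A.T)
d = A_tilde.sum(1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))
X = rng.standard_normal((n, C))
y = rng.permutation(np.arange(n) % K)              # placeholder labels, all K classes
Y = np.eye(K)[y]                                   # one-hot labels y_i

w = np.full(n, 1.0 / n)                            # step 1
X_hat = X.copy()                                   # \hat{X}^(0) = X
H = np.zeros((n, K))
for l in range(L + 1):                             # step 2
    clf = LogisticRegression(max_iter=500).fit(X_hat, y, sample_weight=w)  # step 3
    p = np.clip(clf.predict_proba(X_hat), 1e-12, 1.0)                      # step 4
    log_p = np.log(p)
    h = (K - 1) * (log_p - log_p.mean(axis=1, keepdims=True))              # step 5
    w *= np.exp(-(K - 1) / K * np.sum(Y * log_p, axis=1))                  # step 6
    w /= w.sum()                                                           # step 7
    H += h
    X_hat = A_hat @ X_hat                                                  # step 8
pred = H.argmax(1)                                 # step 10: C(A, X)
```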
4 EXPERIMENTS
Experimental Setup. We select five commonly used graphs: CiteSeer, Cora-ML (Bojchevski & Günnemann, 2018; McCallum et al., 2000), PubMed (Sen et al., 2008), MS-Academic (Shchur et al., 2018) and Reddit. Dataset statistics are summarized in Table 1. Recent graph neural networks suffer from overfitting to a single split of training, validation and test data (Klicpera et al., 2018). To address this problem, inspired by (Klicpera et al., 2018), we test all approaches on multiple random splits and initializations to conduct a rigorous study. Detailed dataset splits are provided in Appendix A.6.
Basic Setting of Baselines and AdaGCN. We compare AdaGCN with GCN (Kipf & Welling, 2017) and Simple Graph Convolution (SGC) (Wu et al., 2019) in Figure 3. In Table 2, we employ the same baselines as (Klicpera et al., 2018): V.GCN (vanilla GCN) (Kipf & Welling, 2017) and GCN with our early stopping, N-GCN (network of GCN) (Abu-El-Haija et al., 2018a), GAT (Graph Attention Networks) (Veličković et al., 2018), BT.FP (bootstrapped feature propagation) (Buchnik & Cohen, 2018) and JK (jumping knowledge networks with concatenation) (Xu et al., 2018b). In the computation part, we additionally compare AdaGCN with FastGCN (Chen et al., 2018) and GraphSAGE (Hamilton et al., 2017). We take the baseline results from (Klicpera et al., 2018), and the implementation of AdaGCN is adapted from APPNP. For AdaGCN, after a line search on the hyper-parameters, we set h = 5000 hidden units for the first four datasets, except MS-Academic with h = 3000, and use 15, 12, 20 and 5 layers respectively due to the different graph structures. In addition, we set the dropout rate to 0 for the Citeseer and Cora-ML datasets and 0.2 for the other datasets, with 5 × 10^{-3} L2 regularization on the first linear layer. We set weight decay to 1 × 10^{-3} for Citeseer and 1 × 10^{-4} for the others. More detailed model parameters and an analysis of our early stopping mechanism can be found in Appendix A.6.
4.1 DESIGN OF DEEP GRAPH MODELS TO CIRCUMVENT OVERSMOOTHING EFFECT
It is well-known that GCN suffers from oversmoothing (Li et al., 2018) as more graph convolutions are stacked. However, combining knowledge from each layer to design deep graph models is a reasonable way to circumvent the oversmoothing issue. In our experiment, we explore the prediction performance of GCN, GCN with residual connections (Kipf & Welling, 2017), SGC and our AdaGCN with a growing number of layers.
From Figure 3, it can easily be observed that oversmoothing leads to a rapid decrease in accuracy for GCN (blue line) as the number of layers increases. In contrast, SGC (green line) smooths much more slowly than GCN due to the lack of ReLU, as analyzed in Section 2.1. Similarly, GCN with residual connections (yellow line) partially mitigates the oversmoothing effect of the original GCN but fails to take advantage of information from different orders of neighbors to improve prediction performance consistently. Remarkably, AdaGCN (red line) consistently enhances performance as the number of layers increases across the three datasets. This implies that AdaGCN can efficiently incorporate knowledge from different orders of neighbors and circumvent the oversmoothing of the original GCN when constructing deep graph models. In addition, the fluctuation of performance for AdaGCN is much lower than that of GCN, especially when the number of layers is large.
4.2 PREDICTION PERFORMANCE
We conduct a rigorous study of AdaGCN on four datasets under multiple dataset splits. The results in Table 2 suggest the state-of-the-art performance of our approach, and the improvement over APPNP validates the benefit of the adaptive form in AdaGCN. More rigorously, p-values under a paired t-test demonstrate the significance of the improvement of our method.
In realistic settings, graphs usually have different numbers of labeled nodes, so it is necessary to investigate the robustness of methods under different numbers of labeled nodes. Here we use label rates to measure the different numbers of labeled nodes and sample the corresponding number of labeled nodes per class on each graph. Table 3 presents the consistent state-of-the-art performance of AdaGCN under different label rates. An interesting observation from Table 3 is that AdaGCN yields larger improvements at lower label rates compared with APPNP, showing more efficiency on graphs with few labeled nodes. Inspired by the Layer Effect on graphs (Sun et al., 2019), we argue that the increase of layers in AdaGCN can yield more benefits for the efficient propagation of label signals, especially on graphs with limited labeled nodes.
More rigorously, we additionally conduct the comparison on a larger dataset, Reddit. We choose 4 layers because AdaGCN with a larger number of layers tends to overfit on this relatively simple dataset (with a high label rate of 65.9%). Table 4 suggests that AdaGCN still outperforms other typical baselines, including V.GCN, PPNP and APPNP. More experimental details can be found in Appendix A.6.
4.3 COMPUTATIONAL EFFICIENCY
Without the additional computational cost involving sparse tensors in the propagation of the neural network, AdaGCN offers large computational efficiency. The left part of Figure 4 shows that AdaGCN has the fastest per-epoch training time among all methods, except that it is comparable with FastGCN on PubMed. In addition, there is some inconsistency in the computation of FastGCN, with the fastest speed on PubMed but slower than GCN on the Cora-ML and MS-Academic datasets. Furthermore, with multiple power iterations involving sparse tensors, APPNP unfortunately has a relatively expensive computational cost. It should be noted that this computational advantage of AdaGCN is more significant on large datasets, e.g., Reddit. Table 4 demonstrates that AdaGCN has the potential to be much faster on larger datasets.
Besides, we explore the computational cost of ReLU and the sparse adjacency tensor with respect to the number of layers in the right part of Figure 4. We focus on comparing AdaGCN with SGC and GCN, as other GCN-based methods, such as GraphSAGE and APPNP, behave similarly to GCN. In particular, we can easily observe that both SGC (green line) and GCN (red line) show a linearly increasing tendency, and GCN yields a larger slope, which arises from ReLU and more parameters. For SGC, directly stacking more layers is undesirable in terms of computation; thus a limited number of SGC layers is preferable, together with more advanced optimization techniques (Wu et al., 2019). It also shows that the computational cost involving sparse matrices in neural networks plays a dominant role in the total cost, especially when the number of layers is large enough. In contrast, our AdaGCN (pink line) displays an almost constant trend as the number of layers increases, simply because it excludes the extra computation involving sparse tensors Â, such as · · · Â ReLU(Â X W^{(0)}) W^{(1)} · · ·, from the process of training neural networks. AdaGCN maintains the updating of the parameters of f_θ^{(l)} with a fixed architecture in each layer during the layer-wise optimization, therefore displaying a nearly constant computational cost within each epoch, although more epochs are normally needed for the entire layer-wise training. We leave the analysis of the exact time and memory complexity of AdaGCN as future work, but boosting-based algorithms, including AdaGCN, are memory-efficient (Oono & Suzuki, 2020).
5 DISCUSSIONS AND CONCLUSION
One potential concern is that AdaBoost (Hastie et al., 2009; Freund et al., 1999) is established on an i.i.d. hypothesis, while graphs have an inherently data-dependent property. Fortunately, the statistical convergence and consistency of boosting (Lugosi & Vayatis, 2001; Mannor et al., 2003) can still be preserved when the samples are weakly dependent (Lozano et al., 2013); more discussion is given in Appendix A.5. In this paper, we propose a novel RNN-like deep graph neural network architecture called AdaGCN. With this architecture design, AdaGCN can effectively explore and exploit knowledge from different orders of neighbors in an AdaBoost way. Our work paves a way towards better combining different-order neighbors to design deep graph models, rather than only stacking a specific type of graph convolution.
ACKNOWLEDGMENTS
Z. Lin is supported by NSF China (grant no.s 61625301 and 61731018), Major Scientific Research Project of Zhejiang Lab (grant no.s 2019KB0AC01 and 2019KB0AB02), Beijing Academy of Artificial Intelligence, and Qualcomm.
A APPENDIX
A.1 RELATED WORKS ON DEEP GRAPH MODELS
A straightforward solution (Kipf & Welling, 2017; Xu et al., 2018b), inspired by ResNets (He et al., 2016), was to add residual connections, but this practice is unsatisfactory in both prediction performance and computational efficiency for building deep graph models, as shown in our experiments in Sections 4.1 and 4.3. More recently, JK (Jumping Knowledge Networks (Xu et al., 2018b)) introduced jumping connections into the final aggregation mechanism in order to extract knowledge from different layers of graph convolutions. However, this straightforward change of the GCN architecture exhibited inconsistent empirical performance for different aggregation operators, which does not demonstrate a successful construction of deep layers. In addition, the graph-powering-based method (Jin et al., 2019) implicitly leveraged more spatial information by extending classical spectral graph theory to robust graph theory, but it concentrated on defending against adversarial attacks rather than model depth. LanczosNet (Liao et al., 2019) utilized the Lanczos algorithm to construct low-rank approximations of the graph Laplacian and can thereby exploit multi-scale information. Moreover, APPNP (Approximate Personalized Propagation of Neural Predictions (Klicpera et al., 2018)) leveraged the relationship between GCN and personalized PageRank to derive an improved global propagation scheme. Beyond these, DeepGCNs (Li et al., 2019) directly adapted residual connections, dense connections and dilated convolutions to the GCN architecture, but it mainly focused on the task of point cloud semantic segmentation and has not demonstrated effectiveness on typical graph tasks. Similar to our work, the Deep Adaptive Graph Neural Network (DAGNN) (Liu et al., 2020) also focused on incorporating information from large receptive fields through the entanglement of representation transformation and propagation, while our work efficiently ensembles knowledge from large receptive fields in an AdaBoost manner. Other related works based on global attention models (Puny et al., 2020) and sample-based methods (Zeng et al., 2019) are also helpful for constructing deep graph models.
A.2 INSUFFICIENT REPRESENTATION POWER OF ADASGC
As illustrated in Figure 5, as the number of layers increases, AdaSGC, with only linear transformations, has insufficient representation power both for extracting knowledge from high-order neighbors and for combining information from different orders of neighbors, while AdaGCN exhibits a consistent improvement in performance as the number of layers increases.
A.3 PROOF OF PROPOSITION 1
We first state Proposition 1 in full, and then provide the proof.
Suppose that γ is the teleport factor. Consider the output Z_PPNP = γ (I − (1 − γ)Â)^{−1} f_θ(X) of PPNP and the output Z_APPNP of its approximated version APPNP. Let the matrix sequence {Z^{(l)}} be the outputs of each layer l in AdaGCN. Then PPNP is equivalent to the Exponential Moving Average (EMA) with exponentially decreasing factor γ, a first-order infinite impulse response filter, applied to {Z^{(l)}} in a parameter-sharing version, i.e., f_θ^{(l)} ≡ f_θ. In addition, APPNP, which we reformulate in Eq. 10, can be viewed as the approximated form of this EMA with a limited number of terms:

Z_APPNP = ( γ Σ_{l=0}^{L−1} (1 − γ)^l Â^l + (1 − γ)^L Â^L ) f_θ(X)   (10)
Proof. According to the Neumann theorem, Z_PPNP can be expanded as a Neumann series:

Z_PPNP = γ (I − (1 − γ)Â)^{−1} f_θ(X) = γ Σ_{l=0}^{∞} (1 − γ)^l Â^l f_θ(X),

where the feature embedding matrices {Z^{(l)}} for each order of neighbors share the same parameters f_θ. If we relax this sharing to the layer-adaptive form and move Â^l inside f_θ, the output can be approximately formulated as

Z_PPNP ≈ γ Σ_{l=0}^{∞} (1 − γ)^l f_θ^{(l)}(Â^l X).

This relaxed version of PPNP is the Exponential Moving Average form of the matrix sequence {Z^{(l)}} with exponentially decreasing factor γ. Moreover, if we approximate the EMA by truncating it after the (L−1)-th term, the total weight omitted by stopping there is (1 − γ)^L. The approximated EMA is then exactly the APPNP form:

Z_APPNP = ( γ Σ_{l=0}^{L−1} (1 − γ)^l Â^l + (1 − γ)^L Â^L ) f_θ(X).
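This relationship can be checked numerically on a small dense graph: the truncated propagation of Eq. (10) approaches the closed-form PPNP solution as L grows. The random graph and feature matrix below are placeholders, and F stands in for f_θ(X).

```python
import numpy as np

rng = np.random.default_rng(4)
n, gamma = 30, 0.1
A = np.maximum((rng.random((n, n)) < 0.2).astype(float), np.eye(n))
A_tilde = np.maximum(A, A.T)
d = A_tilde.sum(1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))          # spectrum inside [-1, 1]

F = rng.standard_normal((n, 3))                    # stands in for f_theta(X)
Z_ppnp = gamma * np.linalg.solve(np.eye(n) - (1 - gamma) * A_hat, F)

for L in (5, 20, 50):                              # Eq. (10) with L power steps
    P = np.eye(n)
    Z_appnp = np.zeros_like(F)
    for l in range(L):
        Z_appnp += gamma * (1 - gamma) ** l * (P @ F)
        P = P @ A_hat
    Z_appnp += (1 - gamma) ** L * (P @ F)
    print(L, np.abs(Z_ppnp - Z_appnp).max())       # gap shrinks roughly like (1-gamma)^L
```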
A.4 PROOF OF PROPOSITION 2
Proof. We consider a two-layer fully-connected neural network as f in Eq. 8; the output of AdaGCN can then be formulated as

Z = Σ_{l=0}^{L} α^{(l)} σ(Â^l X W^{(0)}) W^{(1)}.

In particular, we set W^{(0)} = (b_l / (sign(b_l) α^{(l)})) I and W^{(1)} = sign(b_l) I, where sign(b_l) denotes the sign of b_l. Since σ (ReLU) is positively homogeneous and b_l / (sign(b_l) α^{(l)}) = |b_l| / α^{(l)} ≥ 0 for α^{(l)} > 0, the scalar can be moved outside σ, and the output of AdaGCN becomes

Z = Σ_{l=0}^{L} α^{(l)} σ( Â^l X · (b_l / (sign(b_l) α^{(l)})) I ) · sign(b_l) I
  = Σ_{l=0}^{L} α^{(l)} σ(Â^l X) · (b_l / (sign(b_l) α^{(l)})) · sign(b_l)
  = Σ_{l=0}^{L} b_l σ(Â^l X).

The proof that GCN-based methods are not capable of representing general layer-wise neighborhood mixing is given in MixHop (Abu-El-Haija et al., 2019). This proves Proposition 2.
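The construction above can be sanity-checked numerically with σ = ReLU; the random graph, coefficients b_l, and positive weights α^{(l)} below are placeholders, and W^{(0)}, W^{(1)} are the scaled identity matrices from the proof.

```python
import numpy as np

rng = np.random.default_rng(5)
n, C, L = 15, 6, 3

def relu(M):
    return np.maximum(M, 0.0)

A = np.maximum((rng.random((n, n)) < 0.3).astype(float), np.eye(n))
A_tilde = np.maximum(A, A.T)
d = A_tilde.sum(1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))
X = rng.standard_normal((n, C))

b = rng.standard_normal(L + 1)              # arbitrary mixing coefficients b_l
alpha = rng.uniform(0.5, 2.0, L + 1)        # positive boosting weights alpha^(l)

lhs = np.zeros((n, C))                      # sum_l alpha^(l) relu(A^l X W0) W1
rhs = np.zeros((n, C))                      # sum_l b_l relu(A^l X)
AX = X.copy()
for l in range(L + 1):
    W0 = (b[l] / (np.sign(b[l]) * alpha[l])) * np.eye(C)
    W1 = np.sign(b[l]) * np.eye(C)
    lhs += alpha[l] * relu(AX @ W0) @ W1
    rhs += b[l] * relu(AX)
    AX = A_hat @ AX

print(np.allclose(lhs, rhs))                # True: both mixings coincide
```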
A.5 EXPLANATION ABOUT CONSISTENCY OF BOOSTING ON DEPENDENT DATA
Definition 2. (β-mixing sequences.) Let σ_i^j = σ(W_i, W_{i+1}, ..., W_j) be the σ-field generated by a strictly stationary sequence of random variables W = (W_i, W_{i+1}, ..., W_j). The β-mixing coefficient is defined by

β_W(n) = sup_k E sup{ |P(A | σ_1^k) − P(A)| : A ∈ σ_{k+n}^{∞} }.
Then a sequence W is called β-mixing if lim_{n→∞} β_W(n) = 0. Further, it is algebraically β-mixing if there is a positive constant r_β such that β_W(n) = O(n^{−r_β}).
Definition 3. (Consistency) A classification rule is consistent for a certain distribution P if E(L(h_n)) = P{h_n(X) ≠ Y} → a as n → ∞, where a is a constant. It is strongly Bayes-risk consistent if lim_{n→∞} L(h_n) = a almost surely.
Under these definitions, the convergence and consistency of regularized boosting methods on stationary β-mixing sequences can be proved under mild assumptions. More details can be found in (Lozano et al., 2013).
A.6 EXPERIMENTAL DETAILS
Early Stopping on AdaGCN. We apply the same early stopping mechanism across all methods as (Klicpera et al., 2018) for a fair comparison. Furthermore, boosting theory naturally accommodates early stopping: it has been shown that for several boosting algorithms, including AdaBoost, regularization via early stopping can provide guarantees of consistency (Zhang et al., 2005; Jiang et al., 2004; Bühlmann & Yu, 2003).
Dataset Splitting. We choose a training set with a fixed number of nodes per class, an early stopping set of 500 nodes, and a test set of the remaining nodes. Each experiment is run with 5 random initializations on each data split, leading to a total of 100 runs per experiment. In the standard setting, we randomly select 20 nodes per class. For the two different label rates on each graph, we select 6 and 11 nodes per class on Citeseer, 8 and 16 nodes per class on Cora-ML, 7 and 14 nodes per class on PubMed, and 8 and 15 nodes per class on the MS-Academic dataset.
Model parameters. For all GCN-based approaches, we use the same hyper-parameters as in the original paper: learning rate 0.01, dropout rate 0.5, L2 regularization weight 5 × 10^{-4}, and 16 hidden units. For FastGCN, we adopt the officially released code to conduct our experiments. PPNP and APPNP are run with their best settings: K = 10 power iteration steps for APPNP, teleport probability γ = 0.1 on Cora-ML, Citeseer and PubMed, and γ = 0.2 on MS-Academic. In addition, we use two layers with h = 64 hidden units, apply L2 regularization with λ = 5 × 10^{-3} on the weights of the first layer, and use dropout with rate d = 0.5 on both layers and on the adjacency matrix. The early stopping criterion uses a patience of p = 100 and an (unreachably high) maximum of n = 10000 epochs. The implementation of AdaGCN is adapted from PPNP and APPNP, with patience p = 300 and n = 500 in the early stopping of AdaGCN. Moreover, SGC is re-implemented in a straightforward way without advanced optimizations, for better illustration and comparison. Other baselines use the same parameters as described in PPNP and APPNP.
Settings on the Reddit dataset. After repeatedly tuning the parameters of these typical methods on Reddit, we finally choose a weight decay rate of 10^{-4}, a hidden layer size of 100 and 20000 epochs for AdaGCN. For APPNP, we use a weight decay rate of 10^{-5}, a dropout rate of 0 and 500 epochs. V.GCN applies the same parameters as in (Kipf & Welling, 2017), and we choose 500 epochs. No approach deploys early stopping, due to the expensive computational cost on the large Reddit dataset, which also makes the comparison fair.
A.7 CHOICE OF THE NUMBER OF LAYERS
Different from the “forcible” behavior of CNNs, which directly stack many convolution layers, in our AdaGCN there is theoretical guidance from boosting theory on the choice of model depth L, i.e., the number of base classifiers or layers. Specifically, according to boosting theory, increasing L exponentially decreases the empirical loss; however, from the perspective of VC-dimension, an overly large L can make AdaGCN overfit. It should be noted that deeper graph convolution layers in AdaGCN are not always better, and the best depth depends heavily on the complexity of the data. In practice, L can be determined via cross-validation. Specifically, we start with a VC-dimension-based analysis to illustrate that a too large L can yield overfitting of AdaGCN. For L layers of AdaGCN, the hypothesis set is

F_L = { argmax_k Σ_{l=1}^{L} α^{(l)} f_θ^{(l)} : α^{(l)} ∈ ℝ, l ∈ [1, L] }   (11)

Then the VC-dimension of F_L can be bounded as follows in terms of the VC-dimension d of the family of base hypotheses:

VCdim(F_L) ≤ 2(d + 1)(L + 1) log_2((L + 1) e),   (12)

where e is a constant and the upper bound grows as L increases. Combined with VC-dimension generalization bounds, these results imply that larger values of L can lead to overfitting of AdaBoost. The same happens in AdaGCN, which suggests that there is no need to stack too many layers in AdaGCN in order to avoid overfitting. In practice, L is typically determined via cross-validation.
##########################################################################
Summary: This paper incorporates AdaBoost into a deep graph neural network architecture, which can efficiently extract knowledge from high-order neighbors and then integrate knowledge from different hops of neighbors into the network in an AdaBoost way. This addresses the oversmoothing problem. Extensive experiments show the effectiveness of the proposed method.
##########################################################################
Pros:
The paper is clear and well organized.
The introduction of AdaBoost into the deep GNN is novel and interesting.
The comparison with several existing methods is well analyzed in terms of both model architectures and computational advantages.
Extensive experiments are conducted to demonstrate the consistent state-of-the-art performance of the proposed method.
##########################################################################
Cons:
Although the same classifier architecture is adopted for f_θ^{(l)}, their parameters are different, which is different from an RNN. It is better to avoid this confusion.
It would be better to include some discussion of the global attention methods (e.g., [Puny et al., 2020] ) and sampling-based methods (e.g., [Zeng et al., 2020]).
References: Puny et al. From Graph Low-Rank Global Attention to 2-FWL Approximation. ICML Workshop Graph Representation Learning and Beyond, 2020.
Zeng et al. Graph sampling-based inductive learning method. ICLR '20, 2020.
AdaGCN: Adaboosting Graph Convolutional Networks into Deep Models (ICLR)
Abstract
The design of deep graph models remains to be investigated, and a crucial part is how to explore and exploit knowledge from different hops of neighbors in an efficient way. In this paper, we propose a novel RNN-like deep graph neural network architecture by incorporating AdaBoost into the computation of the network; the proposed graph convolutional network, called AdaGCN (Adaboosting Graph Convolutional Network), can efficiently extract knowledge from high-order neighbors of the current nodes and then integrate knowledge from different hops of neighbors into the network in an AdaBoost way. Different from other graph neural networks that directly stack many graph convolution layers, AdaGCN shares the same base neural network architecture among all “layers” and is recursively optimized, similar to an RNN. Besides, we also theoretically establish the connection between AdaGCN and existing graph convolutional methods, presenting the benefits of our proposal. Finally, extensive experiments demonstrate the consistent state-of-the-art prediction performance on graphs across different label rates and the computational advantage of our approach AdaGCN.
1 INTRODUCTION
Recently, research related to learning on graph-structured data has gained considerable attention in the machine learning community. Graph neural networks (Gori et al., 2005; Hamilton et al., 2017; Veličković et al., 2018), particularly graph convolutional networks (Kipf & Welling, 2017; Defferrard et al., 2016; Bruna et al., 2014), have demonstrated remarkable ability on node classification (Kipf & Welling, 2017), link prediction (Zhu et al., 2016) and clustering tasks (Fortunato, 2010). Despite their enormous success, almost all of these models have shallow architectures with only two or three layers. The shallow design of GCN appears counterintuitive, as deep versions of these models, in principle, have access to more information but perform worse. Oversmoothing (Li et al., 2018) has been proposed to explain why deep GCN fails, showing that by repeatedly applying Laplacian smoothing, GCN may mix the node features from different clusters and make them indistinguishable. This also indicates that by stacking too many graph convolutional layers, the embedding of each node in GCN tends to converge to a certain value (Li et al., 2018), making classification harder. These shallow model architectures, restricted by the oversmoothing issue, limit the ability to extract knowledge from high-order neighbors, i.e., features from remote hops of neighbors of the current nodes. Therefore, it is crucial to design deep graph models such that high-order information can be aggregated in an effective way for better predictions. (Code is available at https://github.com/datake/AdaGCN.)
Some works (Xu et al., 2018b; Liao et al., 2019; Klicpera et al., 2018; Li et al., 2019; Liu et al., 2020) have tried to address this issue partially; the discussion can be found in Appendix A.1. By contrast, we argue that a key direction for constructing deep graph models lies in the efficient exploration and effective combination of information from different orders of neighbors. Due to the apparent sequential relationship between different orders of neighbors, it is a natural choice to incorporate a boosting algorithm into the design of deep graph models. As an important realization of boosting theory, AdaBoost (Freund et al., 1999) is extremely easy to implement and remains competitive in terms of both practical performance and computational cost (Hastie et al., 2009). Moreover, boosting theory has been used to analyze the success of ResNets in computer vision (Huang et al., 2018), and AdaGAN (Tolstikhin et al., 2017) has already successfully incorporated a boosting algorithm into the training of GANs (Goodfellow et al., 2014).
In this work, we focus on incorporating AdaBoost into the design of deep graph convolutional networks in a non-trivial way. First, in pursuit of introducing the AdaBoost framework, we refine the type of graph convolution and thus obtain a novel RNN-like GCN architecture called AdaGCN. Our approach can efficiently extract knowledge from different orders of neighbors and then combine this information in an AdaBoost manner with iterative updating of the node weights. We also compare AdaGCN with existing methods from the perspective of both architectural differences and feature representation power to show the benefits of our method. Finally, we conduct extensive experiments to demonstrate the consistent state-of-the-art performance of our approach across different label rates and its computational advantage over alternatives.
2 OUR APPROACH: ADAGCN
2.1 ESTABLISHMENT OF ADAGCN
Consider an undirected graph G = (V, E) with N nodes v_i ∈ V and edges (v_i, v_j) ∈ E. A ∈ ℝ^{N×N} is the adjacency matrix with corresponding degree matrix D_{ii} = Σ_j A_{ij}. In the vanilla GCN model (Kipf & Welling, 2017) for semi-supervised node classification, the graph embedding of nodes with two convolutional layers is formulated as:

Z = Â ReLU(Â X W^{(0)}) W^{(1)}   (1)

where Z ∈ ℝ^{N×K} is the final embedding matrix (output logits) of the nodes before softmax and K is the number of classes. X ∈ ℝ^{N×C} denotes the feature matrix, where C is the input dimension. Â = D̃^{-1/2} Ã D̃^{-1/2}, where Ã = A + I and D̃ is the degree matrix of Ã. In addition, W^{(0)} ∈ ℝ^{C×H} is the input-to-hidden weight matrix for a hidden layer with H feature maps, and W^{(1)} ∈ ℝ^{H×K} is the hidden-to-output weight matrix.
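For reference, Eq. (1) can be written out directly; the snippet below is a dense, toy-sized sketch with randomly initialized weights, where all sizes and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
N, C, H, K = 50, 12, 16, 4
A = (rng.random((N, N)) < 0.1).astype(float)
A = np.maximum(A, A.T)                        # undirected adjacency A
A_tilde = A + np.eye(N)                       # \tilde{A} = A + I
d = A_tilde.sum(1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))     # \hat{A} = D^-1/2 \tilde{A} D^-1/2

X = rng.standard_normal((N, C))               # node features
W0 = 0.1 * rng.standard_normal((C, H))        # input-to-hidden weights W^(0)
W1 = 0.1 * rng.standard_normal((H, K))        # hidden-to-output weights W^(1)

Z = A_hat @ np.maximum(A_hat @ X @ W0, 0.0) @ W1   # Eq. (1): logits before softmax
print(Z.shape)                                      # (N, K)
```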
Our key motivation for constructing deep graph models is to efficiently explore the information of high-order neighbors and then combine these messages from different orders of neighbors in an AdaBoost way. Nevertheless, if we naively extract information from high-order neighbors based on GCN, we are faced with stacking l layers' parameter matrices W^{(i)}, i = 0, ..., l − 1, which is costly in computation. Besides, Multi-Scale Deep Graph Convolutional Networks (Luan et al., 2019) also theoretically demonstrated that, if we simply deepen GCN, the output can only contain the stationary information of the graph structure and loses all the local information of the nodes due to smoothing. Intuitively, a desirable representation of node features does not necessarily need many nonlinear transformations f applied to it, simply because the feature of each node is normally a one-dimensional sparse vector rather than a multi-dimensional structure, e.g., an image, which intuitively needs a deep convolutional network to extract a high-level representation for vision tasks. This insight has been empirically demonstrated in many recent works (Wu et al., 2019; Klicpera et al., 2018; Xu et al., 2018a), showing that a two-layer fully-connected neural network is a better choice in the implementation. Similarly, our AdaGCN follows this direction by choosing an appropriate f in each layer rather than directly deepening GCN layers.
Thus, we propose to remove ReLU to avoid the expensive joint optimization of multiple parameter matrices. Similarly, Simplified Graph Convolution (SGC) (Wu et al., 2019) also adopted this practice, arguing that the nonlinearity between GCN layers is not crucial and that the majority of the benefit arises from the local weighting of neighboring features. The simplified graph convolution is then:

Z = Â^l X W^{(0)} W^{(1)} ··· W^{(l−1)} = Â^l X W̃,   (2)

where we collapse W^{(0)} W^{(1)} ··· W^{(l−1)} into W̃ and Â^l denotes Â to the l-th power. In particular, one crucial effect of ReLU in GCN is to accelerate the convergence of the matrix multiplication, since ReLU is intuitively a contraction mapping. Thus, removing the ReLU operation can also alleviate the oversmoothing issue, i.e., slow the convergence of node embeddings to indistinguishable ones (Li et al., 2018). Additionally, without ReLU this simplified graph convolution is also able to avoid the aforementioned joint optimization over multiple parameter matrices, resulting in computational benefits. Nevertheless, we find that this type of stacked linear transformation has insufficient power for representing the information of high-order neighbors, as revealed in the experiment described in Appendix A.2. Therefore, we propose to utilize an appropriate nonlinear function f_θ, e.g., a two-layer fully-connected neural network, to replace the linear transformation W̃ in Eq. 2 and enhance the representation ability of each base classifier in AdaGCN as follows:

Z^{(l)} = f_θ(Â^l X),   (3)
where Z(l) represents the final embedding matrix (output logits before Softmax) after the l-th base classifier in AdaGCN. This formulation also implies that the l-th base classifier in AdaGCN is extracting knowledge from features of current nodes and their l-th hop of neighbors. Due to the fact that the function of l-th base classifier in AdaGCN is similar to that of the l-th layer in other traditional GCN-based methods that directly stack many graph convolutional layers, we regard the whole part of l-th base classifier as the l-th layers in AdaGCN. As for the realization of Multi-class AdaBoost, we apply SAMME (Stagewise Additive Modeling using a Multi-class Exponential loss function) algorithm (Hastie et al., 2009), a natural and clean multi-class extension of the two-class AdaBoost adaptively combining weak classifiers.
As illustrated in Figure 1, we apply base classifier f (l)θ to extract knowledge from current node feature and l-th hop of neighbors by minimizing current weighted loss. Then we directly compute the weighted error rate err(l) and corresponding weight α(l) of current base classifier f (l)θ as follows:
err(l) = n∑ i=1 wiI ( ci 6= f (l)θ (xi) ) / n∑ i=1 wi
α(l) = log 1− err(l)
err(l) + log(K − 1),
(4)
where wi denotes the weight of i-th node and ci represents the category of current i-th node. To attain a positive α(l), we only need (1 − err(l)) > 1/K, i.e., the accuracy of each weak classifier
should be better than random guess (Hastie et al., 2009). This can be met easily to guarantee the weights to be updated in the right direction. Then we adjust nodes’ weights by increasing weights on incorrectly classified ones:
wi ← wi · exp ( α(l) · I ( ci 6= f (l)θ (xi) )) , i = 1, . . . , n (5)
After re-normalizing the weights, we then compute Âl+1X = Â · (ÂlX) to sequentially extract knowledge from l+1-th hop of neighbors in the following base classifier f (l+1)θ . One crucial point of AdaGCN is that different from traditional AdaBoost, we only define one fθ, e.g. a two-layer fully connected neural network, which in practice is recursively optimized in each base classifier just similar to a recurrent neural network. This also indicates that the parameters from last base classifier are leveraged as the initialization of next base classifier, which coincides with our intuition that l+1-th hop of neighbors are directly connected from l-th hop of neighbors. The efficacy of this kind of layer-wise training has been similarly verified in (Belilovsky et al., 2018) recently. Further, we combine the predictions from different orders of neighbors in an Adaboost way to obtain the final prediction C(A,X):
C(A,X) = argmax k L∑ l=0 α(l)f (l) θ (Â lX) (6)
Finally, we obtain the concise form of AdaGCN in the following:
ÂlX = Â · (Âl−1X)
Z(l) = f (l) θ (Â lX)
Z = AdaBoost(Z(l))
(7)
Note that fθ is non-linear, rather than linear in SGC (Wu et al., 2019), to guarantee the representation power. As shown in Figure 1, the architecture of AdaGCN is a variant of RNN with synchronous sequence input and output. Although the same classifier architecture is adopted for f (l)θ , their parameters are different, which is different from vanilla RNN. We provide a detailed description of the our algorithm in Section 3.
2.2 COMPARISON WITH EXISTING METHODS
Architectural Difference. As illustrated in Figure 1 and 2, there is an apparent difference among the architectures of GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), Jumping Knowledge (JK) (Xu et al., 2018b) and AdaGCN. Compared with these existing graph convolutional approaches that sequentially convey intermediate result Z(l) to compute final prediction, our AdaGCN transmits weights of nodes wi, aggregated features of different hops of neighbors ÂlX . More importantly, in AdaGCN the embedding Z(l) is independent of the flow of computation in the network and the sparse adjacent matrix  is also not directly involved in the computation of individual network because we compute
Â(l+1)X in advance and then feed it instead of  into the classifier f (l+1)θ , thus yielding significant computation reduction, which will be discussed further in Section 3.
Connection with PPNP and APPNP. We also established a strong connection between AdaGCN and previous state-of-the-art Personalized Propagation of Neural Predictions (PPNP) and Approximate PPNP (APPNP) (Klicpera et al., 2018) method that leverages personalized pagerank to reconstruct graph convolutions in order to use information from a large and adjustable neighborhood. The analysis can be summarized in the following Proposition 1. Proof can refer to Appendix A.3.
Proposition 1. Suppose that γ is the teleport factor. Let matrix sequence {Z(l)} be from the output of each layer l in AdaGCN, then PPNP is equivalent to the Exponential Moving Average (EMA) with exponentially decreasing factor γ on {Z(l)} in a sharing parameters version, and its approximate version APPNP can be viewed as the approximated form of EMA with a limited number of terms.
Proposition 1 illustrates that AdaGCN can be viewed as an adaptive form of APPNP, formulated as:
Z = L∑ l=0 α(l)f (l) θ (Â lX) (8)
Specifically, the first discrepancy between AdaGCN and APPNP lies in the adaptive coefficient α(l) in AdaGCN determined by the error of l-th base classifier f (l)θ rather than fixed exponentially decreased weights in APPNP. In addition, AdaGCN employs classifier f (l)θ with different parameters to learn the embedding of different orders of neighbors, while APPNP shares these parameters in its form. We verified this benefit of our approach in our experiments shown in Section 4.2.
Comparison with MixHop MixHop (Abu-El-Haija et al., 2019) applied the similar way of graph convolution by repeatedly mixing feature representations of neighbors at various distance. Proposition 2 proves that both AdaGCN and MixHop are able to represent feature differences among neighbors while previous GCNs-based methods cannot. Proof can refer to Appendix A.4. Recap the definition of general layer-wise Neighborhood Mixing (Abu-El-Haija et al., 2019) as follows: Definition 1. General layer-wise Neighborhood Mixing: A graph convolution network has the ability to represent the layer-wise neighborhood mixing if for any b0, b1, ..., bL, there exists an injective mapping f with a setting of its parameters, such that the output of this graph convolution network can express the following formula:
f ( L∑ l=0 blσ ( ÂlX )) (9)
Proposition 2. AdaGCNs defined by our proposed approach (Eq. equation 7) are capable of representing general layer-wise neighborhood mixing, i.e., can meet the Definition 1.
Albeit the similarity, AdaGCN distinguishes from MixHop in many aspects. Firstly, MixHop concatenates all outputs from each order of neighbors while we combines these predictions in an Adaboost way, which has theoretical generalization guarantee based on boosting theory Hastie et al. (2009). Oono & Suzuki (2020) have recently derived the optimization and generalization guarantees of multi-scale GNNs, serving as the theoretical backbone of AdaGCN. Meantime, MixHop allows full linear mixing of different orders of neighboring features, while AdaGCN utilizes different nonlinear transformation f (l)θ among all layers, enjoying stronger expressive power.
3 ALGORITHM
In practice, we employ SAMME.R (Hastie et al., 2009), the soft version of SAMME, in AdaGCN. SAMME.R (R for Real) algorithm (Hastie et al., 2009) leverages real-valued confidence-rated predictions, i.e., weighted probability estimates, rather than predicted hard labels in SAMME, in the prediction combination, which has demonstrated a better generalization and faster convergence than SAMME. We elaborate the final version of AdaGCN in Algorithm 1. We provide the analysis on the choice of model depth L in Appendix A.7, and then we elaborate the computational advantage of AdaGCN in the following.
Analysis of Computational Advantage. Due to the similarity of graph convolution in MixHop (Abu-El-Haija et al., 2019), AdaGCN also requires no additional memory or computational complexity compared with previous GCN models. Meanwhile, our approach enjoys huge computational advantage compared with GCN-based models, e.g., PPNP and APPNP, stemming from excluding the additional computation involved in sparse tensors, such as the sparse tensor multiplication between  and other dense tensors, in the forward and backward propagation of the neural network. Specifically, there are only L times sparse tensor operations for an AdaGCN model with L layers, i.e., ÂlX =  · (Âl−1X) for each layer l. This operation in each layer yields a dense tensor
Algorithm 1 AdaGCN based on SAMME.R Algorithm Input: Features Matrix X , normalized adjacent matrix Â, a two-layer fully connected network fθ, number of layers L and number of classes K. Output: Final combined prediction C(A,X).
1: Initialize the node weights wi = 1/n, i = 1, 2, ..., n on training set, neighbors feature matrix X̂(0) = X and classifier f (−1)θ . 2: for l = 0 to L do 3: Fit the graph convolutional classifier f (l)θ on neighbor feature matrix X̂
(l) based on f (l−1)θ by minimizing current weighted loss.
4: Obtain the weighted probability estimates p(l)(X̂(l)) for f (l)θ : p (l) k (X̂ (l)) = Softmax(f (l)θ (c = k|X̂ (l))), k = 1, . . . ,K 5: Compute the individual prediction h(l)k (x) for the current graph convolutional classifier f (l) θ :
h (l) k (X̂ (l))← (K − 1)
( log p
(l) k (X̂ (l))− 1 K ∑ k′ log p (l) k′ (X̂ (l)) ) where k = 1, . . . ,K.
6: Adjust the node weights wi for each node xi with label yi on training set: wi ← wi · exp ( −K − 1
K y>i log p (l) (xi)
) , i = 1, . . . , n
7: Re-normalize all weights wi. 8: Update l+1-hop neighbor feature matrix X̂(l+1): X̂(l+1) = ÂX̂(l) 9: end for
10: Combine all predictions h(l)k (X̂ (l)) for l = 0, ..., L.
C(A,X) = argmax k L∑ l=0 h (l) k (X̂ (l))
11: return Final combined prediction C(A,X).
Bl = ÂlX for the l-th layer, which is then fed into the computation in a two-layer fully-connected network, i.e., f (l)θ (B
l) = ReLU(BlW (0))W (1). Due to the fact that dense tensor Bl has been computed in advance, there is no other computation related to sparse tensors in the multiple forward and backward propagation procedures while training the neural network. By contrast, this multiple computation involved in sparse tensors in the GCN-based models, e.g., GCN: Â ReLU(ÂXW (0))W (1), is highly expensive. AdaGCN avoids these additional sparse tensor operations in the neural network and then attains huge computational efficiency. We demonstrate this viewpoint in the Section 4.3.
4 EXPERIMENTS
Experimental Setup. We select five commonly used graphs: CiteSeer, Cora-ML (Bojchevski & Günnemann, 2018; McCallum et al., 2000), PubMed (Sen et al., 2008), MS-Academic (Shchur et al., 2018) and Reddit. Dateset statistics are summarized in Table 1. Recent graph neural networks suffer from overfitting to a single splitting of training, validation and test datasets (Klicpera et al., 2018). To address this problem, inspired by (Klicpera et al., 2018), we test all approaches on multiple random splits and initialization to conduct a rigorous study. Detailed dataset splittings are provided in Appendix A.6.
Citeseer
Basic Setting of Baselines and AdaGCN. We compare AdaGCN with GCN (Kipf & Welling, 2017) and Simple Graph Convolution (SGC) (Wu et al., 2019) in Figure 3. In Table 2, we employ the same baselines as (Klicpera et al., 2018): V.GCN (vanilla GCN) (Kipf & Welling, 2017) and GCN with our early stopping, N-GCN (network of GCN) (Abu-El-Haija et al., 2018a), GAT (Graph Attention Networks) (Veličković et al., 2018), BT.FP (bootstrapped feature propagation) (Buchnik & Cohen, 2018) and JK (jumping knowledge networks with concatenation) (Xu et al., 2018b). In the computation part, we additionally compare AdaGCN with FastGCN (Chen et al., 2018) and GraphSAGE (Hamilton et al., 2017). We refer to the result of baselines from (Klicpera et al., 2018) and the implementation of AdaGCN is adapted from APPNP. For AdaGCN, after the line search on hyper-parameters, we set h = 5000 hidden units for the first four datasets except Ms-academic with h = 3000, and 15, 12, 20 and 5 layers respectively due to the different graph structures. In addition, we set dropout rate to 0 for Citeseer and Cora-ML datasets and 0.2 for the other datasets and 5×10−3L2 regularization on the first linear layer. We set weight decay as 1×10−3 for Citeseer while 1 × 10−4 for others. More detailed model parameters and analysis about our early stopping mechanism can be referred from Appendix A.6.
4.1 DESIGN OF DEEP GRAPH MODELS TO CIRCUMVENT OVERSMOOTHING EFFECT
It is well-known that GCN suffers from oversmoothing (Li et al., 2018) with the stacking of more graph convolutions. However, combination of knowledge from each layer to design deep graph
models is a reasonable method to circumvent oversmoothing issue. In our experiment, we aim to explore the prediction performance of GCN, GCN with residual connection (Kipf & Welling, 2017), SGC and our AdaGCN with a growing number of layers.
From Figure 3, it can be easily observed that oversmoothing leads to the rapid decreasing of accuracy for GCN (blue line) as the layer increases. In contrast, the speed of smoothing (green line) of SGC is much slower than GCN due to the lack of ReLU analyzed in Section 2.1. Similarly, GCN with residual connection (yellow line) partially mitigates the oversmoothing effect of original GCN but fails to take advantage of information from different orders of neighbors to improve the prediction performance constantly. Remarkably, AdaGCN (red line) is able to consistently enhance the performance with the increasing of layers across the three datasets. This implies that AdaGCN can efficiently incorporate knowledge from different orders of neighbors and circumvent oversmoothing of original GCN in the process of constructing deep graph models. In addition, the fluctuation of performance for AdaGCN is much lower than GCN especially when the number of layer is large.
4.2 PREDICTION PERFORMANCE
We conduct a rigorous study of AdaGCN on four datasets under multiple splittings of dataset. The results from Table 2 suggest the state-of-the-art performance of our approach and the improvement compared with APPNP validates the benefit of adaptive form for our AdaGCN. More rigorously, p values under paired t test demonstrate the significance of improvement for our method.
In the realistic setting, graphs usually have different labeled nodes and thus it is necessary to investigate the robust performance of methods on different number of labeled nodes. Here we utilize label rates to measure the different numbers of labeled nodes and then sample corresponding labeled nodes per class on graphs respectively. Table 3 presents the consistent state-of-the-art performance of AdaGCN under different label rates. An interesting manifestation from Table 3 is that AdaGCN yields more improvement on fewer label rates compared with APPNP, showing more efficiency on graphs with few labeled nodes. Inspired by the Layer Effect on graphs (Sun et al., 2019), we argue that the increase of layers in AdaGCN can result in more benefits on the efficient propagation of label signals especially on graphs with limited labeled nodes.
More rigorously, we additionally conduct the comparison on a larger dataset, i.e., Reddit. We choose the best layer as 4 due to the fact that AdaGCN with larger number of layers tends to suffer from overfitting on this relatively simple dataset (with high label rate 65.9%). Table 4 suggests that AdaGCN can still outperform other typical baselines, including V.GCN, PPNP and APPNP. More experimental details can be referred from Appendix A.6.
4.3 COMPUTATIONAL EFFICIENCY
Without the additional computational cost involved in sparse tensors in the propagation of the neural network, AdaGCN presents huge computational efficiency. From the left part of Figure 4, it exhibits that AdaGCN has the fastest speed of per-epoch training time in comparison with other methods except the comparative performance with FastGCN in Pubmed. In addition, there is a somewhat inconsistency in computation of FastGCN, with fastest speed in Pubmed but slower than
GCN on Cora-ML and MS-Academic datasets. Furthermore, with multiple power iterations involved in sparse tensors, APPNP unfortunately has relatively expensive computation cost. It should be noted that this computational advantage of AdaGCN is more significant when it comes to large datasets, e.g., Reddit. Table 4 demonstrates AdaGCN has the potential to perform much faster on larger datasets.
Besides, we explore the computational cost of ReLU and sparse adjacency tensor with respect to the number of layers in the right part of Figure 4. We focus on comparing AdaGCN with SGC and GCN as other GCN-based methods, such as GraphSAGE and APPNP, behave similarly with GCN. Particularly, we can easily observe that both SGC (green line) and GCN (red line) show a linear increasing tendency and GCN yields a larger slope arises from ReLU and more parameters. For SGC, stacking more layers directly is undesirable regarding the computation. Thus, a limited number of SGC layers is preferable with more advanced optimization techniques Wu et al. (2019). It also shows that the computational cost involved sparse matrices in neural networks plays a dominant role in all the cost especially when the layer is large enough. In contrast, our AdaGCN (pink line) displays an almost constant trend as the layer increases simply because it excludes the extra computation involved in sparse tensors Â, such as · · · Â ReLU(ÂXW (0))W (1) · · · , in the process of training neural networks. AdaGCN maintains the updating of parameters in the f (l)θ with a fixed architecture in each layer while the layer-wise optimization, therefore displaying a nearly constant computation cost within each epoch although more epochs are normally needed in the entire layer-wise training. We leave the analysis of exact time and memory complexity of AdaGCN as future works, but boosting-based algorithms including AdaGCN is memory-efficient (Oono & Suzuki, 2020).
5 DISCUSSIONS AND CONCLUSION
One potential concern is that AdaBoost (Hastie et al., 2009; Freund et al., 1999) is established on i.i.d. hypothesis while graphs have inherent data-dependent property. Fortunately, the statistical convergence and consistency of boosting (Lugosi & Vayatis, 2001; Mannor et al., 2003) can still be preserved when the samples are weakly dependent (Lozano et al., 2013). More discussion can refer to Appendix A.5. In this paper, we propose a novel RNN-like deep graph neural network architecture called AdaGCNs. With the delicate architecture design, our approach AdaGCN can effectively explore and exploit knowledge from different orders of neighbors in an Adaboost way. Our work paves a way towards better combining different-order neighbors to design deep graph models rather than only stacking on specific type of graph convolution.
ACKNOWLEDGMENTS
Z. Lin is supported by NSF China (grant no.s 61625301 and 61731018), Major Scientific Research Project of Zhejiang Lab (grant no.s 2019KB0AC01 and 2019KB0AB02), Beijing Academy of Artificial Intelligence, and Qualcomm.
A APPENDIX
A.1 RELATED WORKS ON DEEP GRAPH MODELS
A straightforward solution (Kipf & Welling, 2017; Xu et al., 2018b) inspired by ResNets (He et al., 2016) was by adding residual connections, but this practice was unsatisfactory both in prediction performance and computational efficiency towards building deep graph models, as shown in our experiments in Section 4.1 and 4.3. More recently, JK (Jumping Knowledge Networks (Xu et al., 2018b)) introduced jumping connections into final aggregation mechanism in order to extract knowledge from different layers of graph convolutions. However, this straightforward change of GCN architecture exhibited inconsistent empirical performance for different aggregation operators, which cannot demonstrate the successful construction of deep layers. In addition, Graph powering-based method (Jin et al., 2019) implicitly leveraged more spatial information by extending classical spectral graph theory to robust graph theory, but they concentrated on defending adversarial attacks rather than model depth. LanczosNet (Liao et al., 2019) utilized Lanczos algorithm to construct low rank approximations of the graph Laplacian and then can exploit multi-scale information. Moreover, APPNP (Approximate Personalized Propagation of Neural Predictions, (Klicpera et al., 2018)) leveraged the relationship between GCN and personalized PageRank to derive an improved global propagation scheme. Beyond these, DeepGCNs (Li et al., 2019) directly adapted residual, dense connection and dilated convolutions to GCN architecture, but it mainly focused on the task of point cloud semantic segmentation and has not demonstrated its effectiveness in typical graph tasks. Similar to our work, Deep Adaptive Graph Neural Network (DAGNN) (Liu et al., 2020) also focused on incorporating information from large receptive fields through the entanglement of representation transformation and propagation, while our work efficiently ensembles knowledge from large receptive fields in an Adaboost manner. Other related works based on global attention models (Puny et al., 2020) and sample-based methods (Zeng et al., 2019) are also helpful to construct deep graph models.
A.2 INSUFFICIENT REPRESENTATION POWER OF ADASGC
As illustrated in Figure 5, as the number of layers increases, AdaSGC with only linear transformations has insufficient representation power, both in extracting knowledge from high-order neighbors and in combining information from different orders of neighbors, whereas AdaGCN exhibits a consistent improvement in performance as the depth grows.
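The contrast between the two base classifiers can be sketched as follows (PyTorch assumed; weight shapes are illustrative): AdaSGC applies only a linear map to $\hat{A}^l X$, whereas AdaGCN uses a small two-layer MLP as in Eq. 3.

```python
# Hedged sketch of the two base classifiers contrasted in Figure 5.
import torch

def adasgc_base(B_l, W):                  # B_l = A_hat^l X
    return B_l @ W                        # purely linear transformation

def adagcn_base(B_l, W0, W1):
    return torch.relu(B_l @ W0) @ W1      # non-linear two-layer classifier f_theta
```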
A.3 PROOF OF PROPOSITION 1
We first elaborate Proposition 1 as follows and then provide the proof.
Suppose that $\gamma$ is the teleport factor. Consider the output $Z_{\mathrm{PPNP}} = \gamma(I - (1-\gamma)\hat{A})^{-1} f_\theta(X)$ of PPNP and the output $Z_{\mathrm{APPNP}}$ of its approximated version APPNP. Let the matrix sequence $\{Z^{(l)}\}$ be the output of each layer $l$ in AdaGCN. Then PPNP is equivalent to the Exponential Moving Average (EMA) with exponentially decreasing factor $\gamma$, a first-order infinite impulse response filter, applied to $\{Z^{(l)}\}$ in a parameter-sharing version, i.e., $f_\theta^{(l)} \equiv f_\theta$. In addition, APPNP, which we reformulate in Eq. 10, can be viewed as the approximation of this EMA with a limited number of terms:
$$Z_{\mathrm{APPNP}} = \Big(\gamma \sum_{l=0}^{L-1} (1-\gamma)^l \hat{A}^l + (1-\gamma)^L \hat{A}^L\Big) f_\theta(X) \qquad (10)$$
Proof. By the Neumann series expansion, $Z_{\mathrm{PPNP}}$ can be written as
$$Z_{\mathrm{PPNP}} = \gamma\big(I - (1-\gamma)\hat{A}\big)^{-1} f_\theta(X) = \gamma \sum_{l=0}^{\infty} (1-\gamma)^l \hat{A}^l f_\theta(X),$$
where the feature embedding matrices $\{Z^{(l)}\}$ for all orders of neighbors share the same parameters $f_\theta$. If we relax this sharing to an adaptive form with respect to the layer and move $\hat{A}^l$ inside $f_\theta$, then the output can be approximately formulated as
$$Z_{\mathrm{PPNP}} \approx \gamma \sum_{l=0}^{\infty} (1-\gamma)^l f_\theta^{(l)}(\hat{A}^l X).$$
This relaxed version of PPNP is the Exponential Moving Average of the matrix sequence $\{Z^{(l)}\}$ with exponentially decreasing factor $\gamma$. Moreover, if we approximate the EMA by truncating it after $L-1$ terms, the weight omitted by stopping after $L-1$ terms is $(1-\gamma)^L$. Thus, the approximated EMA is exactly the APPNP form:
$$Z_{\mathrm{APPNP}} = \Big(\gamma \sum_{l=0}^{L-1} (1-\gamma)^l \hat{A}^l + (1-\gamma)^L \hat{A}^L\Big) f_\theta(X).$$
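A quick numerical check (NumPy; the values of $\gamma$ and $L$ are arbitrary) confirms that the truncated EMA weights in Eq. 10 sum to one for any $\gamma \in (0, 1]$.

```python
# gamma * sum_{l<L} (1-gamma)^l + (1-gamma)^L = 1
import numpy as np

gamma, L = 0.1, 10
weights = [gamma * (1 - gamma) ** l for l in range(L)] + [(1 - gamma) ** L]
print(np.isclose(sum(weights), 1.0))   # True
```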
A.4 PROOF OF PROPOSITION 2
Proof. We consider a two-layer fully-connected neural network as $f$ in Eq. 8, so the output of AdaGCN can be formulated as
$$Z = \sum_{l=0}^{L} \alpha^{(l)} \sigma(\hat{A}^l X W^{(0)}) W^{(1)}.$$
In particular, for each layer $l$ we set $W^{(0)} = \frac{b_l}{\mathrm{sign}(b_l)\,\alpha^{(l)}} I$ and $W^{(1)} = \mathrm{sign}(b_l) I$, where $\mathrm{sign}(b_l)$ denotes the sign of $b_l$. Assuming $\alpha^{(l)} > 0$ (which holds whenever the base classifier is better than random guessing), the scalar $\frac{b_l}{\mathrm{sign}(b_l)\,\alpha^{(l)}} = \frac{|b_l|}{\alpha^{(l)}}$ is positive, so the positive homogeneity of $\sigma = \mathrm{ReLU}$ gives
$$Z = \sum_{l=0}^{L} \alpha^{(l)} \sigma\Big(\hat{A}^l X \tfrac{b_l}{\mathrm{sign}(b_l)\,\alpha^{(l)}} I\Big)\, \mathrm{sign}(b_l) I = \sum_{l=0}^{L} \alpha^{(l)} \sigma(\hat{A}^l X)\, \tfrac{b_l}{\mathrm{sign}(b_l)\,\alpha^{(l)}}\, \mathrm{sign}(b_l) = \sum_{l=0}^{L} b_l\, \sigma\big(\hat{A}^l X\big).$$
The proof that GCN-based methods are not capable of representing general layer-wise neighborhood mixing has been given in MixHop (Abu-El-Haija et al., 2019). This completes the proof of Proposition 2.
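A numerical sanity check of the weight construction above (NumPy; the values of $b_l$ and $\alpha^{(l)}$ are arbitrary) illustrates the positive-homogeneity step.

```python
# Check: alpha * relu(c * M) * sign(b) == b * relu(M), with c = b / (sign(b) * alpha) > 0.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))          # stands in for A_hat^l X
b_l, alpha = -2.5, 0.7                   # arbitrary target coefficient, positive boosting weight
c = b_l / (np.sign(b_l) * alpha)         # scalar encoded in W^(0); always positive
lhs = alpha * np.maximum(c * M, 0) * np.sign(b_l)
rhs = b_l * np.maximum(M, 0)
print(np.allclose(lhs, rhs))             # True
```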
A.5 EXPLANATION ABOUT CONSISTENCY OF BOOSTING ON DEPENDENT DATA
Definition 2. ($\beta$-mixing sequences.) Let $\sigma_i^j = \sigma(W_i, W_{i+1}, \ldots, W_j)$ be the $\sigma$-field generated by a strictly stationary sequence of random variables $W = (W_i, W_{i+1}, \ldots, W_j)$. The $\beta$-mixing coefficient is defined by
$$\beta_W(n) = \sup_{k}\; \mathbb{E} \sup \Big\{ \big| P(A \mid \sigma_1^k) - P(A) \big| : A \in \sigma_{k+n}^{\infty} \Big\}.$$
Then a sequence $W$ is called $\beta$-mixing if $\lim_{n\to\infty}\beta_W(n) = 0$. Further, it is algebraically $\beta$-mixing if there is a positive constant $r_\beta$ such that $\beta_W(n) = O(n^{-r_\beta})$. Definition 3. (Consistency) A classification rule is consistent for a certain distribution $P$ if $E(L(h_n)) = P\{h_n(X) \neq Y\} \to a$ as $n \to \infty$, where $a$ is a constant. It is strongly Bayes-risk consistent if $\lim_{n\to\infty} L(h_n) = a$ almost surely.
Under these definitions, the convergence and consistency of the regularized boosting method on stationary $\beta$-mixing sequences can be proved under mild assumptions. More details can be found in (Lozano et al., 2013).
A.6 EXPERIMENTAL DETAILS
Early Stopping on AdaGCN. We apply the same early stopping mechanism across all the methods as (Klicpera et al., 2018) for fair comparison. Furthermore, boosting theory also has the capacity to perfectly incorporate early stopping and it has been shown that for several boosting algorithms including AdaBoost, this regularization via early stopping can provide guarantees of consistency (Zhang et al., 2005; Jiang et al., 2004; Bühlmann & Yu, 2003).
Dataset Splitting. We choose a training set with a fixed number of nodes per class, an early stopping set of 500 nodes, and a test set consisting of the remaining nodes. Each experiment is run with 5 random initializations on each data split, leading to a total of 100 runs per experiment. In the standard setting, we randomly select 20 nodes per class. For the two different label rates on each graph, we select 6 and 11 nodes per class on Citeseer, 8 and 16 nodes per class on Cora-ML, 7 and 14 nodes per class on Pubmed, and 8 and 15 nodes per class on MS-Academic.
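The per-class splitting described above can be sketched as follows (NumPy; the helper name and arguments are placeholders, not the released splitting code).

```python
# Hedged sketch of the train / early-stop / test split with a fixed number of nodes per class.
import numpy as np

def split_per_class(labels, n_per_class=20, n_early_stop=500, seed=0):
    rng = np.random.default_rng(seed)
    train = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), n_per_class, replace=False)
        for c in np.unique(labels)
    ])
    rest = rng.permutation(np.setdiff1d(np.arange(labels.shape[0]), train))
    return train, rest[:n_early_stop], rest[n_early_stop:]
```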
Model parameters. For all GCN-based approaches, we use the same hyper-parameters in the original paper: learning rate of 0.01, 0.5 dropout rate, 5× 10−4 L2 regularization weight, and 16 hidden units. For FastGCN, we adopt the officially released code to conduct our experiments. PPNP and APPNP are adapted with best setting: K = 10 power iteration steps for APPNP, teleport probability γ = 0.1 on Cora-ML, Citeseer and Pubmed, γ = 0.2 on Ms-Academic. In addition, we use two layers with h = 64 hidden units and apply L2 regularization with λ = 5 × 10−3 on the weights of the first layer and use dropout with dropout rate d = 0.5 on both layers and the adjacency matrix. The early stopping criterion uses a patience of p = 100 and an (unreachably high) maximum of n = 10000 epochs.The implementation of AdaGCN is adapted from PPNP and APPNP. Corresponding patience p = 300 and n = 500 in the early stopping of AdaGCN. Moreover, SGC is re-implemented in a straightforward way without incorporating advanced optimization for better illustration and comparison. Other baselines are adopted the same parameters described in PPNP and APPNP.
Settings on Reddit dataset. By repeatedly tuning the parameters of these typical methods on Reddit, we finally choose weight decay rate as 10−4, hidden layer size 100 and epoch 20000 for AdaGCN. For APPNP, we opt weight decay rate as 10−5, dropout rate as 0 and epoch 500. V.GCN applies the same parameters in (Kipf & Welling, 2017) and we choose epoch as 500. All approaches have not deployed early stopping due to the expensive computational cost on the large Reddit dataset, which is also a fair comparison.
A.7 CHOICE OF THE NUMBER OF LAYERS
Different from the brute-force practice in CNNs of directly stacking many convolution layers, in AdaGCN there is theoretical guidance from boosting theory on the choice of model depth $L$, i.e., the number of base classifiers or layers. Specifically, according to boosting theory, increasing $L$ exponentially decreases the empirical loss; however, from the perspective of VC-dimension, an overly large $L$ can make AdaGCN overfit. It should be noted that deeper graph convolution layers in AdaGCN are not always better, which indeed depends heavily on the complexity of the data. In practice, $L$ can be determined via cross-validation. Specifically, we present a VC-dimension-based analysis to illustrate that a too large $L$ can yield overfitting of AdaGCN. For $L$ layers of AdaGCN, its hypothesis set is
$$\mathcal{F}_L = \Big\{ \arg\max_{k} \Big( \sum_{l=1}^{L} \alpha^{(l)} f_\theta^{(l)} \Big) : \alpha^{(l)} \in \mathbb{R},\ l \in [1, L] \Big\} \qquad (11)$$
Then the VC-dimension of $\mathcal{F}_L$ can be bounded as follows in terms of the VC-dimension $d$ of the family of base hypotheses: $\mathrm{VCdim}(\mathcal{F}_L) \le 2(d+1)(L+1)\log_2((L+1)e), \; (12)$ where $e$ is Euler's number, and this upper bound grows as $L$ increases. Combined with VC-dimension generalization bounds, these results imply that larger values of $L$ can lead to overfitting of AdaBoost. The same holds for AdaGCN, which suggests that there is no need to stack too many layers in AdaGCN in order to avoid overfitting. In practice, $L$ is typically determined via cross-validation. | 1. What is the focus of the paper in terms of graph convolutional networks?
2. What are the strengths of the proposed approach, particularly its efficiency and significance to the research community?
3. Do you have any concerns or questions regarding the comparison between MixHop and AdaGCN?
4. Can you explain the result shown in Figure 4 (Right) and provide further explanations? | Review | Review
Summary In this paper, the authors study graph convolutional networks, where they propose to use AdaBoost for Deep GCNs. This method makes it possible to use information from multi-hop neighbours. Computationally, the proposed method is efficient, which is illustrated through various experiments. The paper is well written with good clarity, while the proposed method is novel and significant to the research community.
Reasons for recommending acceptance
To the best of the reviewer's knowledge, the proposed scheme is novel. The method addresses the issue of using information from higher-order neighbours without increasing the computational complexity.
comprehensive experiments across multiple datasets, evaluating AdaGCN in terms of computational efficiency, accuracy, dependency on the number of layers.
Questions
While comparing MixHop against AdaGCN, the authors mention that AdaGCN does have generalization guarantees from Boosting theory. This statement is loose, and a formal justification may be needed.
In Figure 4 (Right), where the epoch time is measured against the number of layers, AdaGCN is shown to have nearly constant time w.r.t. layers. Some more explanation would be useful in understanding this. |
ICLR | Title
AdaGCN: Adaboosting Graph Convolutional Networks into Deep Models
Abstract
The design of deep graph models still remains to be investigated and the crucial part is how to explore and exploit the knowledge from different hops of neighbors in an efficient way. In this paper, we propose a novel RNN-like deep graph neural network architecture by incorporating AdaBoost into the computation of network; and the proposed graph convolutional network called AdaGCN (Adaboosting Graph Convolutional Network) has the ability to efficiently extract knowledge from high-order neighbors of current nodes and then integrates knowledge from different hops of neighbors into the network in an Adaboost way. Different from other graph neural networks that directly stack many graph convolution layers, AdaGCN shares the same base neural network architecture among all “layers” and is recursively optimized, which is similar to an RNN. Besides, We also theoretically established the connection between AdaGCN and existing graph convolutional methods, presenting the benefits of our proposal. Finally, extensive experiments demonstrate the consistent state-of-the-art prediction performance on graphs across different label rates and the computational advantage of our approach AdaGCN 1.
1 INTRODUCTION
Recently, research related to learning on graph structural data has gained considerable attention in machine learning community. Graph neural networks (Gori et al., 2005; Hamilton et al., 2017; Veličković et al., 2018), particularly graph convolutional networks (Kipf & Welling, 2017; Defferrard et al., 2016; Bruna et al., 2014) have demonstrated their remarkable ability on node classification (Kipf & Welling, 2017), link prediction (Zhu et al., 2016) and clustering tasks (Fortunato, 2010). Despite their enormous success, almost all of these models have shallow model architectures with only two or three layers. The shallow design of GCN appears counterintuitive as deep versions of these models, in principle, have access to more information, but perform worse. Oversmoothing (Li et al., 2018) has been proposed to explain why deep GCN fails, showing that by repeatedly applying Laplacian smoothing, GCN may mix the node features from different clusters and makes them indistinguishable. This also indicates that by stacking too many graph convolutional layers, the embedding of each node in GCN is inclined to converge to certain value (Li et al., 2018), making it harder for classification. These shallow model architectures restricted by oversmoothing issue ∗Corresponding author. 1Code is available at https://github.com/datake/AdaGCN.
limit their ability to extract the knowledge from high-order neighbors, i.e., features from remote hops of neighbors for current nodes. Therefore, it is crucial to design deep graph models such that high-order information can be aggregated in an effective way for better predictions.
There are some works (Xu et al., 2018b; Liao et al., 2019; Klicpera et al., 2018; Li et al., 2019; Liu et al., 2020) that tried to address this issue partially, and the discussion can refer to Appendix A.1. By contrast, we argue that a key direction of constructing deep graph models lies in the efficient exploration and effective combination of information from different orders of neighbors. Due to the apparent sequential relationship between different orders of neighbors, it is a natural choice to incorporate boosting algorithm into the design of deep graph models. As an important realization of boosting theory, AdaBoost (Freund et al., 1999) is extremely easy to implement and keeps competitive in terms of both practical performance and computational cost (Hastie et al., 2009). Moreover, boosting theory has been used to analyze the success of ResNets in computer vision (Huang et al., 2018) and AdaGAN (Tolstikhin et al., 2017) has already successfully incorporated boosting algorithm into the training of GAN (Goodfellow et al., 2014).
In this work, we focus on incorporating AdaBoost into the design of deep graph convolutional networks in a non-trivial way. Firstly, in pursuit of the introduction of AdaBoost framework, we refine the type of graph convolutions and thus obtain a novel RNN-like GCN architecture called AdaGCN. Our approach can efficiently extract knowledge from different orders of neighbors and then combine these information in an AdaBoost manner with iterative updating of the node weights. Also, we compare our AdaGCN with existing methods from the perspective of both architectural difference and feature representation power to show the benefits of our method. Finally, we conduct extensive experiments to demonstrate the consistent state-of-the-art performance of our approach across different label rates and computational advantage over other alternatives.
2 OUR APPROACH: ADAGCN
2.1 ESTABLISHMENT OF ADAGCN
Consider an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with $N$ nodes $v_i \in \mathcal{V}$ and edges $(v_i, v_j) \in \mathcal{E}$. $A \in \mathbb{R}^{N \times N}$ is the adjacency matrix with corresponding degree matrix $D_{ii} = \sum_j A_{ij}$. In the vanilla GCN model (Kipf & Welling, 2017) for semi-supervised node classification, the graph embedding of nodes with two convolutional layers is formulated as:
$$Z = \hat{A}\,\mathrm{ReLU}(\hat{A} X W^{(0)})\, W^{(1)} \qquad (1)$$
where $Z \in \mathbb{R}^{N \times K}$ is the final embedding matrix (output logits) of nodes before the softmax and $K$ is the number of classes. $X \in \mathbb{R}^{N \times C}$ denotes the feature matrix, where $C$ is the input dimension. $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$, where $\tilde{A} = A + I$ and $\tilde{D}$ is the degree matrix of $\tilde{A}$. In addition, $W^{(0)} \in \mathbb{R}^{C \times H}$ is the input-to-hidden weight matrix for a hidden layer with $H$ feature maps, and $W^{(1)} \in \mathbb{R}^{H \times K}$ is the hidden-to-output weight matrix.
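For reference, a minimal sketch of the two-layer GCN embedding in Eq. 1 (PyTorch assumed; $\hat{A}$ is held as a sparse tensor and the weight shapes follow the definitions above). Note that here the sparse products sit inside the forward pass and are repeated in every epoch.

```python
# Hedged sketch of Eq. 1: Z = A_hat ReLU(A_hat X W0) W1.
import torch

def gcn_two_layer(A_hat, X, W0, W1):
    H = torch.relu(torch.sparse.mm(A_hat, X) @ W0)   # N x H hidden features
    return torch.sparse.mm(A_hat, H) @ W1            # N x K output logits Z
```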
Our key motivation of constructing deep graph models is to efficiently explore information of highorder neighbors and then combine these messages from different orders of neighbors in an AdaBoost way. Nevertheless, if we naively extract information from high-order neighbors based on GCN, we are faced with stacking l layers’ parameter matrix W (i), i = 0, ..., l − 1, which is definitely costly in computation. Besides, Multi-Scale Deep Graph Convolutional Networks (Luan et al., 2019) also theoretically demonstrated that the output can only contain the stationary information of graph structure and loses all the local information in nodes for being smoothed if we simply deepen GCN. Intuitively, the desirable representation of node features does not necessarily need too many nonlinear transformation f applied on them. This is simply due to the fact that the feature of each node is normally one-dimensional sparse vector rather than multi-dimensional data structures, e.g., images, that intuitively need deep convolution network to extract high-level representation for vision tasks. This insight has been empirically demonstrated in many recent works (Wu et al., 2019; Klicpera et al., 2018; Xu et al., 2018a), showing that a two-layer fully-connected neural networks is a better choice in the implementation. Similarly, our AdaGCN also follows this direction by choosing an appropriate f in each layer rather than directly deepen GCN layers.
Thus, we propose to remove ReLU to avoid the expensive joint optimization of multiple parameter matrices. Similarly, Simplified Graph Convolution (SGC) (Wu et al., 2019) also adopted this prac-
tice, arguing that nonlinearity between GCN layers is not crucial and the majority of the benefits arises from local weighting of neighboring features. Then the simplified graph convolution is:
$$Z = \hat{A}^l X W^{(0)} W^{(1)} \cdots W^{(l-1)} = \hat{A}^l X \tilde{W}, \qquad (2)$$
where we collapse $W^{(0)} W^{(1)} \cdots W^{(l-1)}$ into $\tilde{W}$ and $\hat{A}^l$ denotes $\hat{A}$ to the $l$-th power. In particular, one crucial effect of ReLU in GCN is to accelerate the convergence of the repeated matrix multiplication, since ReLU is intuitively a contraction mapping. Thus, removing the ReLU operation can also alleviate the oversmoothing issue, i.e., slow the convergence of node embeddings to indistinguishable ones (Li et al., 2018). Additionally, without ReLU this simplified graph convolution also avoids the aforementioned joint optimization over multiple parameter matrices, resulting in computational benefits. Nevertheless, we find that this type of stacked linear transformation has insufficient power to represent the information of high-order neighbors, as revealed in the experiment described in Appendix A.2. Therefore, we propose to utilize an appropriate nonlinear function $f_\theta$, e.g., a two-layer fully-connected neural network, to replace the linear transformation $\tilde{W}$ in Eq. 2 and enhance the representation ability of each base classifier in AdaGCN as follows:
$$Z^{(l)} = f_\theta(\hat{A}^l X), \qquad (3)$$
where Z(l) represents the final embedding matrix (output logits before Softmax) after the l-th base classifier in AdaGCN. This formulation also implies that the l-th base classifier in AdaGCN is extracting knowledge from features of current nodes and their l-th hop of neighbors. Due to the fact that the function of l-th base classifier in AdaGCN is similar to that of the l-th layer in other traditional GCN-based methods that directly stack many graph convolutional layers, we regard the whole part of l-th base classifier as the l-th layers in AdaGCN. As for the realization of Multi-class AdaBoost, we apply SAMME (Stagewise Additive Modeling using a Multi-class Exponential loss function) algorithm (Hastie et al., 2009), a natural and clean multi-class extension of the two-class AdaBoost adaptively combining weak classifiers.
As illustrated in Figure 1, we apply base classifier f (l)θ to extract knowledge from current node feature and l-th hop of neighbors by minimizing current weighted loss. Then we directly compute the weighted error rate err(l) and corresponding weight α(l) of current base classifier f (l)θ as follows:
$$\mathrm{err}^{(l)} = \sum_{i=1}^{n} w_i\, \mathbb{I}\big(c_i \neq f_\theta^{(l)}(x_i)\big) \Big/ \sum_{i=1}^{n} w_i, \qquad \alpha^{(l)} = \log\frac{1 - \mathrm{err}^{(l)}}{\mathrm{err}^{(l)}} + \log(K - 1), \qquad (4)$$
where wi denotes the weight of i-th node and ci represents the category of current i-th node. To attain a positive α(l), we only need (1 − err(l)) > 1/K, i.e., the accuracy of each weak classifier
should be better than random guess (Hastie et al., 2009). This can be met easily to guarantee the weights to be updated in the right direction. Then we adjust nodes’ weights by increasing weights on incorrectly classified ones:
$$w_i \leftarrow w_i \cdot \exp\Big(\alpha^{(l)} \cdot \mathbb{I}\big(c_i \neq f_\theta^{(l)}(x_i)\big)\Big), \quad i = 1, \ldots, n \qquad (5)$$
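The SAMME statistics in Eqs. 4-5 can be sketched as follows (NumPy; `preds`, `labels` and `w` are placeholder arrays for the training-node predictions, labels and weights).

```python
# Hedged sketch of the weighted error, classifier weight and node-weight update.
import numpy as np

def samme_step(preds, labels, w, K):
    miss = (preds != labels).astype(float)
    err = np.sum(w * miss) / np.sum(w)                 # Eq. 4
    alpha = np.log((1 - err) / err) + np.log(K - 1)    # Eq. 4
    w = w * np.exp(alpha * miss)                       # Eq. 5
    return err, alpha, w / w.sum()                     # re-normalized weights
```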
After re-normalizing the weights, we then compute Âl+1X = Â · (ÂlX) to sequentially extract knowledge from l+1-th hop of neighbors in the following base classifier f (l+1)θ . One crucial point of AdaGCN is that different from traditional AdaBoost, we only define one fθ, e.g. a two-layer fully connected neural network, which in practice is recursively optimized in each base classifier just similar to a recurrent neural network. This also indicates that the parameters from last base classifier are leveraged as the initialization of next base classifier, which coincides with our intuition that l+1-th hop of neighbors are directly connected from l-th hop of neighbors. The efficacy of this kind of layer-wise training has been similarly verified in (Belilovsky et al., 2018) recently. Further, we combine the predictions from different orders of neighbors in an Adaboost way to obtain the final prediction C(A,X):
$$C(A, X) = \arg\max_{k} \sum_{l=0}^{L} \alpha^{(l)} f_\theta^{(l)}(\hat{A}^l X) \qquad (6)$$
Finally, we obtain the concise form of AdaGCN in the following:
$$\hat{A}^l X = \hat{A} \cdot (\hat{A}^{l-1} X), \qquad Z^{(l)} = f_\theta^{(l)}(\hat{A}^l X), \qquad Z = \mathrm{AdaBoost}\big(Z^{(l)}\big) \qquad (7)$$
Note that $f_\theta$ is non-linear, rather than linear as in SGC (Wu et al., 2019), in order to guarantee the representation power. As shown in Figure 1, the architecture of AdaGCN is a variant of an RNN with synchronous sequence input and output. Although the same classifier architecture is adopted for every $f_\theta^{(l)}$, their parameters are different, which differs from a vanilla RNN. We provide a detailed description of our algorithm in Section 3.
2.2 COMPARISON WITH EXISTING METHODS
Architectural Difference. As illustrated in Figure 1 and 2, there is an apparent difference among the architectures of GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), Jumping Knowledge (JK) (Xu et al., 2018b) and AdaGCN. Compared with these existing graph convolutional approaches that sequentially convey intermediate result Z(l) to compute final prediction, our AdaGCN transmits weights of nodes wi, aggregated features of different hops of neighbors ÂlX . More importantly, in AdaGCN the embedding Z(l) is independent of the flow of computation in the network and the sparse adjacent matrix  is also not directly involved in the computation of individual network because we compute
Â(l+1)X in advance and then feed it instead of  into the classifier f (l+1)θ , thus yielding significant computation reduction, which will be discussed further in Section 3.
Connection with PPNP and APPNP. We also established a strong connection between AdaGCN and previous state-of-the-art Personalized Propagation of Neural Predictions (PPNP) and Approximate PPNP (APPNP) (Klicpera et al., 2018) method that leverages personalized pagerank to reconstruct graph convolutions in order to use information from a large and adjustable neighborhood. The analysis can be summarized in the following Proposition 1. Proof can refer to Appendix A.3.
Proposition 1. Suppose that γ is the teleport factor. Let matrix sequence {Z(l)} be from the output of each layer l in AdaGCN, then PPNP is equivalent to the Exponential Moving Average (EMA) with exponentially decreasing factor γ on {Z(l)} in a sharing parameters version, and its approximate version APPNP can be viewed as the approximated form of EMA with a limited number of terms.
Proposition 1 illustrates that AdaGCN can be viewed as an adaptive form of APPNP, formulated as:
$$Z = \sum_{l=0}^{L} \alpha^{(l)} f_\theta^{(l)}(\hat{A}^l X) \qquad (8)$$
Specifically, the first discrepancy between AdaGCN and APPNP lies in the adaptive coefficient α(l) in AdaGCN determined by the error of l-th base classifier f (l)θ rather than fixed exponentially decreased weights in APPNP. In addition, AdaGCN employs classifier f (l)θ with different parameters to learn the embedding of different orders of neighbors, while APPNP shares these parameters in its form. We verified this benefit of our approach in our experiments shown in Section 4.2.
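To illustrate this discrepancy, the following snippet (NumPy; the $\alpha^{(l)}$ values are made up for illustration only) contrasts the fixed, exponentially decayed APPNP weight profile with the adaptive, error-driven AdaGCN coefficients.

```python
# Hedged illustration: fixed APPNP weights vs. hypothetical adaptive AdaGCN alphas.
import numpy as np

gamma, L = 0.1, 5
appnp_w = np.array([gamma * (1 - gamma) ** l for l in range(L)] + [(1 - gamma) ** L])
adagcn_alpha = np.array([1.2, 0.9, 0.7, 0.8, 0.4, 0.3])   # placeholder per-layer alphas
print(np.round(appnp_w, 3))        # decays deterministically with depth
print(np.round(adagcn_alpha, 3))   # set by each base classifier's weighted error
```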
Comparison with MixHop MixHop (Abu-El-Haija et al., 2019) applied the similar way of graph convolution by repeatedly mixing feature representations of neighbors at various distance. Proposition 2 proves that both AdaGCN and MixHop are able to represent feature differences among neighbors while previous GCNs-based methods cannot. Proof can refer to Appendix A.4. Recap the definition of general layer-wise Neighborhood Mixing (Abu-El-Haija et al., 2019) as follows: Definition 1. General layer-wise Neighborhood Mixing: A graph convolution network has the ability to represent the layer-wise neighborhood mixing if for any b0, b1, ..., bL, there exists an injective mapping f with a setting of its parameters, such that the output of this graph convolution network can express the following formula:
$$f\Big( \sum_{l=0}^{L} b_l\, \sigma\big(\hat{A}^l X\big) \Big) \qquad (9)$$
Proposition 2. AdaGCN as defined by our proposed approach (Eq. 7) is capable of representing general layer-wise neighborhood mixing, i.e., it satisfies Definition 1.
Albeit the similarity, AdaGCN distinguishes from MixHop in many aspects. Firstly, MixHop concatenates all outputs from each order of neighbors while we combines these predictions in an Adaboost way, which has theoretical generalization guarantee based on boosting theory Hastie et al. (2009). Oono & Suzuki (2020) have recently derived the optimization and generalization guarantees of multi-scale GNNs, serving as the theoretical backbone of AdaGCN. Meantime, MixHop allows full linear mixing of different orders of neighboring features, while AdaGCN utilizes different nonlinear transformation f (l)θ among all layers, enjoying stronger expressive power.
3 ALGORITHM
In practice, we employ SAMME.R (Hastie et al., 2009), the soft version of SAMME, in AdaGCN. SAMME.R (R for Real) algorithm (Hastie et al., 2009) leverages real-valued confidence-rated predictions, i.e., weighted probability estimates, rather than predicted hard labels in SAMME, in the prediction combination, which has demonstrated a better generalization and faster convergence than SAMME. We elaborate the final version of AdaGCN in Algorithm 1. We provide the analysis on the choice of model depth L in Appendix A.7, and then we elaborate the computational advantage of AdaGCN in the following.
Analysis of Computational Advantage. Due to the similarity of graph convolution in MixHop (Abu-El-Haija et al., 2019), AdaGCN also requires no additional memory or computational complexity compared with previous GCN models. Meanwhile, our approach enjoys huge computational advantage compared with GCN-based models, e.g., PPNP and APPNP, stemming from excluding the additional computation involved in sparse tensors, such as the sparse tensor multiplication between  and other dense tensors, in the forward and backward propagation of the neural network. Specifically, there are only L times sparse tensor operations for an AdaGCN model with L layers, i.e., ÂlX =  · (Âl−1X) for each layer l. This operation in each layer yields a dense tensor
Algorithm 1 AdaGCN based on the SAMME.R algorithm
Input: feature matrix $X$, normalized adjacency matrix $\hat{A}$, a two-layer fully-connected network $f_\theta$, number of layers $L$ and number of classes $K$.
Output: final combined prediction $C(A, X)$.
1: Initialize the node weights $w_i = 1/n,\ i = 1, \ldots, n$ on the training set, the neighbor feature matrix $\hat{X}^{(0)} = X$ and the classifier $f_\theta^{(-1)}$.
2: for $l = 0$ to $L$ do
3: Fit the graph convolutional classifier $f_\theta^{(l)}$ on the neighbor feature matrix $\hat{X}^{(l)}$, initialized from $f_\theta^{(l-1)}$, by minimizing the current weighted loss.
4: Obtain the weighted probability estimates $p^{(l)}(\hat{X}^{(l)})$ for $f_\theta^{(l)}$: $p_k^{(l)}(\hat{X}^{(l)}) = \mathrm{Softmax}\big(f_\theta^{(l)}(c = k \mid \hat{X}^{(l)})\big),\ k = 1, \ldots, K$.
5: Compute the individual prediction $h_k^{(l)}$ of the current graph convolutional classifier $f_\theta^{(l)}$: $h_k^{(l)}(\hat{X}^{(l)}) \leftarrow (K-1)\big(\log p_k^{(l)}(\hat{X}^{(l)}) - \frac{1}{K}\sum_{k'} \log p_{k'}^{(l)}(\hat{X}^{(l)})\big),\ k = 1, \ldots, K$.
6: Adjust the node weights $w_i$ for each node $x_i$ with label $y_i$ on the training set: $w_i \leftarrow w_i \cdot \exp\big(-\frac{K-1}{K}\, y_i^{\top} \log p^{(l)}(x_i)\big),\ i = 1, \ldots, n$.
7: Re-normalize all weights $w_i$.
8: Update the $(l+1)$-hop neighbor feature matrix: $\hat{X}^{(l+1)} = \hat{A}\hat{X}^{(l)}$.
9: end for
10: Combine all predictions $h_k^{(l)}(\hat{X}^{(l)})$ for $l = 0, \ldots, L$: $C(A, X) = \arg\max_{k} \sum_{l=0}^{L} h_k^{(l)}(\hat{X}^{(l)})$.
11: return the final combined prediction $C(A, X)$.
$B_l = \hat{A}^l X$ for the $l$-th layer, which is then fed into a two-layer fully-connected network, i.e., $f_\theta^{(l)}(B_l) = \mathrm{ReLU}(B_l W^{(0)}) W^{(1)}$. Because the dense tensor $B_l$ has been computed in advance, no further computation involving sparse tensors occurs in the repeated forward and backward propagation while training the neural network. By contrast, these repeated sparse-tensor computations in GCN-based models, e.g., GCN: $\hat{A}\,\mathrm{ReLU}(\hat{A} X W^{(0)}) W^{(1)}$, are highly expensive. AdaGCN avoids these additional sparse tensor operations in the neural network and thus attains a large computational gain. We demonstrate this point in Section 4.3.
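Putting Algorithm 1 together, a compact sketch of the AdaGCN loop follows (PyTorch assumed; `fit_weighted` stands for training the shared two-layer MLP $f_\theta$ on the weighted training nodes and is not shown, and `y_onehot` is the label indicator written as $y_i$ in Algorithm 1).

```python
# Hedged sketch of the SAMME.R-style AdaGCN loop, not the released implementation.
import torch

def adagcn_predict(A_hat, X, f_theta, fit_weighted, train_idx, y_onehot, L, K):
    n = train_idx.shape[0]
    w = torch.full((n,), 1.0 / n)          # node weights on the training set (step 1)
    B = X                                  # A_hat^0 X: 0-hop neighbor features
    combined = 0.0
    for l in range(L + 1):
        fit_weighted(f_theta, B[train_idx], y_onehot, w)        # step 3, warm-started
        log_p = torch.log_softmax(f_theta(B), dim=1).detach()   # log p^(l), step 4
        h = (K - 1) * (log_p - log_p.mean(dim=1, keepdim=True)) # SAMME.R prediction, step 5
        combined = combined + h
        w = w * torch.exp(-(K - 1) / K *
                          (y_onehot * log_p[train_idx]).sum(dim=1))   # step 6
        w = w / w.sum()                                         # step 7
        B = torch.sparse.mm(A_hat, B)                           # (l+1)-hop features, step 8
    return combined.argmax(dim=1)                               # steps 10-11
```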
4 EXPERIMENTS
Experimental Setup. We select five commonly used graphs: CiteSeer, Cora-ML (Bojchevski & Günnemann, 2018; McCallum et al., 2000), PubMed (Sen et al., 2008), MS-Academic (Shchur et al., 2018) and Reddit. Dataset statistics are summarized in Table 1. Recent graph neural networks suffer from overfitting to a single splitting of training, validation and test datasets (Klicpera et al., 2018). To address this problem, inspired by (Klicpera et al., 2018), we test all approaches on multiple random splits and initializations to conduct a rigorous study. Detailed dataset splittings are provided in Appendix A.6.
[Table 1: Dataset statistics]
Basic Setting of Baselines and AdaGCN. We compare AdaGCN with GCN (Kipf & Welling, 2017) and Simple Graph Convolution (SGC) (Wu et al., 2019) in Figure 3. In Table 2, we employ the same baselines as (Klicpera et al., 2018): V.GCN (vanilla GCN) (Kipf & Welling, 2017) and GCN with our early stopping, N-GCN (network of GCN) (Abu-El-Haija et al., 2018a), GAT (Graph Attention Networks) (Veličković et al., 2018), BT.FP (bootstrapped feature propagation) (Buchnik & Cohen, 2018) and JK (jumping knowledge networks with concatenation) (Xu et al., 2018b). In the computation part, we additionally compare AdaGCN with FastGCN (Chen et al., 2018) and GraphSAGE (Hamilton et al., 2017). We refer to the result of baselines from (Klicpera et al., 2018) and the implementation of AdaGCN is adapted from APPNP. For AdaGCN, after the line search on hyper-parameters, we set h = 5000 hidden units for the first four datasets except Ms-academic with h = 3000, and 15, 12, 20 and 5 layers respectively due to the different graph structures. In addition, we set dropout rate to 0 for Citeseer and Cora-ML datasets and 0.2 for the other datasets and 5×10−3L2 regularization on the first linear layer. We set weight decay as 1×10−3 for Citeseer while 1 × 10−4 for others. More detailed model parameters and analysis about our early stopping mechanism can be referred from Appendix A.6.
4.1 DESIGN OF DEEP GRAPH MODELS TO CIRCUMVENT OVERSMOOTHING EFFECT
It is well-known that GCN suffers from oversmoothing (Li et al., 2018) with the stacking of more graph convolutions. However, combination of knowledge from each layer to design deep graph
models is a reasonable method to circumvent oversmoothing issue. In our experiment, we aim to explore the prediction performance of GCN, GCN with residual connection (Kipf & Welling, 2017), SGC and our AdaGCN with a growing number of layers.
From Figure 3, it can be easily observed that oversmoothing leads to the rapid decreasing of accuracy for GCN (blue line) as the layer increases. In contrast, the speed of smoothing (green line) of SGC is much slower than GCN due to the lack of ReLU analyzed in Section 2.1. Similarly, GCN with residual connection (yellow line) partially mitigates the oversmoothing effect of original GCN but fails to take advantage of information from different orders of neighbors to improve the prediction performance constantly. Remarkably, AdaGCN (red line) is able to consistently enhance the performance with the increasing of layers across the three datasets. This implies that AdaGCN can efficiently incorporate knowledge from different orders of neighbors and circumvent oversmoothing of original GCN in the process of constructing deep graph models. In addition, the fluctuation of performance for AdaGCN is much lower than GCN especially when the number of layer is large.
4.2 PREDICTION PERFORMANCE
We conduct a rigorous study of AdaGCN on four datasets under multiple splittings of dataset. The results from Table 2 suggest the state-of-the-art performance of our approach and the improvement compared with APPNP validates the benefit of adaptive form for our AdaGCN. More rigorously, p values under paired t test demonstrate the significance of improvement for our method.
In the realistic setting, graphs usually have different labeled nodes and thus it is necessary to investigate the robust performance of methods on different number of labeled nodes. Here we utilize label rates to measure the different numbers of labeled nodes and then sample corresponding labeled nodes per class on graphs respectively. Table 3 presents the consistent state-of-the-art performance of AdaGCN under different label rates. An interesting manifestation from Table 3 is that AdaGCN yields more improvement on fewer label rates compared with APPNP, showing more efficiency on graphs with few labeled nodes. Inspired by the Layer Effect on graphs (Sun et al., 2019), we argue that the increase of layers in AdaGCN can result in more benefits on the efficient propagation of label signals especially on graphs with limited labeled nodes.
More rigorously, we additionally conduct the comparison on a larger dataset, i.e., Reddit. We choose the best layer as 4 due to the fact that AdaGCN with larger number of layers tends to suffer from overfitting on this relatively simple dataset (with high label rate 65.9%). Table 4 suggests that AdaGCN can still outperform other typical baselines, including V.GCN, PPNP and APPNP. More experimental details can be referred from Appendix A.6.
4.3 COMPUTATIONAL EFFICIENCY
Without the additional computational cost involved in sparse tensors in the propagation of the neural network, AdaGCN presents huge computational efficiency. From the left part of Figure 4, it exhibits that AdaGCN has the fastest speed of per-epoch training time in comparison with other methods except the comparative performance with FastGCN in Pubmed. In addition, there is a somewhat inconsistency in computation of FastGCN, with fastest speed in Pubmed but slower than
GCN on Cora-ML and MS-Academic datasets. Furthermore, with multiple power iterations involved in sparse tensors, APPNP unfortunately has relatively expensive computation cost. It should be noted that this computational advantage of AdaGCN is more significant when it comes to large datasets, e.g., Reddit. Table 4 demonstrates AdaGCN has the potential to perform much faster on larger datasets.
Besides, we explore the computational cost of ReLU and the sparse adjacency tensor with respect to the number of layers in the right part of Figure 4. We focus on comparing AdaGCN with SGC and GCN, as other GCN-based methods, such as GraphSAGE and APPNP, behave similarly to GCN. In particular, we can easily observe that both SGC (green line) and GCN (red line) show a linearly increasing tendency, and GCN yields a larger slope arising from ReLU and its additional parameters. For SGC, directly stacking more layers is undesirable in terms of computation; a limited number of SGC layers combined with more advanced optimization techniques (Wu et al., 2019) is therefore preferable. The figure also shows that the sparse-matrix operations inside the neural network dominate the overall cost, especially when the number of layers is large. In contrast, our AdaGCN (pink line) displays an almost constant trend as the number of layers increases, simply because it excludes the extra computation involving the sparse tensor $\hat{A}$, such as $\cdots \hat{A}\,\mathrm{ReLU}(\hat{A} X W^{(0)}) W^{(1)} \cdots$, from the training of the neural network. AdaGCN updates the parameters of $f_\theta^{(l)}$ with a fixed architecture in each layer during the layer-wise optimization, therefore displaying a nearly constant computational cost within each epoch, although more epochs are normally needed for the entire layer-wise training. We leave the analysis of the exact time and memory complexity of AdaGCN as future work, but boosting-based algorithms, including AdaGCN, are memory-efficient (Oono & Suzuki, 2020).
5 DISCUSSION AND CONCLUSION
One potential concern is that AdaBoost (Hastie et al., 2009; Freund et al., 1999) is established on the i.i.d. hypothesis, while graph data are inherently dependent. Fortunately, the statistical convergence and consistency of boosting (Lugosi & Vayatis, 2001; Mannor et al., 2003) can still be preserved when the samples are weakly dependent (Lozano et al., 2013). Further discussion can be found in Appendix A.5. In this paper, we propose a novel RNN-like deep graph neural network architecture called AdaGCN. With this carefully designed architecture, our approach can effectively explore and exploit knowledge from different orders of neighbors in an AdaBoost way. Our work paves the way towards combining different-order neighbors to design deep graph models, rather than simply stacking a specific type of graph convolution.
ACKNOWLEDGMENTS
Z. Lin is supported by NSF China (grant no.s 61625301 and 61731018), Major Scientific Research Project of Zhejiang Lab (grant no.s 2019KB0AC01 and 2019KB0AB02), Beijing Academy of Artificial Intelligence, and Qualcomm.
A APPENDIX
A.1 RELATED WORKS ON DEEP GRAPH MODELS
A straightforward solution (Kipf & Welling, 2017; Xu et al., 2018b) inspired by ResNets (He et al., 2016) was by adding residual connections, but this practice was unsatisfactory both in prediction performance and computational efficiency towards building deep graph models, as shown in our experiments in Section 4.1 and 4.3. More recently, JK (Jumping Knowledge Networks (Xu et al., 2018b)) introduced jumping connections into final aggregation mechanism in order to extract knowledge from different layers of graph convolutions. However, this straightforward change of GCN architecture exhibited inconsistent empirical performance for different aggregation operators, which cannot demonstrate the successful construction of deep layers. In addition, Graph powering-based method (Jin et al., 2019) implicitly leveraged more spatial information by extending classical spectral graph theory to robust graph theory, but they concentrated on defending adversarial attacks rather than model depth. LanczosNet (Liao et al., 2019) utilized Lanczos algorithm to construct low rank approximations of the graph Laplacian and then can exploit multi-scale information. Moreover, APPNP (Approximate Personalized Propagation of Neural Predictions, (Klicpera et al., 2018)) leveraged the relationship between GCN and personalized PageRank to derive an improved global propagation scheme. Beyond these, DeepGCNs (Li et al., 2019) directly adapted residual, dense connection and dilated convolutions to GCN architecture, but it mainly focused on the task of point cloud semantic segmentation and has not demonstrated its effectiveness in typical graph tasks. Similar to our work, Deep Adaptive Graph Neural Network (DAGNN) (Liu et al., 2020) also focused on incorporating information from large receptive fields through the entanglement of representation transformation and propagation, while our work efficiently ensembles knowledge from large receptive fields in an Adaboost manner. Other related works based on global attention models (Puny et al., 2020) and sample-based methods (Zeng et al., 2019) are also helpful to construct deep graph models.
A.2 INSUFFICIENT REPRESENTATION POWER OF ADASGC
As illustrated in Figure 5, as the number of layers increases, AdaSGC with only linear transformations has insufficient representation power, both in extracting knowledge from high-order neighbors and in combining information from different orders of neighbors, whereas AdaGCN exhibits a consistent improvement in performance as the depth grows.
A.3 PROOF OF PROPOSITION 1
We first elaborate Proposition 1 as follows and then provide the proof.
Suppose that $\gamma$ is the teleport factor. Consider the output $Z_{\mathrm{PPNP}} = \gamma(I - (1-\gamma)\hat{A})^{-1} f_\theta(X)$ of PPNP and the output $Z_{\mathrm{APPNP}}$ of its approximated version APPNP. Let the matrix sequence $\{Z^{(l)}\}$ be the output of each layer $l$ in AdaGCN. Then PPNP is equivalent to the Exponential Moving Average (EMA) with exponentially decreasing factor $\gamma$, a first-order infinite impulse response filter, applied to $\{Z^{(l)}\}$ in a parameter-sharing version, i.e., $f_\theta^{(l)} \equiv f_\theta$. In addition, APPNP, which we reformulate in Eq. 10, can be viewed as the approximation of this EMA with a limited number of terms:
$$Z_{\mathrm{APPNP}} = \Big(\gamma \sum_{l=0}^{L-1} (1-\gamma)^l \hat{A}^l + (1-\gamma)^L \hat{A}^L\Big) f_\theta(X) \qquad (10)$$
Proof. By the Neumann series expansion, $Z_{\mathrm{PPNP}}$ can be written as
$$Z_{\mathrm{PPNP}} = \gamma\big(I - (1-\gamma)\hat{A}\big)^{-1} f_\theta(X) = \gamma \sum_{l=0}^{\infty} (1-\gamma)^l \hat{A}^l f_\theta(X),$$
where the feature embedding matrices $\{Z^{(l)}\}$ for all orders of neighbors share the same parameters $f_\theta$. If we relax this sharing to an adaptive form with respect to the layer and move $\hat{A}^l$ inside $f_\theta$, then the output can be approximately formulated as
$$Z_{\mathrm{PPNP}} \approx \gamma \sum_{l=0}^{\infty} (1-\gamma)^l f_\theta^{(l)}(\hat{A}^l X).$$
This relaxed version of PPNP is the Exponential Moving Average of the matrix sequence $\{Z^{(l)}\}$ with exponentially decreasing factor $\gamma$. Moreover, if we approximate the EMA by truncating it after $L-1$ terms, the weight omitted by stopping after $L-1$ terms is $(1-\gamma)^L$. Thus, the approximated EMA is exactly the APPNP form:
$$Z_{\mathrm{APPNP}} = \Big(\gamma \sum_{l=0}^{L-1} (1-\gamma)^l \hat{A}^l + (1-\gamma)^L \hat{A}^L\Big) f_\theta(X).$$
A.4 PROOF OF PROPOSITION 2
Proof. We consider a two-layer fully-connected neural network as $f$ in Eq. 8, so the output of AdaGCN can be formulated as
$$Z = \sum_{l=0}^{L} \alpha^{(l)} \sigma(\hat{A}^l X W^{(0)}) W^{(1)}.$$
In particular, for each layer $l$ we set $W^{(0)} = \frac{b_l}{\mathrm{sign}(b_l)\,\alpha^{(l)}} I$ and $W^{(1)} = \mathrm{sign}(b_l) I$, where $\mathrm{sign}(b_l)$ denotes the sign of $b_l$. Assuming $\alpha^{(l)} > 0$ (which holds whenever the base classifier is better than random guessing), the scalar $\frac{b_l}{\mathrm{sign}(b_l)\,\alpha^{(l)}} = \frac{|b_l|}{\alpha^{(l)}}$ is positive, so the positive homogeneity of $\sigma = \mathrm{ReLU}$ gives
$$Z = \sum_{l=0}^{L} \alpha^{(l)} \sigma\Big(\hat{A}^l X \tfrac{b_l}{\mathrm{sign}(b_l)\,\alpha^{(l)}} I\Big)\, \mathrm{sign}(b_l) I = \sum_{l=0}^{L} \alpha^{(l)} \sigma(\hat{A}^l X)\, \tfrac{b_l}{\mathrm{sign}(b_l)\,\alpha^{(l)}}\, \mathrm{sign}(b_l) = \sum_{l=0}^{L} b_l\, \sigma\big(\hat{A}^l X\big).$$
The proof that GCN-based methods are not capable of representing general layer-wise neighborhood mixing has been given in MixHop (Abu-El-Haija et al., 2019). This completes the proof of Proposition 2.
A.5 EXPLANATION ABOUT CONSISTENCY OF BOOSTING ON DEPENDENT DATA
Definition 2. ($\beta$-mixing sequences.) Let $\sigma_i^j = \sigma(W_i, W_{i+1}, \ldots, W_j)$ be the $\sigma$-field generated by a strictly stationary sequence of random variables $W = (W_i, W_{i+1}, \ldots, W_j)$. The $\beta$-mixing coefficient is defined by
$$\beta_W(n) = \sup_{k}\; \mathbb{E} \sup \Big\{ \big| P(A \mid \sigma_1^k) - P(A) \big| : A \in \sigma_{k+n}^{\infty} \Big\}.$$
Then a sequence $W$ is called $\beta$-mixing if $\lim_{n\to\infty}\beta_W(n) = 0$. Further, it is algebraically $\beta$-mixing if there is a positive constant $r_\beta$ such that $\beta_W(n) = O(n^{-r_\beta})$. Definition 3. (Consistency) A classification rule is consistent for a certain distribution $P$ if $E(L(h_n)) = P\{h_n(X) \neq Y\} \to a$ as $n \to \infty$, where $a$ is a constant. It is strongly Bayes-risk consistent if $\lim_{n\to\infty} L(h_n) = a$ almost surely.
Under these definitions, the convergence and consistency of the regularized boosting method on stationary $\beta$-mixing sequences can be proved under mild assumptions. More details can be found in (Lozano et al., 2013).
A.6 EXPERIMENTAL DETAILS
Early Stopping on AdaGCN. We apply the same early stopping mechanism across all the methods as (Klicpera et al., 2018) for fair comparison. Furthermore, boosting theory also has the capacity to perfectly incorporate early stopping and it has been shown that for several boosting algorithms including AdaBoost, this regularization via early stopping can provide guarantees of consistency (Zhang et al., 2005; Jiang et al., 2004; Bühlmann & Yu, 2003).
Dataset Splitting. We choose a training set with a fixed number of nodes per class, an early stopping set of 500 nodes, and a test set consisting of the remaining nodes. Each experiment is run with 5 random initializations on each data split, leading to a total of 100 runs per experiment. In the standard setting, we randomly select 20 nodes per class. For the two different label rates on each graph, we select 6 and 11 nodes per class on Citeseer, 8 and 16 nodes per class on Cora-ML, 7 and 14 nodes per class on Pubmed, and 8 and 15 nodes per class on MS-Academic.
Model parameters. For all GCN-based approaches, we use the same hyper-parameters in the original paper: learning rate of 0.01, 0.5 dropout rate, 5× 10−4 L2 regularization weight, and 16 hidden units. For FastGCN, we adopt the officially released code to conduct our experiments. PPNP and APPNP are adapted with best setting: K = 10 power iteration steps for APPNP, teleport probability γ = 0.1 on Cora-ML, Citeseer and Pubmed, γ = 0.2 on Ms-Academic. In addition, we use two layers with h = 64 hidden units and apply L2 regularization with λ = 5 × 10−3 on the weights of the first layer and use dropout with dropout rate d = 0.5 on both layers and the adjacency matrix. The early stopping criterion uses a patience of p = 100 and an (unreachably high) maximum of n = 10000 epochs.The implementation of AdaGCN is adapted from PPNP and APPNP. Corresponding patience p = 300 and n = 500 in the early stopping of AdaGCN. Moreover, SGC is re-implemented in a straightforward way without incorporating advanced optimization for better illustration and comparison. Other baselines are adopted the same parameters described in PPNP and APPNP.
Settings on Reddit dataset. By repeatedly tuning the parameters of these typical methods on Reddit, we finally choose weight decay rate as 10−4, hidden layer size 100 and epoch 20000 for AdaGCN. For APPNP, we opt weight decay rate as 10−5, dropout rate as 0 and epoch 500. V.GCN applies the same parameters in (Kipf & Welling, 2017) and we choose epoch as 500. All approaches have not deployed early stopping due to the expensive computational cost on the large Reddit dataset, which is also a fair comparison.
A.7 CHOICE OF THE NUMBER OF LAYERS
Different from the brute-force practice in CNNs of directly stacking many convolution layers, in AdaGCN there is theoretical guidance from boosting theory on the choice of model depth $L$, i.e., the number of base classifiers or layers. Specifically, according to boosting theory, increasing $L$ exponentially decreases the empirical loss; however, from the perspective of VC-dimension, an overly large $L$ can make AdaGCN overfit. It should be noted that deeper graph convolution layers in AdaGCN are not always better, which indeed depends heavily on the complexity of the data. In practice, $L$ can be determined via cross-validation. Specifically, we present a VC-dimension-based analysis to illustrate that a too large $L$ can yield overfitting of AdaGCN. For $L$ layers of AdaGCN, its hypothesis set is
$$\mathcal{F}_L = \Big\{ \arg\max_{k} \Big( \sum_{l=1}^{L} \alpha^{(l)} f_\theta^{(l)} \Big) : \alpha^{(l)} \in \mathbb{R},\ l \in [1, L] \Big\} \qquad (11)$$
Then the VC-dimension of $\mathcal{F}_L$ can be bounded as follows in terms of the VC-dimension $d$ of the family of base hypotheses: $\mathrm{VCdim}(\mathcal{F}_L) \le 2(d+1)(L+1)\log_2((L+1)e), \; (12)$ where $e$ is Euler's number, and this upper bound grows as $L$ increases. Combined with VC-dimension generalization bounds, these results imply that larger values of $L$ can lead to overfitting of AdaBoost. The same holds for AdaGCN, which suggests that there is no need to stack too many layers in AdaGCN in order to avoid overfitting. In practice, $L$ is typically determined via cross-validation. | 1. What are the strengths and weaknesses of the proposed AdaGCN model?
2. How does the reviewer assess the novelty and effectiveness of the proposed approach?
3. Are there any concerns regarding the computational cost and robustness to noisy nodes?
4. Do the experimental results convincingly demonstrate the benefits of AdaGCN?
5. Are there any inconsistencies or unclear aspects in the presentation of the results? | Review | Review
Overall, the proposed AdaGCN model could incorporate the different hops of neighbors into the network in an Adaboost way without improving the computational cost. That has been confirmed by the theoretical comparison with other baselines and the experimental results.
Pros: [1] It proposes a novel deep graph neural network by incorporating AdaBoost into the computation of the network. [2] It compares against existing related work to illustrate the benefits of the proposed AdaGCN. [3] The experiments demonstrate the effectiveness and efficiency of AdaGCN in encoding high-order graph structure information.
Cons: [1] The simplified graph convolution might be vulnerable to the noisy nodes. That is, when there exists one noisy node with abnormal attributes, this simplified graph convolution might be significantly degraded. Thus, the higher-order convolution in the proposed method might become worse. [2] In table 2, it is confusing why the implemented methods (e.g., PPNP (ours), APPNP (ours)) have lower performance than the results reported in the literature (Klicpera et al., 2018). [3] In Section 4.3, it shows that Fast-GCN has fastest speed in Pubmed, but slower than GCN on Cora-ML and MS-Academic datasets. That might need more explanation since intuitively the goal of Fast-GCN is to improve the efficiency of GCN. |
ICLR | Title
AdaGCN: Adaboosting Graph Convolutional Networks into Deep Models
Abstract
The design of deep graph models still remains to be investigated and the crucial part is how to explore and exploit the knowledge from different hops of neighbors in an efficient way. In this paper, we propose a novel RNN-like deep graph neural network architecture by incorporating AdaBoost into the computation of network; and the proposed graph convolutional network called AdaGCN (Adaboosting Graph Convolutional Network) has the ability to efficiently extract knowledge from high-order neighbors of current nodes and then integrates knowledge from different hops of neighbors into the network in an Adaboost way. Different from other graph neural networks that directly stack many graph convolution layers, AdaGCN shares the same base neural network architecture among all “layers” and is recursively optimized, which is similar to an RNN. Besides, We also theoretically established the connection between AdaGCN and existing graph convolutional methods, presenting the benefits of our proposal. Finally, extensive experiments demonstrate the consistent state-of-the-art prediction performance on graphs across different label rates and the computational advantage of our approach AdaGCN 1.
1 INTRODUCTION
Recently, research related to learning on graph structural data has gained considerable attention in machine learning community. Graph neural networks (Gori et al., 2005; Hamilton et al., 2017; Veličković et al., 2018), particularly graph convolutional networks (Kipf & Welling, 2017; Defferrard et al., 2016; Bruna et al., 2014) have demonstrated their remarkable ability on node classification (Kipf & Welling, 2017), link prediction (Zhu et al., 2016) and clustering tasks (Fortunato, 2010). Despite their enormous success, almost all of these models have shallow model architectures with only two or three layers. The shallow design of GCN appears counterintuitive as deep versions of these models, in principle, have access to more information, but perform worse. Oversmoothing (Li et al., 2018) has been proposed to explain why deep GCN fails, showing that by repeatedly applying Laplacian smoothing, GCN may mix the node features from different clusters and makes them indistinguishable. This also indicates that by stacking too many graph convolutional layers, the embedding of each node in GCN is inclined to converge to certain value (Li et al., 2018), making it harder for classification. These shallow model architectures restricted by oversmoothing issue ∗Corresponding author. 1Code is available at https://github.com/datake/AdaGCN.
limit their ability to extract the knowledge from high-order neighbors, i.e., features from remote hops of neighbors for current nodes. Therefore, it is crucial to design deep graph models such that high-order information can be aggregated in an effective way for better predictions.
There are some works (Xu et al., 2018b; Liao et al., 2019; Klicpera et al., 2018; Li et al., 2019; Liu et al., 2020) that tried to address this issue partially, and the discussion can refer to Appendix A.1. By contrast, we argue that a key direction of constructing deep graph models lies in the efficient exploration and effective combination of information from different orders of neighbors. Due to the apparent sequential relationship between different orders of neighbors, it is a natural choice to incorporate boosting algorithm into the design of deep graph models. As an important realization of boosting theory, AdaBoost (Freund et al., 1999) is extremely easy to implement and keeps competitive in terms of both practical performance and computational cost (Hastie et al., 2009). Moreover, boosting theory has been used to analyze the success of ResNets in computer vision (Huang et al., 2018) and AdaGAN (Tolstikhin et al., 2017) has already successfully incorporated boosting algorithm into the training of GAN (Goodfellow et al., 2014).
In this work, we focus on incorporating AdaBoost into the design of deep graph convolutional networks in a non-trivial way. Firstly, in pursuit of the introduction of AdaBoost framework, we refine the type of graph convolutions and thus obtain a novel RNN-like GCN architecture called AdaGCN. Our approach can efficiently extract knowledge from different orders of neighbors and then combine these information in an AdaBoost manner with iterative updating of the node weights. Also, we compare our AdaGCN with existing methods from the perspective of both architectural difference and feature representation power to show the benefits of our method. Finally, we conduct extensive experiments to demonstrate the consistent state-of-the-art performance of our approach across different label rates and computational advantage over other alternatives.
2 OUR APPROACH: ADAGCN
2.1 ESTABLISHMENT OF ADAGCN
Consider an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with $N$ nodes $v_i \in \mathcal{V}$ and edges $(v_i, v_j) \in \mathcal{E}$. $A \in \mathbb{R}^{N \times N}$ is the adjacency matrix with corresponding degree matrix $D_{ii} = \sum_j A_{ij}$. In the vanilla GCN model (Kipf & Welling, 2017) for semi-supervised node classification, the graph embedding of nodes with two convolutional layers is formulated as:
$$Z = \hat{A}\,\mathrm{ReLU}(\hat{A} X W^{(0)})\, W^{(1)} \qquad (1)$$
where $Z \in \mathbb{R}^{N \times K}$ is the final embedding matrix (output logits) of nodes before the softmax and $K$ is the number of classes. $X \in \mathbb{R}^{N \times C}$ denotes the feature matrix, where $C$ is the input dimension. $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$, where $\tilde{A} = A + I$ and $\tilde{D}$ is the degree matrix of $\tilde{A}$. In addition, $W^{(0)} \in \mathbb{R}^{C \times H}$ is the input-to-hidden weight matrix for a hidden layer with $H$ feature maps, and $W^{(1)} \in \mathbb{R}^{H \times K}$ is the hidden-to-output weight matrix.
Our key motivation of constructing deep graph models is to efficiently explore information of highorder neighbors and then combine these messages from different orders of neighbors in an AdaBoost way. Nevertheless, if we naively extract information from high-order neighbors based on GCN, we are faced with stacking l layers’ parameter matrix W (i), i = 0, ..., l − 1, which is definitely costly in computation. Besides, Multi-Scale Deep Graph Convolutional Networks (Luan et al., 2019) also theoretically demonstrated that the output can only contain the stationary information of graph structure and loses all the local information in nodes for being smoothed if we simply deepen GCN. Intuitively, the desirable representation of node features does not necessarily need too many nonlinear transformation f applied on them. This is simply due to the fact that the feature of each node is normally one-dimensional sparse vector rather than multi-dimensional data structures, e.g., images, that intuitively need deep convolution network to extract high-level representation for vision tasks. This insight has been empirically demonstrated in many recent works (Wu et al., 2019; Klicpera et al., 2018; Xu et al., 2018a), showing that a two-layer fully-connected neural networks is a better choice in the implementation. Similarly, our AdaGCN also follows this direction by choosing an appropriate f in each layer rather than directly deepen GCN layers.
Thus, we propose to remove ReLU to avoid the expensive joint optimization of multiple parameter matrices. Similarly, Simplified Graph Convolution (SGC) (Wu et al., 2019) also adopted this prac-
tice, arguing that nonlinearity between GCN layers is not crucial and the majority of the benefits arises from local weighting of neighboring features. Then the simplified graph convolution is:
$$Z = \hat{A}^l X W^{(0)} W^{(1)} \cdots W^{(l-1)} = \hat{A}^l X \tilde{W}, \qquad (2)$$
where we collapse $W^{(0)} W^{(1)} \cdots W^{(l-1)}$ into $\tilde{W}$ and $\hat{A}^l$ denotes $\hat{A}$ to the $l$-th power. In particular, one crucial effect of ReLU in GCN is to accelerate the convergence of the repeated matrix multiplication, since ReLU is intuitively a contraction mapping. Thus, removing the ReLU operation can also alleviate the oversmoothing issue, i.e., slow the convergence of node embeddings to indistinguishable ones (Li et al., 2018). Additionally, without ReLU this simplified graph convolution also avoids the aforementioned joint optimization over multiple parameter matrices, resulting in computational benefits. Nevertheless, we find that this type of stacked linear transformation has insufficient power to represent the information of high-order neighbors, as revealed in the experiment described in Appendix A.2. Therefore, we propose to utilize an appropriate nonlinear function $f_\theta$, e.g., a two-layer fully-connected neural network, to replace the linear transformation $\tilde{W}$ in Eq. 2 and enhance the representation ability of each base classifier in AdaGCN as follows:
Z^(l) = f_θ(Â^l X),    (3)
where Z^(l) represents the final embedding matrix (output logits before Softmax) after the l-th base classifier in AdaGCN. This formulation also implies that the l-th base classifier in AdaGCN extracts knowledge from the features of the current nodes and their l-th hop of neighbors. Since the role of the l-th base classifier in AdaGCN is similar to that of the l-th layer in traditional GCN-based methods that directly stack many graph convolutional layers, we regard the l-th base classifier as the l-th layer of AdaGCN. For the realization of multi-class AdaBoost, we apply the SAMME (Stagewise Additive Modeling using a Multi-class Exponential loss function) algorithm (Hastie et al., 2009), a natural and clean multi-class extension of two-class AdaBoost that adaptively combines weak classifiers.
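As a sketch of Eq. 3, each base classifier is simply a small fully-connected network applied to the precomputed features Â^l X, and the next hop is obtained with one more multiplication by Â. The function and variable names below are illustrative assumptions, not the released implementation.

```python
import numpy as np

def base_classifier(AlX, W0, b0, W1, b1):
    """Z^(l) = f_theta(A_hat^l X): a two-layer MLP on the l-hop aggregated features."""
    H = np.maximum(AlX @ W0 + b0, 0.0)  # hidden representation with ReLU
    return H @ W1 + b1                   # N x K logits before softmax

def next_hop(A_hat, AlX):
    """A_hat^{l+1} X = A_hat (A_hat^l X): move to the (l+1)-th hop of neighbors."""
    return A_hat @ AlX
```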
As illustrated in Figure 1, we apply the base classifier f_θ^(l) to extract knowledge from the current node features and the l-th hop of neighbors by minimizing the current weighted loss. Then we directly compute the weighted error rate err^(l) and the corresponding weight α^(l) of the current base classifier f_θ^(l) as follows:
err^(l) = Σ_{i=1}^{n} w_i I(c_i ≠ f_θ^(l)(x_i)) / Σ_{i=1}^{n} w_i,

α^(l) = log((1 − err^(l)) / err^(l)) + log(K − 1),    (4)
where w_i denotes the weight of the i-th node and c_i represents the category of the i-th node. To attain a positive α^(l), we only need (1 − err^(l)) > 1/K, i.e., the accuracy of each weak classifier should be better than random guessing (Hastie et al., 2009). This is easily met and guarantees that the weights are updated in the right direction. Then we adjust the node weights by increasing the weights of incorrectly classified nodes:
w_i ← w_i · exp(α^(l) · I(c_i ≠ f_θ^(l)(x_i))),  i = 1, . . . , n    (5)
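A small numpy sketch of the SAMME bookkeeping in Eqs. 4–5 follows; `preds` and `labels` are assumed to be integer class ids for the n training nodes and `w` their current weights.

```python
import numpy as np

def samme_step(preds, labels, w, K):
    """One SAMME update: weighted error, classifier weight, and node re-weighting."""
    incorrect = (preds != labels).astype(float)
    err = np.sum(w * incorrect) / np.sum(w)                        # Eq. 4, error rate
    alpha = np.log((1.0 - err) / max(err, 1e-12)) + np.log(K - 1)  # Eq. 4, alpha^(l)
    w = w * np.exp(alpha * incorrect)                              # Eq. 5, up-weight mistakes
    return err, alpha, w / np.sum(w)                               # return re-normalized weights
```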
After re-normalizing the weights, we compute Â^(l+1) X = Â · (Â^l X) to sequentially extract knowledge from the (l+1)-th hop of neighbors in the following base classifier f_θ^(l+1). One crucial point of AdaGCN is that, different from traditional AdaBoost, we only define one f_θ, e.g., a two-layer fully-connected neural network, which in practice is recursively optimized in each base classifier, similar to a recurrent neural network. This also means that the parameters from the last base classifier are leveraged as the initialization of the next base classifier, which coincides with our intuition that the (l+1)-th hop of neighbors is directly connected to the l-th hop of neighbors. The efficacy of this kind of layer-wise training has recently been verified in (Belilovsky et al., 2018). Further, we combine the predictions from different orders of neighbors in an AdaBoost way to obtain the final prediction C(A,X):
C(A,X) = argmax_k Σ_{l=0}^{L} α^(l) f_θ^(l)(Â^l X)    (6)
Finally, we obtain the concise form of AdaGCN in the following:
Â^l X = Â · (Â^(l−1) X),
Z^(l) = f_θ^(l)(Â^l X),
Z = AdaBoost(Z^(l)).    (7)
Note that f_θ is non-linear, rather than linear as in SGC (Wu et al., 2019), to guarantee representation power. As shown in Figure 1, the architecture of AdaGCN is a variant of an RNN with synchronous sequence input and output. Although the same classifier architecture is adopted for each f_θ^(l), their parameters are different, unlike in a vanilla RNN. We provide a detailed description of our algorithm in Section 3.
2.2 COMPARISON WITH EXISTING METHODS
Architectural Difference. As illustrated in Figures 1 and 2, there is an apparent difference among the architectures of GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), Jumping Knowledge (JK) (Xu et al., 2018b) and AdaGCN. Compared with these existing graph convolutional approaches that sequentially convey the intermediate result Z^(l) to compute the final prediction, our AdaGCN transmits the node weights w_i and the aggregated features of different hops of neighbors Â^l X. More importantly, in AdaGCN the embedding Z^(l) is independent of the flow of computation in the network, and the sparse adjacency matrix Â is also not directly involved in the computation of each individual network, because we compute Â^(l+1) X in advance and then feed it, instead of Â, into the classifier f_θ^(l+1). This yields a significant computation reduction, which will be discussed further in Section 3.
Connection with PPNP and APPNP. We also establish a strong connection between AdaGCN and the previous state-of-the-art Personalized Propagation of Neural Predictions (PPNP) and Approximate PPNP (APPNP) (Klicpera et al., 2018) methods, which leverage personalized PageRank to reconstruct graph convolutions in order to use information from a large and adjustable neighborhood. The analysis is summarized in the following Proposition 1; the proof is given in Appendix A.3.
Proposition 1. Suppose that γ is the teleport factor. Let the matrix sequence {Z^(l)} be the outputs of each layer l in AdaGCN. Then PPNP is equivalent to the Exponential Moving Average (EMA) with exponentially decreasing factor γ on {Z^(l)} in a shared-parameter version, and its approximate version APPNP can be viewed as the form of this EMA truncated to a limited number of terms.
Proposition 1 illustrates that AdaGCN can be viewed as an adaptive form of APPNP, formulated as:
Z = Σ_{l=0}^{L} α^(l) f_θ^(l)(Â^l X)    (8)
Specifically, the first discrepancy between AdaGCN and APPNP lies in the adaptive coefficient α^(l) of AdaGCN, determined by the error of the l-th base classifier f_θ^(l), rather than the fixed exponentially decreasing weights of APPNP. In addition, AdaGCN employs classifiers f_θ^(l) with different parameters to learn the embedding of different orders of neighbors, while APPNP shares these parameters in its form. We verify this benefit of our approach in the experiments shown in Section 4.2.
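The contrast can be made explicit with a short sketch: APPNP combines the per-hop terms with fixed, exponentially decreasing weights, whereas AdaGCN combines per-layer predictions with the learned α^(l). The list `Z_list` holding the per-layer outputs is an assumption made for illustration.

```python
import numpy as np

def appnp_combine(Z_list, gamma):
    """Fixed weights gamma*(1-gamma)^l, plus the residual (1-gamma)^L on the last term (Eq. 10 style)."""
    L = len(Z_list) - 1
    weights = [gamma * (1 - gamma) ** l for l in range(L)] + [(1 - gamma) ** L]
    return sum(wt * Z for wt, Z in zip(weights, Z_list))

def adagcn_combine(Z_list, alphas):
    """Adaptive weights alpha^(l), each determined by the l-th base classifier's error (Eq. 8)."""
    return sum(a * Z for a, Z in zip(alphas, Z_list))
```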
Comparison with MixHop. MixHop (Abu-El-Haija et al., 2019) applies a similar form of graph convolution by repeatedly mixing feature representations of neighbors at various distances. Proposition 2 proves that both AdaGCN and MixHop are able to represent feature differences among neighbors while previous GCN-based methods cannot. The proof is given in Appendix A.4. We first recall the definition of general layer-wise Neighborhood Mixing (Abu-El-Haija et al., 2019): Definition 1. General layer-wise Neighborhood Mixing: A graph convolution network has the ability to represent layer-wise neighborhood mixing if for any b0, b1, ..., bL, there exists an injective mapping f with a setting of its parameters, such that the output of this graph convolution network can express the following formula:
f(Σ_{l=0}^{L} b_l σ(Â^l X))    (9)
Proposition 2. AdaGCN as defined by our proposed approach (Eq. 7) is capable of representing general layer-wise neighborhood mixing, i.e., it meets Definition 1.
Despite the similarity, AdaGCN differs from MixHop in several aspects. Firstly, MixHop concatenates all outputs from each order of neighbors while we combine these predictions in an AdaBoost way, which has a theoretical generalization guarantee based on boosting theory (Hastie et al., 2009). Oono & Suzuki (2020) have recently derived optimization and generalization guarantees for multi-scale GNNs, serving as a theoretical backbone of AdaGCN. Meanwhile, MixHop only allows linear mixing of different orders of neighboring features, while AdaGCN utilizes a different nonlinear transformation f_θ^(l) in each layer, enjoying stronger expressive power.
3 ALGORITHM
In practice, we employ SAMME.R (Hastie et al., 2009), the soft version of SAMME, in AdaGCN. The SAMME.R (R for Real) algorithm (Hastie et al., 2009) leverages real-valued confidence-rated predictions, i.e., weighted probability estimates, rather than the hard labels used in SAMME, when combining predictions; this has demonstrated better generalization and faster convergence than SAMME. We elaborate the final version of AdaGCN in Algorithm 1. We provide an analysis of the choice of model depth L in Appendix A.7, and we elaborate the computational advantage of AdaGCN in the following.
Analysis of Computational Advantage. Due to the similarity of its graph convolution to MixHop (Abu-El-Haija et al., 2019), AdaGCN requires no additional memory or computational complexity compared with previous GCN models. Meanwhile, our approach enjoys a huge computational advantage compared with GCN-based models, e.g., PPNP and APPNP, stemming from excluding the additional computation involving sparse tensors, such as the sparse tensor multiplication between Â and other dense tensors, in the forward and backward propagation of the neural network. Specifically, there are only L sparse tensor operations for an AdaGCN model with L layers, i.e., Â^l X = Â · (Â^(l−1) X) for each layer l. This operation in each layer yields a dense tensor
Algorithm 1 AdaGCN based on the SAMME.R Algorithm
Input: feature matrix X, normalized adjacency matrix Â, a two-layer fully-connected network fθ, number of layers L and number of classes K.
Output: final combined prediction C(A,X).
1: Initialize the node weights w_i = 1/n, i = 1, 2, ..., n on the training set, the neighbor feature matrix X̂^(0) = X and the classifier f_θ^(−1).
2: for l = 0 to L do
3:   Fit the graph convolutional classifier f_θ^(l) on the neighbor feature matrix X̂^(l), based on f_θ^(l−1), by minimizing the current weighted loss.
4:   Obtain the weighted probability estimates p^(l)(X̂^(l)) for f_θ^(l): p_k^(l)(X̂^(l)) = Softmax(f_θ^(l)(c = k | X̂^(l))), k = 1, . . . , K.
5:   Compute the individual prediction h_k^(l) for the current graph convolutional classifier f_θ^(l): h_k^(l)(X̂^(l)) ← (K − 1)(log p_k^(l)(X̂^(l)) − (1/K) Σ_{k'} log p_{k'}^(l)(X̂^(l))), k = 1, . . . , K.
6:   Adjust the node weights w_i for each node x_i with label y_i on the training set: w_i ← w_i · exp(−((K − 1)/K) · y_i^T log p^(l)(x_i)), i = 1, . . . , n.
7:   Re-normalize all weights w_i.
8:   Update the (l+1)-hop neighbor feature matrix X̂^(l+1): X̂^(l+1) = Â X̂^(l).
9: end for
10: Combine all predictions h_k^(l)(X̂^(l)) for l = 0, . . . , L: C(A,X) = argmax_k Σ_{l=0}^{L} h_k^(l)(X̂^(l)).
11: return the final combined prediction C(A,X).
B^l = Â^l X for the l-th layer, which is then fed into a two-layer fully-connected network, i.e., f_θ^(l)(B^l) = ReLU(B^l W^(0)) W^(1). Because the dense tensor B^l has been computed in advance, there is no other computation related to sparse tensors in the many forward and backward propagation passes while training the neural network. By contrast, the repeated computation involving sparse tensors in GCN-based models, e.g., GCN: Â ReLU(Â X W^(0)) W^(1), is highly expensive. AdaGCN avoids these additional sparse tensor operations in the neural network and thus attains a large computational efficiency gain. We demonstrate this point in Section 4.3.
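To illustrate how the sparse work is confined to one multiplication per layer, the following PyTorch-style sketch follows Algorithm 1; `fit_weighted` (the weighted training of the two-layer network) and all tensor names are assumptions made for readability rather than the official implementation.

```python
import torch

def adagcn_train(A_hat_sparse, X, y_train, train_idx, L, K, mlp, fit_weighted):
    """Layer-wise AdaGCN training (Algorithm 1), returning the combined prediction."""
    n = train_idx.numel()
    w = torch.full((n,), 1.0 / n)                 # node weights on the training set
    AlX = X                                       # A_hat^0 X = X
    scores = 0.0
    for l in range(L + 1):
        fit_weighted(mlp, AlX[train_idx], y_train, w)                # step 3: weighted fit
        p = torch.softmax(mlp(AlX), dim=1).clamp_min(1e-12)          # step 4: probabilities
        h = (K - 1) * (p.log() - p.log().mean(dim=1, keepdim=True))  # step 5: SAMME.R scores
        scores = scores + h
        # step 6: w_i <- w_i * exp(-(K-1)/K * y_i^T log p(x_i)), with y coded as 1 / -1/(K-1)
        logp = p[train_idx].log()
        y_code = torch.full((n, K), -1.0 / (K - 1))
        y_code[torch.arange(n), y_train] = 1.0
        w = w * torch.exp(-(K - 1) / K * (y_code * logp).sum(dim=1))
        w = w / w.sum()                                              # step 7: re-normalize
        AlX = torch.sparse.mm(A_hat_sparse, AlX)                     # step 8: one sparse op per layer
    return scores.argmax(dim=1)                                      # steps 10-11: combined prediction
```

Reusing the same `mlp` object across iterations matches the paper's layer-wise training, in which each base classifier is initialized from the previous one.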
4 EXPERIMENTS
Experimental Setup. We select five commonly used graphs: CiteSeer, Cora-ML (Bojchevski & Günnemann, 2018; McCallum et al., 2000), PubMed (Sen et al., 2008), MS-Academic (Shchur et al., 2018) and Reddit. Dataset statistics are summarized in Table 1. Recent graph neural networks suffer from overfitting to a single split of training, validation and test data (Klicpera et al., 2018). To address this problem, inspired by (Klicpera et al., 2018), we test all approaches on multiple random splits and initializations to conduct a rigorous study. Detailed dataset splits are provided in Appendix A.6.
Basic Setting of Baselines and AdaGCN. We compare AdaGCN with GCN (Kipf & Welling, 2017) and Simple Graph Convolution (SGC) (Wu et al., 2019) in Figure 3. In Table 2, we employ the same baselines as (Klicpera et al., 2018): V.GCN (vanilla GCN) (Kipf & Welling, 2017) and GCN with our early stopping, N-GCN (network of GCN) (Abu-El-Haija et al., 2018a), GAT (Graph Attention Networks) (Veličković et al., 2018), BT.FP (bootstrapped feature propagation) (Buchnik & Cohen, 2018) and JK (jumping knowledge networks with concatenation) (Xu et al., 2018b). In the computation part, we additionally compare AdaGCN with FastGCN (Chen et al., 2018) and GraphSAGE (Hamilton et al., 2017). We take the baseline results from (Klicpera et al., 2018), and the implementation of AdaGCN is adapted from APPNP. For AdaGCN, after a line search on hyper-parameters, we set h = 5000 hidden units for the first four datasets except MS-Academic with h = 3000, and 15, 12, 20 and 5 layers respectively due to the different graph structures. In addition, we set the dropout rate to 0 for the Citeseer and Cora-ML datasets and 0.2 for the other datasets, and apply 5×10^−3 L2 regularization on the first linear layer. We set weight decay to 1×10^−3 for Citeseer and 1×10^−4 for the others. More detailed model parameters and an analysis of our early stopping mechanism can be found in Appendix A.6.
4.1 DESIGN OF DEEP GRAPH MODELS TO CIRCUMVENT OVERSMOOTHING EFFECT
It is well known that GCN suffers from oversmoothing (Li et al., 2018) as more graph convolutions are stacked. However, combining knowledge from each layer to design deep graph models is a reasonable way to circumvent the oversmoothing issue. In our experiment, we explore the prediction performance of GCN, GCN with residual connections (Kipf & Welling, 2017), SGC and our AdaGCN with a growing number of layers.
From Figure 3, it can easily be observed that oversmoothing leads to a rapid decrease in accuracy for GCN (blue line) as the number of layers increases. In contrast, SGC (green line) smooths much more slowly than GCN due to the lack of ReLU, as analyzed in Section 2.1. Similarly, GCN with residual connections (yellow line) partially mitigates the oversmoothing effect of the original GCN but fails to take advantage of information from different orders of neighbors to consistently improve the prediction performance. Remarkably, AdaGCN (red line) is able to consistently enhance the performance as the number of layers increases across the three datasets. This implies that AdaGCN can efficiently incorporate knowledge from different orders of neighbors and circumvent the oversmoothing of the original GCN while constructing deep graph models. In addition, the fluctuation of performance for AdaGCN is much lower than for GCN, especially when the number of layers is large.
4.2 PREDICTION PERFORMANCE
We conduct a rigorous study of AdaGCN on four datasets under multiple dataset splits. The results in Table 2 suggest state-of-the-art performance of our approach, and the improvement compared with APPNP validates the benefit of the adaptive form in AdaGCN. More rigorously, p-values under a paired t-test demonstrate the significance of the improvement of our method.
In realistic settings, graphs usually have different numbers of labeled nodes, and it is thus necessary to investigate the robustness of methods to different numbers of labeled nodes. Here we use label rates to measure the different numbers of labeled nodes and sample the corresponding number of labeled nodes per class on each graph. Table 3 presents the consistent state-of-the-art performance of AdaGCN under different label rates. An interesting observation from Table 3 is that AdaGCN yields a larger improvement over APPNP at lower label rates, showing more efficiency on graphs with few labeled nodes. Inspired by the Layer Effect on graphs (Sun et al., 2019), we argue that the increase of layers in AdaGCN is especially beneficial for the efficient propagation of label signals on graphs with limited labeled nodes.
More rigorously, we additionally conduct the comparison on a larger dataset, i.e., Reddit. We choose the best number of layers as 4 because AdaGCN with a larger number of layers tends to suffer from overfitting on this relatively simple dataset (with a high label rate of 65.9%). Table 4 suggests that AdaGCN can still outperform other typical baselines, including V.GCN, PPNP and APPNP. More experimental details can be found in Appendix A.6.
4.3 COMPUTATIONAL EFFICIENCY
Without the additional computational cost involving sparse tensors in the propagation of the neural network, AdaGCN exhibits a large computational efficiency gain. The left part of Figure 4 shows that AdaGCN has the fastest per-epoch training time among the compared methods, except for comparable performance with FastGCN on Pubmed. In addition, the computation time of FastGCN is somewhat inconsistent, being fastest on Pubmed but slower than GCN on the Cora-ML and MS-Academic datasets. Furthermore, with multiple power iterations involving sparse tensors, APPNP unfortunately has a relatively expensive computation cost. It should be noted that this computational advantage of AdaGCN is more significant on large datasets, e.g., Reddit. Table 4 demonstrates that AdaGCN has the potential to run much faster on larger datasets.
Besides, we explore the computational cost of ReLU and of the sparse adjacency tensor with respect to the number of layers in the right part of Figure 4. We focus on comparing AdaGCN with SGC and GCN, as other GCN-based methods, such as GraphSAGE and APPNP, behave similarly to GCN. In particular, we can easily observe that both SGC (green line) and GCN (red line) show a linearly increasing tendency, and GCN yields a larger slope, which arises from ReLU and more parameters. For SGC, directly stacking more layers is undesirable with respect to computation; thus, a limited number of SGC layers is preferable together with more advanced optimization techniques (Wu et al., 2019). The figure also shows that the computational cost involving sparse matrices in neural networks dominates the overall cost, especially when the number of layers is large. In contrast, our AdaGCN (pink line) displays an almost constant trend as the number of layers increases, simply because it excludes the extra computation involving the sparse tensor Â, such as · · · Â ReLU(Â X W^(0)) W^(1) · · ·, in the process of training neural networks. AdaGCN keeps updating the parameters of f_θ^(l) with a fixed architecture in each layer during the layer-wise optimization, therefore displaying a nearly constant computation cost within each epoch, although more epochs are normally needed for the entire layer-wise training. We leave the analysis of exact time and memory complexity of AdaGCN as future work, but boosting-based algorithms, including AdaGCN, are memory-efficient (Oono & Suzuki, 2020).
5 DISCUSSIONS AND CONCLUSION
One potential concern is that AdaBoost (Hastie et al., 2009; Freund et al., 1999) is established on an i.i.d. hypothesis while graphs have an inherently data-dependent nature. Fortunately, the statistical convergence and consistency of boosting (Lugosi & Vayatis, 2001; Mannor et al., 2003) are still preserved when the samples are weakly dependent (Lozano et al., 2013). More discussion can be found in Appendix A.5. In this paper, we propose a novel RNN-like deep graph neural network architecture called AdaGCN. With this architecture design, AdaGCN can effectively explore and exploit knowledge from different orders of neighbors in an AdaBoost way. Our work paves a way towards combining different-order neighbors to design deep graph models, rather than only stacking a specific type of graph convolution.
ACKNOWLEDGMENTS
Z. Lin is supported by NSF China (grant no.s 61625301 and 61731018), Major Scientific Research Project of Zhejiang Lab (grant no.s 2019KB0AC01 and 2019KB0AB02), Beijing Academy of Artificial Intelligence, and Qualcomm.
A APPENDIX
A.1 RELATED WORKS ON DEEP GRAPH MODELS
A straightforward solution (Kipf & Welling, 2017; Xu et al., 2018b), inspired by ResNets (He et al., 2016), was to add residual connections, but this practice was unsatisfactory both in prediction performance and in computational efficiency for building deep graph models, as shown in our experiments in Sections 4.1 and 4.3. More recently, JK (Jumping Knowledge Networks (Xu et al., 2018b)) introduced jumping connections into the final aggregation mechanism in order to extract knowledge from different layers of graph convolutions. However, this straightforward change of the GCN architecture exhibits inconsistent empirical performance for different aggregation operators, and therefore does not demonstrate the successful construction of deep layers. In addition, the graph-powering-based method (Jin et al., 2019) implicitly leveraged more spatial information by extending classical spectral graph theory to robust graph theory, but it concentrated on defending against adversarial attacks rather than on model depth. LanczosNet (Liao et al., 2019) utilized the Lanczos algorithm to construct low-rank approximations of the graph Laplacian and can thus exploit multi-scale information. Moreover, APPNP (Approximate Personalized Propagation of Neural Predictions (Klicpera et al., 2018)) leveraged the relationship between GCN and personalized PageRank to derive an improved global propagation scheme. Beyond these, DeepGCNs (Li et al., 2019) directly adapted residual connections, dense connections and dilated convolutions to the GCN architecture, but it mainly focused on the task of point cloud semantic segmentation and has not demonstrated effectiveness on typical graph tasks. Similar to our work, the Deep Adaptive Graph Neural Network (DAGNN) (Liu et al., 2020) also focused on incorporating information from large receptive fields through the entanglement of representation transformation and propagation, while our work efficiently ensembles knowledge from large receptive fields in an AdaBoost manner. Other related works based on global attention models (Puny et al., 2020) and sample-based methods (Zeng et al., 2019) are also helpful for constructing deep graph models.
A.2 INSUFFICIENT REPRESENTATION POWER OF ADASGC
As illustrated in Figure 5, as the number of layers increases, AdaSGC with only linear transformations has insufficient representation power both in extracting knowledge from high-order neighbors and in combining information from different orders of neighbors, while AdaGCN exhibits a consistent improvement in performance as the number of layers increases.
A.3 PROOF OF PROPOSITION 1
We first elaborate Proposition 1 as follows, and then we provide the proof.
Suppose that γ is the teleport factor. Consider the output Z_PPNP = γ(I − (1 − γ)Â)^(−1) f_θ(X) of PPNP and Z_APPNP of its approximate version APPNP. Let the matrix sequence {Z^(l)} be the outputs of each layer l in AdaGCN. Then PPNP is equivalent to the Exponential Moving Average (EMA) with exponentially decreasing factor γ, a first-order infinite impulse response filter, on {Z^(l)} in a shared-parameter version, i.e., f_θ^(l) ≡ f_θ. In addition, APPNP, which we reformulate in Eq. 10, can be viewed as the approximated form of this EMA with a limited number of terms.
Z_APPNP = (γ Σ_{l=0}^{L−1} (1 − γ)^l Â^l + (1 − γ)^L Â^L) f_θ(X)    (10)
Proof. According to the Neumann theorem, Z_PPNP can be expanded as a Neumann series:

Z_PPNP = γ(I − (1 − γ)Â)^(−1) f_θ(X) = γ Σ_{l=0}^{∞} (1 − γ)^l Â^l f_θ(X),
where the feature embedding matrix sequence {Z^(l)} for each order of neighbors shares the same parameters f_θ. If we relax this sharing to an adaptive form with respect to the layer and move Â^l inside f_θ, then the output Z can be approximately formulated as:
Z_PPNP ≈ γ Σ_{l=0}^{∞} (1 − γ)^l f_θ^(l)(Â^l X)
This relaxed version of PPNP is the Exponential Moving Average of the matrix sequence {Z^(l)} with exponentially decreasing factor γ. Moreover, if we approximate the EMA by truncating it after L − 1 terms, then the weight omitted by stopping after L − 1 terms is (1 − γ)^L. Thus, the approximated EMA is exactly the APPNP form:
Z_APPNP = (γ Σ_{l=0}^{L−1} (1 − γ)^l Â^l + (1 − γ)^L Â^L) f_θ(X)
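A quick numpy check of this argument (a sketch over an arbitrary normalized Â, not part of the original derivation) compares the exact PPNP propagation matrix with the truncated series of Eq. 10; the two agree increasingly well as L grows, since the residual weight (1 − γ)^L vanishes.

```python
import numpy as np

def ppnp_matrix(A_hat, gamma):
    """Exact propagation gamma * (I - (1 - gamma) * A_hat)^{-1}."""
    n = A_hat.shape[0]
    return gamma * np.linalg.inv(np.eye(n) - (1 - gamma) * A_hat)

def appnp_matrix(A_hat, gamma, L):
    """Truncated series gamma * sum_{l<L} (1-gamma)^l A_hat^l + (1-gamma)^L A_hat^L (Eq. 10)."""
    n = A_hat.shape[0]
    out, power = np.zeros((n, n)), np.eye(n)
    for l in range(L):
        out += gamma * (1 - gamma) ** l * power
        power = power @ A_hat
    return out + (1 - gamma) ** L * power

# For a symmetrically normalized A_hat (spectral radius <= 1), the difference between
# the two matrices shrinks geometrically with L.
```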
A.4 PROOF OF PROPOSITION 2
Proof. We consider a two-layer fully-connected neural network as f in Eq. 8; then the output of AdaGCN can be formulated as:
Z = Σ_{l=0}^{L} α^(l) σ(Â^l X W^(0)) W^(1)
In particular, we set W^(0) = (b_l / (sign(b_l) α^(l))) I and W^(1) = sign(b_l) I, where sign(b_l) is the sign of the scalar b_l. Then the output of AdaGCN can be written as:
Z = Σ_{l=0}^{L} α^(l) σ(Â^l X · (b_l / (sign(b_l) α^(l))) I) sign(b_l) I
  = Σ_{l=0}^{L} α^(l) σ(Â^l X) · (b_l / (sign(b_l) α^(l))) · sign(b_l)
  = Σ_{l=0}^{L} b_l σ(Â^l X).

The proof that GCN-based methods are not capable of representing general layer-wise neighborhood mixing is given in MixHop (Abu-El-Haija et al., 2019). This completes the proof of Proposition 2.
A.5 EXPLANATION ABOUT CONSISTENCY OF BOOSTING ON DEPENDENT DATA
Definition 2. (β-mixing sequences.) Let σ_i^j = σ(W_i, W_{i+1}, ..., W_j) be the σ-field generated by a strictly stationary sequence of random variables W = (W_i, W_{i+1}, ..., W_j). The β-mixing coefficient is defined by:

β_W(n) = sup_k E sup{ |P(A | σ_1^k) − P(A)| : A ∈ σ_{k+n}^∞ }.

Then a sequence W is called β-mixing if lim_{n→∞} β_W(n) = 0. Further, it is algebraically β-mixing if there is a positive constant r_β such that β_W(n) = O(n^{−r_β}). Definition 3. (Consistency) A classification rule is consistent for a certain distribution P if E(L(h_n)) → a as n → ∞, where L(h_n) = P{h_n(X) ≠ Y} and a is a constant. It is strongly Bayes-risk consistent if lim_{n→∞} L(h_n) = a almost surely.
Under these definitions, the convergence and consistency of the regularized boosting method on stationary β-mixing sequences can be proved under mild assumptions. More details can be found in (Lozano et al., 2013).
A.6 EXPERIMENTAL DETAILS
Early Stopping on AdaGCN. We apply the same early stopping mechanism across all methods as (Klicpera et al., 2018) for a fair comparison. Furthermore, boosting theory naturally incorporates early stopping: it has been shown that for several boosting algorithms, including AdaBoost, this regularization via early stopping provides guarantees of consistency (Zhang et al., 2005; Jiang et al., 2004; Bühlmann & Yu, 2003).
Dataset Splitting. We choose a training set with a fixed number of nodes per class, an early stopping set of 500 nodes, and a test set of the remaining nodes. Each experiment is run with 5 random initializations on each data split, leading to a total of 100 runs per experiment. In the standard setting, we randomly select 20 nodes per class. For the two different label rates on each graph, we select 6 and 11 nodes per class on Citeseer, 8 and 16 nodes per class on Cora-ML, 7 and 14 nodes per class on Pubmed, and 8 and 15 nodes per class on the MS-Academic dataset.
Model parameters. For all GCN-based approaches, we use the same hyper-parameters as in the original paper: learning rate of 0.01, 0.5 dropout rate, 5×10^−4 L2 regularization weight, and 16 hidden units. For FastGCN, we adopt the officially released code to conduct our experiments. PPNP and APPNP are run with their best settings: K = 10 power iteration steps for APPNP, teleport probability γ = 0.1 on Cora-ML, Citeseer and Pubmed, and γ = 0.2 on MS-Academic. In addition, we use two layers with h = 64 hidden units, apply L2 regularization with λ = 5×10^−3 on the weights of the first layer, and use dropout with rate d = 0.5 on both layers and on the adjacency matrix. The early stopping criterion uses a patience of p = 100 and an (unreachably high) maximum of n = 10000 epochs. The implementation of AdaGCN is adapted from PPNP and APPNP; the corresponding patience is p = 300 and n = 500 for the early stopping of AdaGCN. Moreover, SGC is re-implemented in a straightforward way without incorporating advanced optimization, for better illustration and comparison. Other baselines adopt the same parameters as described in PPNP and APPNP.
Settings on the Reddit dataset. After repeatedly tuning the parameters of these typical methods on Reddit, we set the weight decay rate to 10^−4, the hidden layer size to 100 and the number of epochs to 20000 for AdaGCN. For APPNP, we set the weight decay rate to 10^−5, the dropout rate to 0 and the number of epochs to 500. V.GCN uses the same parameters as in (Kipf & Welling, 2017) and we set the number of epochs to 500. No approach deploys early stopping, due to the expensive computational cost on the large Reddit dataset, which also keeps the comparison fair.
A.7 CHOICE OF THE NUMBER OF LAYERS
Different from the “forcible” behavior of CNNs that directly stack many convolution layers, in our AdaGCN there is theoretical guidance on the choice of model depth L, i.e., the number of base classifiers or layers, derived from boosting theory. Specifically, according to boosting theory, increasing L exponentially decreases the empirical loss; however, from the perspective of VC-dimension, an overly large L can yield overfitting of AdaGCN. It should be noted that deeper graph convolution layers in AdaGCN are not always better, which indeed depends heavily on the complexity of the data. In practice, L can be determined via cross-validation. Specifically, we provide a VC-dimension-based analysis to illustrate that too large an L can yield overfitting of AdaGCN. For L layers of AdaGCN, the hypothesis set is
F_L = { argmax_k ( Σ_{l=1}^{L} α^(l) f_θ^(l) ) : α^(l) ∈ R, l ∈ [1, L] }    (11)
Then the VC-dimension of F_L can be bounded as follows in terms of the VC-dimension d of the family of base hypotheses:

VCdim(F_L) ≤ 2(d + 1)(L + 1) log_2((L + 1)e),    (12)

where e is a constant and the upper bound grows as L increases. Combined with VC-dimension generalization bounds, these results imply that larger values of L can lead to overfitting of AdaBoost. The same holds for AdaGCN, which suggests that there is no need to stack too many layers in AdaGCN in order to avoid overfitting. In practice, L is typically determined via cross-validation. | 1. What is the main contribution of the paper regarding graph neural networks?
2. What are the strengths and weaknesses of the proposed AdaGCN algorithm compared to other GNN variants?
3. Do you have any concerns about the clarity and organization of the paper?
4. How does the reviewer assess the performance and efficiency of the proposed method, particularly in comparison with recent algorithms like ClusterGCN and GraphSAGE?
5. Are there any issues with notation and terminology used in the paper that may cause confusion for readers? | Review | Review
By integrating AdaBoosting and a fully connected layer, this paper provides a new graph neural network structure. The objective of this paper is to design deeper graph models in an efficient way for better performance. The computational efficiency and performance of the proposed algorithm are evaluated using the task of node property prediction on several public datasets. This is a new variant of GNN, but the quality of this paper is lower than expected with regard to clarity and organisation.
Pros: 1. The algorithm integrated Adaboosting for graph data. Thus, AdaGCN could utilise different levels of node features for final prediction. 2. The method is optimised in a layer-wise way rather than the traditional GCN optimisation, which is similar to the optimisation of recurrent neural networks. 3. Authors compared the structure of AdaGCN with that of other GNN variants. 4. For the experiments, the proposed algorithm is more computationally efficient, and achieves better performance on the task of node property prediction. The performance of AdaGCN is slightly more robust than previous methods. The performance drop is not observed within 10 layers for AdaGCN as shown in Fig 3.
Cons: 1. Speaking of the state of art performance, GraphSAGE with LSTM also achieves a 95.4% F1 score on the Reddit dataset for node classification tasks. Thus, the authors may need to compare the training time and performance with more recent algorithms, like ClusterGCN and GraphSAGE. 2. The paper is not well written. Many typos are discovered. For example, extra space is added in the first sentence after equation 3. Meanwhile, the punctuation around equations is not consistent. For the full sentence following an equation, one would place a full stop after the equation. However, there is no full stop after equations 5, 6, 7, and 8. Abbreviations, such as "JK", "APPNP", and "PPNP", are used before introduction. 3. Some notations are confusing and misleading.
K refers to the number of node categories, and k refers to a category of a node. Meanwhile, w_i and W^(l) have completely irrelevant definitions. 4. To evaluate the efficiency of different GCN approaches, the authors listed the per-epoch training time of the methods. The implementation of GCN with different frameworks would result in a large variance of training time. It would be better if the authors could include the time and memory complexity of each algorithm. 5. In Fig 4, after 10 layers, it is not clear whether the linear trend would continue. This result is a bit misleading. |
ICLR | Title
Learning from others' mistakes: Avoiding dataset biases without modeling them
Abstract
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended underlying task. Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available. We consider cases where the bias issues may not be explicitly identified, and show a method for training models that learn to ignore these problematic correlations. Our approach relies on the observation that models with limited capacity primarily learn to exploit biases in the dataset. We can leverage the errors of such limited capacity models to train a more robust model in a product of experts, thus bypassing the need to hand-craft a biased model. We show the effectiveness of this method to retain improvements in out-of-distribution settings even if no particular bias is targeted by the biased model.
1 INTRODUCTION
The natural language processing community has made tremendous progress in using pre-trained language models to improve predictive accuracy (Devlin et al., 2019; Raffel et al., 2019). Models have now surpassed human performance on language understanding benchmarks such as SuperGLUE (Wang et al., 2019). However, studies have shown that these results are partially driven by these models detecting superficial cues that correlate well with labels but which may not be useful for the intended underlying task (Jia & Liang, 2017; Schwartz et al., 2017). This brittleness leads to overestimating model performance on the artificially constructed tasks and poor performance in out-of-distribution or adversarial examples.
A well-studied example of this phenomenon is the natural language inference dataset MNLI (Williams et al., 2018). The generation of this dataset led to spurious surface patterns that correlate noticeably with the labels. Poliak et al. (2018) highlight that negation words (“not”, “no”, etc.) are often associated with the contradiction label. Gururangan et al. (2018), Poliak et al. (2018), and Tsuchiya (2018) show that a model trained solely on the hypothesis, completely ignoring the intended signal, reaches strong performance. We refer to these surface patterns as dataset biases since the conditional distribution of the labels given such biased features is likely to change in examples outside the training data distribution (as formalized by He et al. (2019)).
A major challenge in representation learning for NLP is to produce models that are robust to these dataset biases. Previous work (He et al., 2019; Clark et al., 2019; Mahabadi et al., 2020) has targeted removing dataset biases by explicitly factoring them out of models. These studies explicitly construct a biased model, for instance, a hypothesis-only model for NLI experiments, and use it to improve the robustness of the main model. The core idea is to encourage the main model to find a different explanation where the biased model is wrong. During training, products-of-experts ensembling (Hinton, 2002) is used to factor out the biased model.
While these works show promising results, the assumption of knowledge of the underlying dataset bias is quite restrictive. Finding dataset biases in established datasets is a costly and time-consuming process, and may require access to private details about the annotation procedure, while actively reducing surface correlations in the collection process of new datasets is challenging given the number of potential biases (Zellers et al., 2019; Sakaguchi et al., 2020).

∗Supported by the Viterbi Fellowship in the Center for Computer Engineering at the Technion
In this work, we explore methods for learning from biased datasets which do not require such an explicit formulation of the dataset biases. We first show how a model with limited capacity, which we call a weak learner, trained with a standard cross-entropy loss learns to exploit biases in the dataset. We then investigate the biases on which this weak learner relies and show that they match several previously manually identified biases. Based on this observation, we leverage such limited capacity models in a product of experts ensemble to train a more robust model and evaluate our approach in various settings ranging from toy datasets up to large crowd-sourced benchmarks: a controlled synthetic bias setup (He et al., 2019; Clark et al., 2019), natural language inference (McCoy et al., 2019b), extractive question answering (Jia & Liang, 2017) and fact verification (Schuster et al., 2019).
Our contributions are the following: (a) we show that weak learners are prone to relying on shallow heuristics and highlight how they rediscover previously human-identified dataset biases; (b) we demonstrate that we do not need to explicitly know or model dataset biases to train more robust models that generalize better to out-of-distribution examples; (c) we discuss the design choices for weak learners and show trade-offs between higher out-of-distribution performance at the expense of the in-distribution performance.
2 RELATED WORK
Many studies have reported dataset biases in various settings. Examples include visual question answering (Jabri et al., 2016; Zhang et al., 2016), story completion (Schwartz et al., 2017), and reading comprehension (Kaushik & Lipton, 2018; Chen et al., 2016). Towards better evaluation methods, researchers have proposed to collect “challenge” datasets that account for surface correlations a model might adopt (Jia & Liang, 2017; McCoy et al., 2019b). Standard models without specific robust training methods often drop in performance when evaluated on these challenge sets.
While these works have focused on data collection, another approach is to develop methods allowing models to ignore dataset biases during training. Several active areas of research tackle this challenge by adversarial training (Belinkov et al., 2019a;b; Stacey et al., 2020), example forgetting (Yaghoobzadeh et al., 2019) and dynamic loss adjustment (Cadène et al., 2019). Previous work (He et al., 2019; Clark et al., 2019; Mahabadi et al., 2020) has shown the effectiveness of product of experts to train un-biased models. In our work, we show that we do not need to explicitly model biases to apply these de-biasing methods and can use a more general setup than previously presented.
Orthogonal to these evaluation and optimization efforts, data augmentation has attracted interest as a way to reduce model biases by explicitly modifying the dataset distribution (Min et al., 2020; Belinkov & Bisk, 2018), either by leveraging human knowledge about dataset biases such as swapping male and female entities (Zhao et al., 2018) or by developing dynamic data collection and benchmarking (Nie et al., 2020). Our work is mostly orthogonal to these efforts and alleviates the need for a human-in-the-loop setup which is common to such data-augmentation approaches.
Large pre-trained language models have contributed to improved out-of-distribution generalization (Hendrycks et al., 2020). However, in practice, that remains a challenge in natural language processing (Linzen, 2020; Yogatama et al., 2019) and our work aims at out-of-distribution robustness without significantly compromising in-distribution performance.
Finally, in parallel to our work, Utama et al. (2020) present a related de-biasing method that leverages the mistakes of weakened models without the need to explicitly model dataset biases. Our approach differs in several ways; in particular, we advocate for using a limited-capacity weak learner, while Utama et al. (2020) use the same architecture as the robust model, trained on a few thousand examples. We investigate the trade-off between the learner's capacity and the resulting performance, as well as the resulting few-shot learning regime in the limit of a high-capacity weak model.
3 METHOD
3.1 OVERVIEW
Our approach utilizes product of experts (Hinton, 2002) to factor dataset biases out of a learned model. We have access to a training set (xi, yi)1≤i≤N where each example xi has a label yi among K classes. We use two models fW (weak) and fM (main) which produce respective logits vectors w and m ∈ RK . The product of experts ensemble of fW and fM produces logits vector e
∀ 1 ≤ j ≤ K,  e_j = w_j + m_j.    (1) Equivalently, we have softmax(e) ∝ softmax(w) ⊙ softmax(m), where ⊙ is the element-wise multiplication.
Our training approach can be decomposed in two successive stages: (a) training the weak learner fW with a standard cross-entropy loss (CE) and (b) training a main (robust) model fM via product of experts (PoE) to learn from the errors of the weak learner. The core intuition of this method is to encourage the robust model to learn to make predictions that take into account the weak learner’s mistakes.
We do not make any assumption on the biases present (or not) in the dataset and rely on letting the weak learner discover them during training. Moreover, in contrast to prior work (Mahabadi et al., 2020; He et al., 2019; Clark et al., 2019) in which the weak learner had a hand-engineered bias-specific structure, our approach does not make any specific assumption on the weak learner such as its architecture, capacity, pre-training, etc. The weak learner fW is trained with standard cross-entropy.
The final goal is producing main model fM . After training, the weak model fW is frozen and used only as part of the product of experts. Since the weak model is frozen, only the main model fM receives gradient updates during training. This is similar to He et al. (2019); Clark et al. (2019) but differs from Mahabadi et al. (2020) who train both weak and main models jointly. For convenience, we refer to the cross-entropy of the prediction e of Equation 1 as the PoE cross-entropy.
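A minimal PyTorch sketch of one PoE training step is shown below, under the assumption that both models map a batch to (B, K) logit tensors; only the main model is updated.

```python
import torch
import torch.nn.functional as F

def poe_step(main_model, weak_model, optimizer, batch, labels):
    """Train the main model against the ensemble logits e = w + m (Eq. 1)."""
    with torch.no_grad():              # the weak learner is frozen
        w = weak_model(batch)          # weak logits, shape (B, K)
    m = main_model(batch)              # main logits, shape (B, K)
    loss = F.cross_entropy(w + m, labels)   # the "PoE cross-entropy"
    optimizer.zero_grad()
    loss.backward()                    # gradients flow only into the main model
    optimizer.step()
    return loss.item()
```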
3.2 ANALYSIS: THE ROBUST MODEL LEARNS FROM THE ERRORS OF THE WEAK LEARNER
To better explore the impact of PoE training with a weak learner, we consider the special case of binary classification with logistic regression. Here w and m are scalar logits and the softmax becomes a sigmoid. The loss of the product of experts for a single positive example is:
L_PoE,binary = −m − w + log(1 + exp(m + w))    (2)
Logit w is a fixed value since the weak learner is frozen. We also define the entropy of the weak learner as H_w = −p log(p) − (1 − p) log(1 − p), where p = σ(w), as our measure of certainty. Different values of w from the weak learner induce different gradient updates in the main model. Figure 1a shows the gradient update of the main model logit m. Each of the three curves corresponds to a different value of w from the weak model.
• Weak Model is Certain / Incorrect: the first case (in blue) corresponds to low values of w. The entropy is low and the loss of the weak model is high. The main model receives gradients even when it is classifying the point correctly (≈ m = 5) which encourages m to compensate for the weak model’s mistake.
• Weak Model is Uncertain: the second case (in red) corresponds to w = 0, which means the weak model's entropy is high (uniform probability over all classes). In this case, the product of experts is equal to the main model, and the gradient is equal to the one obtained with cross-entropy.
• Weak Model is Certain / Correct: the third case (in green) corresponds to high values of w. The entropy is low and the loss of the weak model is low. In this case, m’s gradients are “cropped” early on and the main model receives less gradients on average. When w is extremely high, m receives no gradient (and the current example is simply ignored).
Put another way, the logit values for whichm receives gradients are shifted according the correctness and certainty of the weak model. Figure 1b shows the concentration of training examples of MNLI
(Williams et al., 2018) projected on the 2D coordinates (correctness, certainty) from a trained weak learner (described in Section 4.1). We observe that there are many examples for the 3 cases. More crucially, we verify that the group certain / incorrect is not empty since the examples in this group encourage the model to not rely on the dataset biases.
(a) Gradient update of m for different values of w on binary classification. (b) 2D projection of MNLI examples from a trained weak learner. Colors indicate the concentration and are in log scale.
Figure 1: The analysis of the gradients reveals 3 regimes where the gradient is shifted by the certainty and correctness of the weak learner. These 3 regions are present in real dataset such as MNLI.
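The three regimes can be read off directly from the derivative of Eq. 2: for a positive example, ∂L_PoE/∂m = σ(m + w) − 1, so the frozen weak logit w simply shifts the range of m over which the main model still receives gradient. A short numpy sketch (with illustrative values of w) makes this explicit.

```python
import numpy as np

def poe_grad_wrt_m(m, w):
    """Gradient of Eq. 2 w.r.t. the main-model logit m: sigmoid(m + w) - 1 (positive label)."""
    return 1.0 / (1.0 + np.exp(-(m + w))) - 1.0

m = np.linspace(-6.0, 6.0, 7)
for w in (-5.0, 0.0, 5.0):  # certain/incorrect, uncertain, certain/correct weak model
    print(f"w = {w:+.0f}:", np.round(poe_grad_wrt_m(m, w), 3))
```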
3.3 CONNECTION TO DISTILLATION
Our product of experts setup bears similarities with knowledge distillation (Hinton et al., 2015) where a student network is trained to mimic the behavior of a frozen teacher model.
In our PoE training, we encourage the main model fM (analog to the student network) to learn from the errors of the weak model fW (analog to the teacher network): instead of mimicking, it learns an orthogonal behavior when the teacher is incorrect. To recognize errors from the weak learner, we use the gold labels which alleviates the need to use pseudo-labelling or data augmentation as it is commonly done in distillation setups (Furlanello et al., 2018; Xie et al., 2020).
Similarly to Hinton et al. (2015), our final loss is a linear combination of the original cross-entropy loss (CE) and the PoE cross-entropy. We refer to this multi-loss objective as PoE + CE.
4 EXPERIMENTS
We consider several different experimental settings that explore the use of a weak learner to isolate and train against dataset biases. All the experiments are conducted on English datasets, and follow the standard setup for BERT training. Our main model is BERT-base (Devlin et al., 2019) with 110M parameters. Except when indicated otherwise, our weak learner is a significantly smaller pre-trained masked language model known as TinyBERT (Turc et al., 2019) with 4M parameters (2 layers, hidden size of 128). The weak learner is fine-tuned on exactly the same data as our main model. For instance, when trained on MNLI, it gets a 67% accuracy on the matched development set (compared to 84% for BERT-base).
Part of our discussion relies on natural language inference, which has been widely studied in this area. The classification task is to determine whether a hypothesis statement is true (entailment), false (contradiction) or undetermined (neutral) given a premise statement. MNLI (Williams et al., 2018) is the canonical large-scale English dataset to study this problem with 433K labeled examples. For evaluation, it features matched sets (examples from domains encountered in training) and mismatched sets (domains not seen during training).
Experiments first examine qualitatively the spurious correlations picked up by the method. We then verify the validity of the method on a synthetic experimental setup. Finally, we verify the impact of our method by evaluating robust models on several out-of-distribution sets and discuss the choice of the weak learner.
4.1 WEAK LEARNERS REDISCOVER PREVIOUSLY REPORTED DATASET BIASES
Most approaches for circumventing dataset bias require modeling the bias explicitly, for example using a model limited to only the hypothesis in NLI (Gururangan et al., 2018). These approaches are effective, but require isolating specific biases present in a dataset. Since this process is costly, time consuming and error-prone, it is unrealistic to expect such analysis for all new datasets. On the contrary, we hypothesize that weak learners might operate like rapid surface learners (Zellers et al., 2019), picking up on dataset biases without specific signal or input curation and being rather certain of their biased errors (high certainty on the biased prediction errors).
We first investigate whether our weak learner re-discovers two well-known dataset biases reported on NLI benchmarks: (a) the presence of a negative word in the hypothesis is highly correlated with the contradiction label (Poliak et al., 2018; Gururangan et al., 2018), (b) high word overlap between the premise and the hypothesis is highly correlated with the entailment label (McCoy et al., 2019b).
To this aim, we fine-tune a weak learner on MNLI (Williams et al., 2018). Hyper-parameters can be found in Appendix A.1. We extract and manually categorize 1,000 training examples wrongly predicted by the weak learner (with a high loss and a high certainty). Table 1 breaks them down per category. Half of these incorrect examples are wrongly predicted as Contradiction and almost all of these contain a negation1 in the hypothesis. Another half of the examples are incorrectly predicted as Entailment, a majority of these presenting a high lexical overlap between the premise and the hypothesis (5 or more words in common). The weak learner thus appears to predict with high certainty a Contradiction label whenever the hypothesis contains a negative word, and with high certainty an Entailment label whenever there is a strong lexical overlap between premise and hypothesis. Table 6 in Appendix A.3 presents qualitative examples of dataset biases identified by the fine-tuned weak learner.
This analysis is based on a set of biases referenced in the literature and does not exclude the possibility of other biases being detected by the weak learner. For instance, during this investigation we notice that the presence of “negative sentiment” words (for instance: dull, boring) in the hypothesis appears to be often indicative of a Contradiction prediction. We leave further investigation on such behaviors to future work.
4.2 SYNTHETIC EXPERIMENT: CHEATING FEATURE
We consider a controlled synthetic experiment described in He et al. (2019); Clark et al. (2019) that simulates bias. We modify 20,000 MNLI training examples by injecting a cheating feature which encodes an example’s label with probability pcheat and a random label selected among the two incorrect labels otherwise. For simplicity, we consider the first 20,000 examples. On the evaluation sets, the cheating feature is random and does not convey any useful information. In the present experiment, the cheating feature takes the form of a prefix added to the hypothesis (“0” for Contradiction, “1” for Entailment, “2” for Neutral). We train the weak and main models on these 20,000 examples and evaluate their accuracy on the matched development set.2 We expect a biased model to rely mostly on the cheating feature thereby leading to poor evaluation performance.
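The construction can be sketched as a simple preprocessing step; the dictionary fields and the three-way label encoding below are assumptions made for illustration.

```python
import random

def inject_cheating_feature(examples, p_cheat, seed=0):
    """Prefix each hypothesis with the true label with probability p_cheat,
    and with a randomly chosen incorrect label otherwise."""
    rng = random.Random(seed)
    out = []
    for ex in examples:   # each ex: {"premise": ..., "hypothesis": ..., "label": 0/1/2}
        if rng.random() < p_cheat:
            cheat = ex["label"]
        else:
            cheat = rng.choice([lab for lab in (0, 1, 2) if lab != ex["label"]])
        out.append({**ex, "hypothesis": f"{cheat} {ex['hypothesis']}"})
    return out
```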
Figure 2 shows the results. As the proportion of examples containing the bias increases, the evaluation accuracy of the weak learner quickly decreases to reach 0% when pcheat = 0.9. The weak learner detects the cheating feature during training and is mainly relying on the synthetic bias which is not directly indicative of the gold label.

1We use the following list of negation words: no, not, none, nothing, never, aren't, isn't, weren't, neither, don't, didn't, doesn't, cannot, hasn't, won't.

2We observe similar trends on the mismatched development set.
Both He et al. (2019) and Clark et al. (2019) protect against the reliance on this cheating feature by ensembling the main model with a biased model that only uses the hypothesis (or its first token). We instead train the main model in the product of experts setting, relying on the weak learner to identify the bias. Figure 2 shows that when a majority of the training examples contain the bias (pcheat ≥ 0.6), the performance of the model trained with cross-entropy drops faster than the one trained with PoE. PoE training leads to a more robust model by encouraging it to learn from the mistakes of the weak learner. As pcheat comes close to 1, the model's training enters a “few-shot regime” where there are very few incorrectly predicted biased examples to learn from (examples where following the biased heuristic leads to a wrong answer), and the performance of the model trained with PoE drops as well.
4.3 ADVERSARIAL DATASETS: NLI AND QA
NLI The HANS adversarial dataset (McCoy et al., 2019b) was constructed by writing templates to generate examples with a high premise/hypothesis word overlap to attack models that rely on this bias. In one template the word overlap generates entailed premise/hypothesis pairs (heuristic-entailed examples), whereas in another the examples contradict the heuristic (nonheuristic-entailed). The dataset contains 30K evaluation examples equally split between both.
Table 2 shows that the weak learner exhibits medium performance on the in-distribution sets (MNLI) and that on out-of-distribution evaluation (HANS), it relies heavily on the word overlap heuristic. Product of experts training is effective at reducing the reliance on biases and leads to significant gains on the heuristic-non-entailed examples when compared to a model trained with standard cross-entropy, with an improvement of +24%.
The small degradation on in-distribution data is likely because product of experts training does not specialize for in-distribution performance but focuses on the weak model errors (He et al., 2019). The linear combination of the original cross-entropy loss and the product of experts loss (PoE + CE) aims at counteracting this effect. This multi-loss objective trades off out-of-distribution generalization for in-distribution accuracy. A similar trade-off between accuracy and robustness has been reported in adversarial training (Zhang et al., 2019; Tsipras et al., 2019). In Appendix A.6, we detail the influence of this multi-loss objective.
We also evaluate our method on MNLI’s hard test set (Gururangan et al., 2018) which is expected to be less biased than MNLI’s standard split. These examples are selected such that a hypothesis-only model cannot predict the label accurately. Table 2 shows the results of this experiment. Our method surpasses the performance of a PoE model trained with a hypothesis-only biased learner. Results on the mismatched set are given in Appendix A.4.
QA Question answering models often rely on heuristics such as type and keyword-matching (Weissenborn et al., 2017) that can do well on benchmarks like SQuAD (Rajpurkar et al., 2016). We evaluate on the Adversarial SQuAD dataset (Jia & Liang, 2017) built by appending distractor sentences to the passages in the original SQuAD. Distractors are constructed such that they look like a plausible answer to the question while not changing the correct answer or misleading humans.
Results on SQuAD v1.1 and Adversarial SQuAD are listed in Table 3. The weak learner alone has low performance both on in-distribution and adversarial sets. PoE training improves the adversarial performance (+1% on AddSent) while sacrificing some in-distribution performance. A multi-loss
optimization closes the gap and even boosts adversarial robustness (+3% on AddSent and +2% on AddOneSent). In contrast to our experiments on MNLI/HANS, multi-loss training thus leads here to better performance on out-of-distribution data as well. We hypothesize that in this dataset, the weak learner picks up more useful information and removing it entirely might be non-optimal. Multi-loss in this case allows us to strike a balance between learning from, or removing, the weak learner.
5 ANALYSIS
5.1 REDUCING BIAS: CORRELATION ANALYSIS
To investigate the behavior of the ensemble of the weak and main learner, we compute the Pearson correlation between the element-wise loss of the weak (biased) learner and the loss of the trained models following Mahabadi et al. (2020). A correlation of 1 indicates a linear relation between the per-example losses (the two learners make the same mistakes), and 0 indicates the absence of linear correlation (models’ mistakes are uncorrelated). Figure 3 shows that models trained with a linear combination of the PoE cross-entropy and the standard cross-entropy have a higher correlation than when trained solely with PoE. This confirms that PoE training is effective at reducing biases uncovered by the weak learner and re-emphasizes that adding standard cross-entropy leads to a trade-off between the two.
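The quantity reported in Figure 3 can be computed as a one-liner once the per-example losses are available; the sketch below assumes both loss vectors were already collected on the same set of examples.

```python
import numpy as np

def loss_correlation(weak_losses, main_losses):
    """Pearson correlation of per-example losses: 1.0 = same mistakes, 0.0 = uncorrelated."""
    return np.corrcoef(np.asarray(weak_losses), np.asarray(main_losses))[0, 1]
```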
5.2 HOW WEAK DO THE WEAK LEARNERS NEED TO BE?
We consider parameter size as a measure of the capacity or ”weakness” of the weak learner. We fine-tune different sizes of BERT (Turc et al., 2019) ranging from 4.4 to 41.4 million parameters and use these as weak models in a PoE setting. Figure 4b shows the accuracies on MNLI and HANS of the weak learners and main models trained with various weak learners.
Varying the capacity of the weak models affects both in-distribution and out-of-distribution performance. Out-of-distribution performance of the main model increases as the weak model becomes stronger (more parameters), up to a certain point, while in-distribution performance drops slightly at first and then more strongly. When trained jointly with the larger MediumBERT weak learner (41.4 million parameters), the main model gets 97% accuracy on HANS's heuristic-non-entailed set but a very low accuracy on the in-distribution examples (28% on MNLI and 3% on the heuristic-entailed examples).
As a weak model grows in capacity, it becomes a better learner. The average loss decreases and the model becomes more confident in its predictions. As a result, the group
certain / correct becomes more populated and the main model receives, on average, a smaller gradient magnitude per input. Conversely, the certain / incorrect group (which generally aligns with out-of-distribution samples and induces higher-magnitude gradient updates, encouraging generalization at the expense of in-distribution performance) becomes less populated. These results corroborate and complement insights from Yaghoobzadeh et al. (2019). This is also reminiscent of findings from Vodrahalli et al. (2018) and Shrivastava et al. (2016): not all training samples contribute equally towards learning, and in some cases a carefully selected subset of the training set is sufficient to match (or surpass) the performance on the whole set.
5.3 DE-BIASING IS STILL EFFECTIVE WHEN DATASET BIASES ARE UNKNOWN OR HARD TO DETECT
While it is difficult to enumerate all sources of bias, we focus in this work on superficial cues that correlate with the label in the training set but do not transfer. These superficial cues correlate with what can be captured by a weak model. For instance, Conneau et al. (2018) suggest that word presence can be detected with very shallow networks (a linear classifier on top of FastText bag-of-words features), as such networks show very high accuracy on Word Content, the probing task of detecting which of the 1,000 target words is present in a given sentence.
To verify that a weak model is still effective with unknown or hard-to-detect biases, we consider an example where the bias is only present in a small portion of the training set. We remove from the MNLI training set all the examples (192K) that exhibit one of the two biases detailed in Section 4.1: high word overlap between premise and hypothesis with an entailment label, and negation in the hypothesis with a contradiction label. We are left with 268K training examples.
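One possible implementation of this filtering step is sketched below in Python; the field names, tokenization and 5-word overlap threshold are illustrative assumptions rather than the paper's exact code.

NEGATION_WORDS = {"no", "not", "none", "nothing", "never", "aren't", "isn't",
                  "weren't", "neither", "don't", "didn't", "doesn't", "cannot",
                  "hasn't", "won't"}

def exhibits_targeted_bias(example, overlap_threshold=5):
    # Flag an example if it shows one of the two surface cues from Section 4.1:
    # negation in the hypothesis paired with a contradiction label, or high
    # premise/hypothesis word overlap paired with an entailment label.
    hyp = set(example["hypothesis"].lower().split())
    prem = set(example["premise"].lower().split())
    negation_bias = example["label"] == "contradiction" and bool(NEGATION_WORDS & hyp)
    overlap_bias = (example["label"] == "entailment"
                    and len(hyp & prem) >= overlap_threshold)
    return negation_bias or overlap_bias

def remove_biased_examples(dataset):
    return [ex for ex in dataset if not exhibits_targeted_bias(ex)]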
We apply our de-biasing method with these examples as our training set. For comparison, we train a main model with standard cross-entropy on the same subset of selected examples. Our results are shown in Table 4 and confirm on HANS that our de-biasing method is still effective even when the bias is hard to detect. Note that the accuracies on MNLI cannot be directly compared to the results in Table 2: the class imbalance in the selected subset of examples leads to a harder optimization problem, explaining the difference in performance.
We present complementary analyses in the Appendix. To further show the effectiveness of our method, we include in Appendix A.2 an additional experiment on fact verification (Thorne et al., 2018; Schuster et al., 2019). In Appendix A.5, we evaluate the ability of our method to generalize to other domains that do not share the same annotation artifacts. We highlight the trade-off between in-distribution performance and out-of-distribution robustness by quantifying the influence of the multi-loss objective in Appendix A.6, and draw a connection between our 3 groups of examples and the recently introduced Data Maps (Swayamdipta et al., 2020).
6 CONCLUSION
We have presented an effective method for training models robust to dataset biases. Leveraging a weak learner with limited capacity and a modified product of experts training setup, we show that dataset biases do not need to be explicitly known or modeled to be able to train models that can generalize significantly better to out-of-distribution examples. We discuss the design choices for such a weak learner and investigate how using higher-capacity learners leads to higher out-of-distribution performance and a trade-off with in-distribution performance. We believe that such approaches, capable of automatically identifying and mitigating dataset biases, will be essential tools for future bias-discovery and mitigation techniques.
ACKNOWLEDGEMENTS
This research was supported by the ISRAEL SCIENCE FOUNDATION (grant No. 448/20).
A APPENDIX
A.1 EXPERIMENTAL SETUP AND FINE-TUNING HYPER-PARAMETERS
Our code is based on the Hugging Face Transformers library (Wolf et al., 2019). All of our experiments are conducted on a single 16GB V100 GPU using half-precision training for speed.
NLI We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 3e−5, and a batch size of 32. The learning rate is linearly increased for 2000 warmup steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8 and add a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0. Because of the high variance on HANS (McCoy et al., 2019a), we average results over 6 runs with different seeds.
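A sketch of this optimization setup with PyTorch and the Transformers library is shown below; model construction and the training loop are omitted, and the defaults simply mirror the values listed above.

import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer_and_scheduler(model, num_training_steps, lr=3e-5,
                                  warmup_steps=2000, weight_decay=0.1):
    # Adam with weight decay, linear warmup, then linear decay to 0.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr,
                                  betas=(0.9, 0.999), eps=1e-8,
                                  weight_decay=weight_decay)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=warmup_steps,
        num_training_steps=num_training_steps)
    return optimizer, scheduler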
SQuAD We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 3e−5, and a batch size of 16. The learning rate is linearly increased for 1500 warmup steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8 and add a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0.
Fact verification We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 2e−5, and a batch size of 32. The learning rate is linearly increased for 1000 warmup steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8 and add a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0. We average results over 6 runs with different seeds.
A.2 ADDITIONAL EXPERIMENT: FACT VERIFICATION
Following Mahabadi et al. (2020), we also experiment on a fact verification dataset. The FEVER dataset (Thorne et al., 2018) contains claim–evidence pairs generated from Wikipedia. Schuster et al. (2019) collected a new evaluation set for the FEVER dataset to avoid the biases observed in the claims of the benchmark. The authors symmetrically augment the claim–evidence pairs of the FEVER evaluation set to balance the detected artifacts, such that solely relying on statistical cues in claims would lead to a random guess. The collected dataset is challenging, and the performance of models that rely on biases drops significantly when evaluated on it.
Our results are in Table 5. Our method is again effective at removing potential biases present in the training set and shows strong improvements on the symmetric test set.
A.3 SOME EXAMPLES OF DATASET BIASES DETECTED BY THE WEAK LEARNER
In Table 6, we show a few qualitative examples of dataset biases detected by a fine-tuned weak learner.
A.4 DETAILED RESULTS ON HANS AND MNLI MISMATCHED
We report the detailed results per heuristics on HANS in Table 7 and the results on the mismatched hard test set of MNLI in Table 8.
A.5 NLI: TRANSFER EXPERIMENTS
As highlighted in Section 4.3, our method is effective at improving robustness to adversarial settings that specifically target dataset biases. We further evaluate how well our method improves
generalization to domains that do not share the same annotation artifacts. Mahabadi et al. (2020) highlighted that product of experts is effective at improving generalization to other NLI benchmarks when trained on SNLI (Bowman et al., 2015). We follow the same setup: notably, we perform a sweep on the weight of the cross-entropy in our multi-loss objective and perform model selection on the development set of each dataset. We evaluate on SciTail (Khot et al., 2018), GLUE benchmark's diagnostic test (Wang et al., 2018), AddOneRTE (AddOne) (Pavlick & Callison-Burch, 2016), Definite Pronoun Resolution (DPR) (Rahman & Ng, 2012), FrameNet+ (FN+) (Pavlick et al., 2015) and Semantic Proto-Roles (SPR) (Reisinger et al., 2015). We also evaluate on the hard SNLI test set (Gururangan et al., 2018), a set that a hypothesis-only model cannot solve easily.
Table 9 shows the results. Without explicitly modeling the bias in the dataset, our method matches or surpasses the generalization performance previously reported, with the exception of GLUE's diagnostic dataset and SNLI's hard test set. Moreover, we notice that the multi-loss objective sometimes leads to stronger performance, which suggests that in some cases it can be sub-optimal to completely remove the information picked up by the weak learner. We hypothesize that the multi-loss objective balances the emphasis on domain-specific features (favoring in-distribution performance) and their removal through de-biasing (benefiting domain transfer performance). This might explain why we do not observe improvements on SNLI's hard test set and GLUE's diagnostic set in the PoE setting.
A.6 INFLUENCE OF MULTI-LOSS OBJECTIVE
Our best performing setup features a linear combination of the PoE cross-entropy and the standard cross-entropy. We fix the weight of the PoE cross-entropy to 1.0 and modulate the linear coefficient α of the standard cross-entropy. Figure 5 shows the influence of this multi-loss objective. As the weight of the standard cross-entropy increases, the in-distribution performance increases while the out-of-distribution performance decreases. This effect is particularly noticeable on MNLI/HANS (see Figure 5a). Surprisingly, this trade-off is less pronounced on SQuAD/Adversarial SQuAD: the F1 development score increases from 85.43 for α = 0.1 to 88.14 for α = 1.9 while decreasing from 56.67 to 55.06 on AddSent.
Our multi-loss objective is similar to the annealing mechanism proposed by Utama et al. (2020). In fact, as the annealing coefficient decreases, the modified probability distribution of the weak model converges to the uniform distribution. As seen in Section 3.2, when the distribution of the weak model is close to the uniform distribution (high uncertainty), the gradient of the PoE loss is equivalent to the gradient of the main model trained without PoE (i.e. the standard cross-entropy). In this work, we consider a straightforward setup where we linearly combine the two losses throughout training with fixed coefficients.
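This equivalence is easy to verify numerically: with a uniform weak distribution (for example, all-zero weak logits), the gradient of the PoE cross-entropy with respect to the main logits matches the gradient of the standard cross-entropy. A small PyTorch sketch with arbitrary toy values is given below.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
main_logits = torch.randn(4, 3, requires_grad=True)   # 4 examples, 3 classes
labels = torch.tensor([0, 2, 1, 0])
uniform_weak_logits = torch.zeros(4, 3)                # softmax gives 1/3 everywhere

poe_loss = F.cross_entropy(main_logits + uniform_weak_logits, labels)
grad_poe, = torch.autograd.grad(poe_loss, main_logits)

ce_loss = F.cross_entropy(main_logits, labels)
grad_ce, = torch.autograd.grad(ce_loss, main_logits)

assert torch.allclose(grad_poe, grad_ce, atol=1e-6)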
A.7 CONNECTION TO DATA MAPS (SWAYAMDIPTA ET AL., 2020)
We hypothesize that our three identified groups (certain / incorrect, certain / correct, and uncertain) overlap with the regions identified by data cartography (Swayamdipta et al., 2020). The authors project each training example onto 2D coordinates: confidence and variability. The first is the mean of the gold-label probabilities predicted for each example across training epochs. The second is the standard deviation. Confidence is closely related to the loss (intuitively, a high-confidence example is "easier" to predict). Variability is connected to uncertainty (the probability of the true class for high-variability examples fluctuates during training, reflecting the model's indecisiveness).
Our most interesting group (certain / incorrect) bears similarity to the ambiguous region (the model is indecisive about these instances, frequently changing its prediction across training epochs) and the hard-to-learn region (which contains a significant proportion of mislabeled examples). The authors observe that the examples in these 2 regions play an important role in out-of-distribution generalization. Our findings point in the same direction: the weak model encourages the robust model to pay closer attention to these certain / incorrect examples during training.
To verify this claim, we follow their procedure and log the training dynamics of our weak learner trained on MNLI. Figure 6 (left) shows the Data Map obtained with our weak learner. For each of our 3 groups, we select the 10,000 examples that most emphasize the characteristic of the group. For instance, for uncertain, we take the 10,000 examples with the highest entropy. In Figure 6 (right) we plot these 30,000 examples onto the Data Map. We observe that our certain / correct examples are in the easy-to-learn region, our certain / incorrect examples are in the hard-to-learn region and our uncertain examples are mostly in the ambiguous region.
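A sketch of how these coordinates and groups can be computed is given below, assuming the per-epoch gold-label probabilities and the final predictive distributions of the weak learner have been logged as NumPy arrays; all names and shapes are illustrative assumptions.

import numpy as np

def data_map_coordinates(gold_probs_per_epoch):
    # gold_probs_per_epoch: (num_epochs, num_examples) array of probabilities
    # assigned to the gold label at each epoch.
    confidence = gold_probs_per_epoch.mean(axis=0)
    variability = gold_probs_per_epoch.std(axis=0)
    return confidence, variability

def most_uncertain(final_probs, k=10000):
    # Indices of the k examples with the highest predictive entropy
    # (the "uncertain" group); final_probs: (num_examples, num_classes).
    entropy = -(final_probs * np.log(final_probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:k]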
Conversely, we verify that the examples in the ambiguous region are mostly in our uncertain group, the examples from the hard-to-learn are mostly in uncertain and certain / incorrect, and the examples from the easy-to-learn are mainly in certain / correct and uncertain. | 1. What is the main contribution of the paper regarding training models that are robust to spurious correlations?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the effectiveness and applicability of the method in various settings, particularly when the bias is hard to detect or learn?
4. What are some concerns regarding the comparisons made in the paper, especially with respect to the base model used in Clark et al?
5. How can one better quantify what useful information is being learned by the weak learner, and is it possible to modulate the learnability of the bias?
6. Are there any suggestions for additional experiments or modifications to the method to improve its applicability and effectiveness in different scenarios? | Review | Review
Summary: This paper proposes a method for training models that are robust to spurious correlations, building upon prior work that uses product-of-experts and a model explicitly trained on a dataset bias (e.g., a hypothesis-only model). Instead of using a model explicitly trained to learn the dataset bias, the authors use a "weak learner" with limited capacity. Then, this model is used in the PoE setting as in past work. The advantage of this method is that a model developer doesn't need to know that a bias exists, since the hope is that the weak learner will implicitly learn the bias.
Strengths: A thorough study of using a limited-capacity auxiliary model to train more robust models, which helps a final model ignore spurious correlations that are easy to learn.
Weaknesses: The work is a rather straightforward extension of prior work. Furthermore, the authors only evaluate on 2 textual tasks---I would have liked to see more experiments with spurious correlations in vision (e.g., VQA or the datasets used in https://openreview.net/forum?id=ryxGuJrFvS), and other experiments on text (e.g., the TriviaQA-CP dataset in the Clark paper). As is, it’s hard to glean how broadly applicable this method actually is. I would have also liked to see more of a comparison with methods that use known bias (e.g., Clark et al or He et al)---it seems like some of the comparisons in the table aren’t completely fair.
Recommendation: 6. I think this paper is a potentially-useful extension of a prior method, but I'm still somewhat unconvinced that this method is applicable in settings where the bias is hard to detect, which is what we really care about (since, if the bias is easy to detect, we can use Clark et al and other methods).
Comments and Questions:
The comparisons to Clark et al aren’t fair comparisons for adversarial SQuAD, since the Clark et al paper uses a different base model for adversarial SQuAD (modifed BIDAF).
The weak learner is a rather blunt instrument. It picks up dataset biases, but it also likely picks up features that are actually useful---not all robust features have to be difficult to learn. Is it possible to better quantify what useful information is being learned (and subsequently thrown out) by the weak learner? This would make it easier to determine if using it is worthwhile.
While it’s true that the weak model empirically learns to re-learn the same dataset biases targeted in prior work (e.g., negation correlates with contradiction), it’s somewhat unclear to me how well this method would translate to a setting with unknown biases. The MNLI / SQuAD examples are a bit artificial since we already have knowledge of the bias---it’s possible that weak learners can pick up on spurious features that are “easy to learn”, which are the same ones that humans notice. I’d like to see whether this method applies well to tasks where it isn’t immediately obvious that the bias is easy to learn; perhaps a synthetic experiment would be useful here. Is it possible to modulate the learnability of the bias? The synthetic experiments in the paper suggest that for cases the bias is hard to learn, this method isn’t very effective, which makes sense---in how many of the cases in the literature is the bias hard to learn? This is another reason why I think more experiments would be useful. |
ICLR | Title
Learning from others' mistakes: Avoiding dataset biases without modeling them
Abstract
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended underlying task. Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available. We consider cases where the bias issues may not be explicitly identified, and show a method for training models that learn to ignore these problematic correlations. Our approach relies on the observation that models with limited capacity primarily learn to exploit biases in the dataset. We can leverage the errors of such limited capacity models to train a more robust model in a product of experts, thus bypassing the need to hand-craft a biased model. We show the effectiveness of this method to retain improvements in out-of-distribution settings even if no particular bias is targeted by the biased model.
1 INTRODUCTION
The natural language processing community has made tremendous progress in using pre-trained language models to improve predictive accuracy (Devlin et al., 2019; Raffel et al., 2019). Models have now surpassed human performance on language understanding benchmarks such as SuperGLUE (Wang et al., 2019). However, studies have shown that these results are partially driven by these models detecting superficial cues that correlate well with labels but which may not be useful for the intended underlying task (Jia & Liang, 2017; Schwartz et al., 2017). This brittleness leads to overestimating model performance on the artificially constructed tasks and poor performance in out-of-distribution or adversarial examples.
A well-studied example of this phenomenon is the natural language inference dataset MNLI (Williams et al., 2018). The generation of this dataset led to spurious surface patterns that correlate noticeably with the labels. Poliak et al. (2018) highlight that negation words (“not”, “no”, etc.) are often associated with the contradiction label. Gururangan et al. (2018), Poliak et al. (2018), and Tsuchiya (2018) show that a model trained solely on the hypothesis, completely ignoring the intended signal, reaches strong performance. We refer to these surface patterns as dataset biases since the conditional distribution of the labels given such biased features is likely to change in examples outside the training data distribution (as formalized by He et al. (2019)).
A major challenge in representation learning for NLP is to produce models that are robust to these dataset biases. Previous work (He et al., 2019; Clark et al., 2019; Mahabadi et al., 2020) has targeted removing dataset biases by explicitly factoring them out of models. These studies explicitly construct a biased model, for instance, a hypothesis-only model for NLI experiments, and use it to improve the robustness of the main model. The core idea is to encourage the main model to find a different explanation where the biased model is wrong. During training, products-of-experts ensembling (Hinton, 2002) is used to factor out the biased model.
While these works show promising results, the assumption of knowledge of the underlying dataset bias is quite restrictive. Finding dataset biases in established datasets is a costly and time-consuming process, and may require access to private details about the annotation procedure, while actively reducing surface correlations in the collection process of new datasets is challenging given the number of potential biases (Zellers et al., 2019; Sakaguchi et al., 2020).
∗Supported by the Viterbi Fellowship in the Center for Computer Engineering at the Technion
In this work, we explore methods for learning from biased datasets which do not require such an explicit formulation of the dataset biases. We first show how a model with limited capacity, which we call a weak learner, trained with a standard cross-entropy loss learns to exploit biases in the dataset. We then investigate the biases on which this weak learner relies and show that they match several previously manually identified biases. Based on this observation, we leverage such limited capacity models in a product of experts ensemble to train a more robust model and evaluate our approach in various settings ranging from toy datasets up to large crowd-sourced benchmarks: a controlled synthetic bias setup (He et al., 2019; Clark et al., 2019), natural language inference (McCoy et al., 2019b), extractive question answering (Jia & Liang, 2017) and fact verification (Schuster et al., 2019).
Our contributions are the following: (a) we show that weak learners are prone to relying on shallow heuristics and highlight how they rediscover previously human-identified dataset biases; (b) we demonstrate that we do not need to explicitly know or model dataset biases to train more robust models that generalize better to out-of-distribution examples; (c) we discuss the design choices for weak learners and show trade-offs between higher out-of-distribution performance at the expense of the in-distribution performance.
2 RELATED WORK
Many studies have reported dataset biases in various settings. Examples include visual question answering (Jabri et al., 2016; Zhang et al., 2016), story completion (Schwartz et al., 2017), and reading comprehension (Kaushik & Lipton, 2018; Chen et al., 2016). Towards better evaluation methods, researchers have proposed to collect “challenge” datasets that account for surface correlations a model might adopt (Jia & Liang, 2017; McCoy et al., 2019b). Standard models without specific robust training methods often drop in performance when evaluated on these challenge sets.
While these works have focused on data collection, another approach is to develop methods allowing models to ignore dataset biases during training. Several active areas of research tackle this challenge by adversarial training (Belinkov et al., 2019a;b; Stacey et al., 2020), example forgetting (Yaghoobzadeh et al., 2019) and dynamic loss adjustment (Cadène et al., 2019). Previous work (He et al., 2019; Clark et al., 2019; Mahabadi et al., 2020) has shown the effectiveness of product of experts to train un-biased models. In our work, we show that we do not need to explicitly model biases to apply these de-biasing methods and can use a more general setup than previously presented.
Orthogonal to these evaluation and optimization efforts, data augmentation has attracted interest as a way to reduce model biases by explicitly modifying the dataset distribution (Min et al., 2020; Belinkov & Bisk, 2018), either by leveraging human knowledge about dataset biases such as swapping male and female entities (Zhao et al., 2018) or by developing dynamic data collection and benchmarking (Nie et al., 2020). Our work is mostly orthogonal to these efforts and alleviates the need for a human-in-the-loop setup which is common to such data-augmentation approaches.
Large pre-trained language models have contributed to improved out-of-distribution generalization (Hendrycks et al., 2020). However, in practice, that remains a challenge in natural language processing (Linzen, 2020; Yogatama et al., 2019) and our work aims at out-of-distribution robustness without significantly compromising in-distribution performance.
Finally, in parallel to our work, Utama et al. (2020) present a related de-biasing method leveraging the mistakes of weakened models without the need to explicitly model dataset biases. Our approach differs in several ways; in particular, we advocate for using a limited-capacity weak learner, while Utama et al. (2020) use the same architecture as the robust model, trained on a few thousand examples. We investigate the trade-off between the learner's capacity and the resulting performance, as well as the few-shot learning regime in the limit of a high-capacity weak model.
3 METHOD
3.1 OVERVIEW
Our approach utilizes product of experts (Hinton, 2002) to factor dataset biases out of a learned model. We have access to a training set (x_i, y_i), 1 ≤ i ≤ N, where each example x_i has a label y_i among K classes. We use two models fW (weak) and fM (main) which produce respective logits vectors w and m ∈ R^K. The product of experts ensemble of fW and fM produces the logits vector e:
∀ 1 ≤ j ≤ K, e_j = w_j + m_j. (1)
Equivalently, we have softmax(e) ∝ softmax(w) ⊙ softmax(m), where ⊙ is the element-wise multiplication.
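In log space, Equation 1 is just an element-wise sum of the two logit vectors; the short PyTorch sketch below checks the stated equivalence on random logits (tensor shapes and names are illustrative).

import torch
import torch.nn.functional as F

def poe_logits(weak_logits, main_logits):
    # Summing logits multiplies the experts' unnormalized probabilities,
    # so softmax(e) is proportional to softmax(w) * softmax(m).
    return weak_logits + main_logits

w = torch.randn(4, 3)   # batch of 4 examples, 3 classes
m = torch.randn(4, 3)
lhs = F.softmax(poe_logits(w, m), dim=-1)
rhs = F.softmax(w, dim=-1) * F.softmax(m, dim=-1)
rhs = rhs / rhs.sum(dim=-1, keepdim=True)
assert torch.allclose(lhs, rhs, atol=1e-5)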
Our training approach can be decomposed in two successive stages: (a) training the weak learner fW with a standard cross-entropy loss (CE) and (b) training a main (robust) model fM via product of experts (PoE) to learn from the errors of the weak learner. The core intuition of this method is to encourage the robust model to learn to make predictions that take into account the weak learner’s mistakes.
We do not make any assumption on the biases present (or not) in the dataset and rely on letting the weak learner discover them during training. Moreover, in contrast to prior work (Mahabadi et al., 2020; He et al., 2019; Clark et al., 2019) in which the weak learner had a hand-engineered bias-specific structure, our approach does not make any specific assumption on the weak learner such as its architecture, capacity, pre-training, etc. The weak learner fW is trained with standard cross-entropy.
The final goal is to produce the main model fM. After training, the weak model fW is frozen and used only as part of the product of experts. Since the weak model is frozen, only the main model fM receives gradient updates during training. This is similar to He et al. (2019); Clark et al. (2019) but differs from Mahabadi et al. (2020), who train both weak and main models jointly. For convenience, we refer to the cross-entropy of the prediction e of Equation 1 as the PoE cross-entropy.
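A minimal sketch of one training step in this two-stage setup is given below; it assumes both models map a batch of inputs to logits and only illustrates the PoE cross-entropy part (the model interfaces and names are assumptions, not the paper's actual code).

import torch
import torch.nn.functional as F

def poe_training_step(main_model, weak_model, inputs, labels, optimizer):
    # One PoE cross-entropy update of the main model against a frozen weak model.
    weak_model.eval()
    with torch.no_grad():                        # the weak learner stays frozen
        weak_logits = weak_model(**inputs)
    main_logits = main_model(**inputs)
    ensemble_logits = weak_logits + main_logits  # Equation 1
    loss = F.cross_entropy(ensemble_logits, labels)
    optimizer.zero_grad()
    loss.backward()                              # gradients reach only main_model
    optimizer.step()
    return loss.item()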
3.2 ANALYSIS: THE ROBUST MODEL LEARNS FROM THE ERRORS OF THE WEAK LEARNER
To better explore the impact of PoE training with a weak learner, we consider the special case of binary classification with logistic regression. Here w and m are scalar logits and the softmax becomes a sigmoid. The loss of the product of experts for a single positive example is:
L_PoE,binary = −m − w + log(1 + exp(m + w)) (2)
Logit w is a fixed value since the weak learner is frozen. We also define the entropy of the weak learner as H_w = −p log(p) − (1 − p) log(1 − p), where p = σ(w), as our measure of certainty. Different values of w from the weak learner induce different gradient updates in the main model. Figure 1a shows the gradient update of the main model logit m. Each of the three curves corresponds to a different value of w from the weak model.
• Weak Model is Certain / Incorrect: the first case (in blue) corresponds to low values of w. The entropy is low and the loss of the weak model is high. The main model receives gradients even when it is classifying the point correctly (e.g., m ≈ 5), which encourages m to compensate for the weak model's mistake.
• Weak Model is Uncertain: the second case (in red) corresponds to w = 0, which means the weak model's entropy is high (uniform probability over all classes). In this case, the product of experts is equal to the main model alone, and the gradient is equal to the one obtained with cross-entropy.
• Weak Model is Certain / Correct: the third case (in green) corresponds to high values of w. The entropy is low and the loss of the weak model is low. In this case, m's gradients are "cropped" early on and the main model receives smaller gradients on average. When w is extremely high, m receives no gradient (and the current example is simply ignored).
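For this binary case, the gradient of Equation 2 with respect to m is σ(m + w) − 1 (for a positive example), and the three regimes above can be reproduced with a few lines of NumPy; the chosen values of w are only illustrative.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def poe_binary_grad(m, w):
    # d/dm of Equation 2 for a positive example: sigma(m + w) - 1.
    return sigmoid(m + w) - 1.0

m = np.linspace(-6.0, 6.0, 7)
for w, regime in [(-5.0, "certain / incorrect"),
                  (0.0, "uncertain"),
                  (5.0, "certain / correct")]:
    print(regime, np.round(poe_binary_grad(m, w), 3))

With w = −5 the gradient stays sizable even around m = 5 (the main model keeps learning on the weak model's mistakes), whereas with w = 5 the gradient vanishes early, matching the behavior described above.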
Put another way, the logit values for which m receives gradients are shifted according to the correctness and certainty of the weak model. Figure 1b shows the concentration of training examples of MNLI
(Williams et al., 2018) projected on the 2D coordinates (correctness, certainty) from a trained weak learner (described in Section 4.1). We observe that there are many examples for each of the three cases. More crucially, we verify that the group certain / incorrect is not empty, since the examples in this group encourage the model not to rely on the dataset biases.
(a) Gradient update of m for different values of w on binary classification. (b) 2D projection of MNLI examples from a trained weak learner. Colors indicate the concentration and are in log scale.
Figure 1: The analysis of the gradients reveals 3 regimes where the gradient is shifted by the certainty and correctness of the weak learner. These 3 regions are present in real dataset such as MNLI.
3.3 CONNECTION TO DISTILLATION
Our product of experts setup bears similarities with knowledge distillation (Hinton et al., 2015) where a student network is trained to mimic the behavior of a frozen teacher model.
In our PoE training, we encourage the main model fM (analog to the student network) to learn from the errors of the weak model fW (analog to the teacher network): instead of mimicking, it learns an orthogonal behavior when the teacher is incorrect. To recognize errors from the weak learner, we use the gold labels which alleviates the need to use pseudo-labelling or data augmentation as it is commonly done in distillation setups (Furlanello et al., 2018; Xie et al., 2020).
Similarly to Hinton et al. (2015), our final loss is a linear combination of the original cross-entropy loss (CE) and the PoE cross-entropy. We refer to this multi-loss objective as PoE + CE.
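A sketch of this combined objective is shown below; the weight of 0.3 on the standard cross-entropy matches the value reported in Appendix A.1, and the function signature is an illustrative assumption.

import torch.nn.functional as F

def poe_plus_ce_loss(main_logits, weak_logits, labels, alpha=0.3):
    # Linear combination of the PoE cross-entropy (weight 1.0) and the
    # standard cross-entropy of the main model alone (weight alpha).
    poe_ce = F.cross_entropy(main_logits + weak_logits, labels)
    standard_ce = F.cross_entropy(main_logits, labels)
    return poe_ce + alpha * standard_ce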
4 EXPERIMENTS
We consider several different experimental settings that explore the use of a weak learner to isolate and train against dataset biases. All the experiments are conducted on English datasets, and follow the standard setup for BERT training. Our main model is BERT-base (Devlin et al., 2019) with 110M parameters. Except when indicated otherwise, our weak learner is a significantly smaller pre-trained masked language model known as TinyBERT (Turc et al., 2019) with 4M parameters (2 layers, hidden size of 128). The weak learner is fine-tuned on exactly the same data as our main model. For instance, when trained on MNLI, it gets a 67% accuracy on the matched development set (compared to 84% for BERT-base).
Part of our discussion relies on natural language inference, which has been widely studied in this area. The classification task is to determine whether a hypothesis statement is true (entailment), false (contradiction) or undetermined (neutral) given a premise statement. MNLI (Williams et al., 2018) is the canonical large-scale English dataset to study this problem, with 433K labeled examples. For evaluation, it features matched sets (examples from domains encountered in training) and mismatched sets (domains not seen during training).
Experiments first examine qualitatively the spurious correlations picked up by the method. We then verify the validity of the method on a synthetic experimental setup. Finally, we verify the impact of our method by evaluating robust models on several out-of-distribution sets and discuss the choice of the weak learner.
4.1 WEAK LEARNERS REDISCOVER PREVIOUSLY REPORTED DATASET BIASES
Most approaches for circumventing dataset bias require modeling the bias explicitly, for example using a model limited to only the hypothesis in NLI (Gururangan et al., 2018). These approaches are effective, but require isolating specific biases present in a dataset. Since this process is costly, time-consuming and error-prone, it is unrealistic to expect such analysis for all new datasets. On the contrary, we hypothesize that weak learners might operate like rapid surface learners (Zellers et al., 2019), picking up on dataset biases without specific signal or input curation and being rather certain of their biased errors (high certainty on the biased prediction errors).
We first investigate whether our weak learner re-discovers two well-known dataset biases reported on NLI benchmarks: (a) the presence of a negation word in the hypothesis is highly correlated with the contradiction label (Poliak et al., 2018; Gururangan et al., 2018), (b) high word overlap between the premise and the hypothesis is highly correlated with the entailment label (McCoy et al., 2019b).
To this aim, we fine-tune a weak learner on MNLI (Williams et al., 2018). Hyper-parameters can be found in Appendix A.1. We extract and manually categorize 1,000 training examples wrongly predicted by the weak learner (with a high loss and a high certainty). Table 1 breaks them down per category. Half of these incorrect examples are wrongly predicted as Contradiction, and almost all of these contain a negation1 in the hypothesis. Another half of the examples are incorrectly predicted as Entailment, with a majority of these presenting a high lexical overlap between the premise and the hypothesis (5 or more words in common). The weak learner thus appears to predict with high certainty a Contradiction label whenever the hypothesis contains a negation word, and with high certainty an Entailment label whenever there is a strong lexical overlap between premise and hypothesis. Table 6 in Appendix A.3 presents qualitative examples of dataset biases identified by the fine-tuned weak learner.
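One way to extract such high-certainty errors automatically is sketched below; the quantile thresholds are illustrative choices, not the selection criterion actually used for the manual analysis.

import numpy as np

def certain_incorrect_indices(probs, labels, entropy_q=0.25, loss_q=0.75):
    # probs:  (num_examples, num_classes) weak-learner predictive distributions
    # labels: (num_examples,) gold class indices
    gold_prob = probs[np.arange(len(labels)), labels]
    loss = -np.log(gold_prob + 1e-12)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    wrong = probs.argmax(axis=1) != labels
    certain = entropy <= np.quantile(entropy, entropy_q)
    high_loss = loss >= np.quantile(loss, loss_q)
    return np.where(wrong & certain & high_loss)[0]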
This analysis is based on a set of biases referenced in the literature and does not exclude the possibility of other biases being detected by the weak learner. For instance, during this investigation we notice that the presence of “negative sentiment” words (for instance: dull, boring) in the hypothesis appears to be often indicative of a Contradiction prediction. We leave further investigation on such behaviors to future work.
4.2 SYNTHETIC EXPERIMENT: CHEATING FEATURE
We consider a controlled synthetic experiment described in He et al. (2019); Clark et al. (2019) that simulates bias. We modify 20,000 MNLI training examples by injecting a cheating feature which encodes an example's label with probability p_cheat and a random label selected among the two incorrect labels otherwise. For simplicity, we consider the first 20,000 examples. On the evaluation sets, the cheating feature is random and does not convey any useful information. In the present experiment, the cheating feature takes the form of a prefix added to the hypothesis ("0" for Contradiction, "1" for Entailment, "2" for Neutral). We train the weak and main models on these 20,000 examples and evaluate their accuracy on the matched development set.2 We expect a biased model to rely mostly on the cheating feature, thereby leading to poor evaluation performance.
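The construction of this cheating feature can be sketched as follows, assuming examples are dictionaries with "hypothesis" and "label" fields; the label-to-prefix mapping follows the description above.

import random

PREFIX = {"contradiction": "0", "entailment": "1", "neutral": "2"}
LABELS = list(PREFIX)

def add_cheating_feature(example, p_cheat, rng=random):
    # With probability p_cheat the prefix leaks the gold label; otherwise it
    # encodes one of the two incorrect labels chosen at random.
    if rng.random() < p_cheat:
        leaked = example["label"]
    else:
        leaked = rng.choice([l for l in LABELS if l != example["label"]])
    modified = dict(example)
    modified["hypothesis"] = PREFIX[leaked] + " " + example["hypothesis"]
    return modified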
Figure 2 shows the results. As the proportion of examples containing the bias increases, the evaluation accuracy of the weak learner quickly decreases, reaching 0% when p_cheat = 0.9. The weak learner detects the cheating feature during training and mainly relies on the synthetic bias, which is not directly indicative of the gold label.
1We use the following list of negation words: no, not, none, nothing, never, aren't, isn't, weren't, neither, don't, didn't, doesn't, cannot, hasn't, won't.
2We observe similar trends on the mismatched development set.
Both He et al. (2019) and Clark et al. (2019) protect against the reliance on this cheating feature by ensembling the main model with a biased model that only uses the hypothesis (or its first token). We instead train the main model in the product of experts setting, relying on the weak learner to identify the bias. Figure 2 shows that when a majority of the training examples contain the bias (p_cheat ≥ 0.6), the performance of the model trained with cross-entropy drops faster than that of the one trained in PoE. PoE training leads to a more robust model by encouraging it to learn from the mistakes of the weak learner. As p_cheat comes close to 1, the model's training enters a "few-shot regime" where there are very few incorrectly predicted biased examples to learn from (examples where following the biased heuristic leads to a wrong answer) and the performance of the model trained with PoE drops as well.
4.3 ADVERSARIAL DATASETS: NLI AND QA
NLI The HANS adversarial dataset (McCoy et al., 2019b) was constructed by writing templates to generate examples with a high premise/hypothesis word overlap, in order to attack models that rely on this bias. In one template the word overlap generates entailed premise/hypothesis pairs (heuristic-entailed examples), whereas in another the examples contradict the heuristic (heuristic-non-entailed). The dataset contains 30K evaluation examples equally split between both.
Table 2 shows that the weak learner exhibits medium performance on the in-distribution sets (MNLI) and that on out-of-distribution evaluation (HANS), it relies heavily on the word overlap heuristic. Product of experts training is effective at reducing the reliance on biases and leads to significant gains on the heuristic-non-entailed examples when compared to a model trained with standard cross-entropy, with an improvement of +24%.
The small degradation on in-distribution data is likely because product of experts training does not specialize for in-distribution performance but focuses on the weak model errors (He et al., 2019). The linear combination of the original cross-entropy loss and the product of experts loss (PoE + CE) aims at counteracting this effect. This multi-loss objective trades off out-of-distribution generalization for in-distribution accuracy. A similar trade-off between accuracy and robustness has been reported in adversarial training (Zhang et al., 2019; Tsipras et al., 2019). In Appendix A.6, we detail the influence of this multi-loss objective.
We also evaluate our method on MNLI’s hard test set (Gururangan et al., 2018) which is expected to be less biased than MNLI’s standard split. These examples are selected such that a hypothesis-only model cannot predict the label accurately. Table 2 shows the results of this experiment. Our method surpasses the performance of a PoE model trained with a hypothesis-only biased learner. Results on the mismatched set are given in Appendix A.4.
QA Question answering models often rely on heuristics such as type and keyword-matching (Weissenborn et al., 2017) that can do well on benchmarks like SQuAD (Rajpurkar et al., 2016). We evaluate on the Adversarial SQuAD dataset (Jia & Liang, 2017) built by appending distractor sentences to the passages in the original SQuAD. Distractors are constructed such that they look like a plausible answer to the question while not changing the correct answer or misleading humans.
Results on SQuAD v1.1 and Adversarial SQuAD are listed in Table 3. The weak learner alone has low performance both on in-distribution and adversarial sets. PoE training improves the adversarial performance (+1% on AddSent) while sacrificing some in-distribution performance. A multi-loss
optimization closes the gap and even boosts adversarial robustness (+3% on AddSent and +2% on AddOneSent). In contrast to our experiments on MNLI/HANS, multi-loss training thus also leads to better out-of-distribution performance here. We hypothesize that on this dataset the weak learner picks up more useful information, so removing it entirely might be sub-optimal. The multi-loss objective in this case allows us to strike a balance between learning from, and removing, the weak learner.
5 ANALYSIS
5.1 REDUCING BIAS: CORRELATION ANALYSIS
To investigate the behavior of the ensemble of the weak and main learner, we compute the Pearson correlation between the element-wise loss of the weak (biased) learner and the loss of the trained models following Mahabadi et al. (2020). A correlation of 1 indicates a linear relation between the per-example losses (the two learners make the same mistakes), and 0 indicates the absence of linear correlation (models’ mistakes are uncorrelated). Figure 3 shows that models trained with a linear combination of the PoE cross-entropy and the standard cross-entropy have a higher correlation than when trained solely with PoE. This confirms that PoE training is effective at reducing biases uncovered by the weak learner and re-emphasizes that adding standard cross-entropy leads to a trade-off between the two.
5.2 HOW WEAK DO THE WEAK LEARNERS NEED TO BE?
We consider parameter size as a measure of the capacity or "weakness" of the weak learner. We fine-tune different sizes of BERT (Turc et al., 2019) ranging from 4.4 to 41.4 million parameters and use these as weak models in a PoE setting. Figure 4b shows the accuracies on MNLI and HANS of the weak learners and of main models trained with these various weak learners.
Varying the capacity of the weak models affects both in-distribution and out-of-distribution performance. Out-of-distribution performance of the main model increases as the weak model becomes stronger (more parameters) up to a certain point, while in-distribution performance drops slightly at first and then more sharply. When trained with the larger MediumBERT weak learner (41.4 million parameters), the main model reaches 97% accuracy on HANS's heuristic-non-entailed set but very low accuracy on the in-distribution examples (28% on MNLI and 3% on the heuristic-entailed examples).
As a weak model grows in capacity, it becomes a better learner. The average loss decreases and the model becomes more confident in its predictions. As a result, the group
certain / correct becomes more populated and the main model receives, on average, a smaller gradient magnitude per input. Conversely, the certain / incorrect group (which generally aligns with out-of-distribution samples and induces higher-magnitude gradient updates, encouraging generalization at the expense of in-distribution performance) becomes less populated. These results corroborate and complement insights from Yaghoobzadeh et al. (2019). This is also reminiscent of findings from Vodrahalli et al. (2018) and Shrivastava et al. (2016): not all training samples contribute equally towards learning, and in some cases a carefully selected subset of the training set is sufficient to match (or surpass) the performance on the whole set.
5.3 DE-BIASING IS STILL EFFECTIVE WHEN DATASET BIASES ARE UNKNOWN OR HARD TO DETECT
While it is difficult to enumerate all sources of bias, we focus in this work on superficial cues that correlate with the label in the training set but do not transfer. These superficial cues correlate with what can be captured by a weak model. For instance, Conneau et al. (2018) suggest that word presence can be detected with very shallow networks (a linear classifier on top of FastText bag-of-words features), as such networks show very high accuracy on Word Content, the probing task of detecting which of the 1,000 target words is present in a given sentence.
To verify that a weak model is still effective with unknown or hard-to-detect biases, we consider an example where the bias is only present in a small portion of the training set. We remove from the MNLI training set all the examples (192K) that exhibit one of the two biases detailed in Section 4.1: high word overlap between premise and hypothesis with an entailment label, and negation in the hypothesis with a contradiction label. We are left with 268K training examples.
We apply our de-biasing method with these examples as our training set. For comparison, we train a main model with standard cross-entropy on the same subset of selected examples. Our results are shown in Table 4 and confirm on HANS that our de-biasing method is still effective even when the bias is hard to detect. Note that the accuracies on MNLI cannot be directly compared to the results in Table 2: the class imbalance in the selected subset of examples leads to a harder optimization problem, explaining the difference in performance.
We present complementary analyses in the Appendix. To further show the effectiveness of our method, we include in Appendix A.2 an additional experiment on fact verification (Thorne et al., 2018; Schuster et al., 2019). In Appendix A.5, we evaluate the ability of our method to generalize to other domains that do not share the same annotation artifacts. We highlight the trade-off between in-distribution performance and out-of-distribution robustness by quantifying the influence of the multi-loss objective in Appendix A.6, and draw a connection between our 3 groups of examples and the recently introduced Data Maps (Swayamdipta et al., 2020).
6 CONCLUSION
We have presented an effective method for training models robust to dataset biases. Leveraging a weak learner with limited capacity and a modified product of experts training setup, we show that dataset biases do not need to be explicitly known or modeled to be able to train models that can generalize significantly better to out-of-distribution examples. We discuss the design choices for such a weak learner and investigate how using higher-capacity learners leads to higher out-of-distribution performance and a trade-off with in-distribution performance. We believe that such approaches, capable of automatically identifying and mitigating dataset biases, will be essential tools for future bias-discovery and mitigation techniques.
ACKNOWLEDGEMENTS
This research was supported by the ISRAEL SCIENCE FOUNDATION (grant No. 448/20).
A APPENDIX
A.1 EXPERIMENTAL SETUP AND FINE-TUNING HYPER-PARAMETERS
Our code is based on the Hugging Face Transformers library (Wolf et al., 2019). All of our experiments are conducted on a single 16GB V100 GPU using half-precision training for speed.
NLI We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 3e−5, and a batch size of 32. The learning rate is linearly increased for 2000 warmup steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8 and add a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0. Because of the high variance on HANS (McCoy et al., 2019a), we average results over 6 runs with different seeds.
SQuAD We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 3e−5, and a batch size of 16. The learning rate is linearly increased for 1500 warmup steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8 and add a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0.
Fact verification We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 2e−5, and a batch size of 32. The learning rate is linearly increased for 1000 warmup steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8 and add a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0. We average results over 6 runs with different seeds.
A.2 ADDITIONAL EXPERIMENT: FACT VERIFICATION
Following Mahabadi et al. (2020), we also experiment on a fact verification dataset. The FEVER dataset (Thorne et al., 2018) contains claim–evidence pairs generated from Wikipedia. Schuster et al. (2019) collected a new evaluation set for the FEVER dataset to avoid the biases observed in the claims of the benchmark. The authors symmetrically augment the claim–evidence pairs of the FEVER evaluation set to balance the detected artifacts, such that solely relying on statistical cues in claims would lead to a random guess. The collected dataset is challenging, and the performance of models that rely on biases drops significantly when evaluated on it.
Our results are in Table 5. Our method is again effective at removing potential biases present in the training set and shows strong improvements on the symmetric test set.
A.3 SOME EXAMPLES OF DATASET BIASES DETECTED BY THE WEAK LEARNER
In Table 6, we show a few qualitative examples of dataset biases detected by a fine-tuned weak learner.
A.4 DETAILED RESULTS ON HANS AND MNLI MISMATCHED
We report the detailed results per heuristics on HANS in Table 7 and the results on the mismatched hard test set of MNLI in Table 8.
A.5 NLI: TRANSFER EXPERIMENTS
As highlighted in Section 4.3, our method is effective at improving robustness to adversarial settings that specifically target dataset biases. We further evaluate how well our method improves
generalization to domains that do not share the same annotation artifacts. Mahabadi et al. (2020) highlighted that product of experts is effective at improving generalization to other NLI benchmarks when trained on SNLI (Bowman et al., 2015). We follow the same setup: notably, we perform a sweep on the weight of the cross-entropy in our multi-loss objective and perform model selection on the development set of each dataset. We evaluate on SciTail (Khot et al., 2018), GLUE benchmark's diagnostic test (Wang et al., 2018), AddOneRTE (AddOne) (Pavlick & Callison-Burch, 2016), Definite Pronoun Resolution (DPR) (Rahman & Ng, 2012), FrameNet+ (FN+) (Pavlick et al., 2015) and Semantic Proto-Roles (SPR) (Reisinger et al., 2015). We also evaluate on the hard SNLI test set (Gururangan et al., 2018), a set that a hypothesis-only model cannot solve easily.
Table 9 shows the results. Without explicitly modeling the bias in the dataset, our method matches or surpasses the generalization performance previously reported, with the exception of GLUE's diagnostic dataset and SNLI's hard test set. Moreover, we notice that the multi-loss objective sometimes leads to stronger performance, which suggests that in some cases it can be sub-optimal to completely remove the information picked up by the weak learner. We hypothesize that the multi-loss objective balances the emphasis on domain-specific features (favoring in-distribution performance) and their removal through de-biasing (benefiting domain transfer performance). This might explain why we do not observe improvements on SNLI's hard test set and GLUE's diagnostic set in the PoE setting.
A.6 INFLUENCE OF MULTI-LOSS OBJECTIVE
Our best performing setup features a linear combination of the PoE cross-entropy and the standard cross-entropy. We fix the weight of the PoE cross-entropy to 1.0 and modulate the linear coefficient α of the standard cross-entropy. Figure 5 shows the influence of this multi-loss objective. As the weight of the standard cross-entropy increases, the in-distribution performance increases while the out-of-distribution performance decreases. This effect is particularly noticeable on MNLI/HANS (see Figure 5a). Surprisingly, this trade-off is less pronounced on SQuAD/Adversarial SQuAD: the F1 development score increases from 85.43 for α = 0.1 to 88.14 for α = 1.9 while decreasing from 56.67 to 55.06 on AddSent.
Our multi-loss objective is similar to the annealing mechanism proposed by Utama et al. (2020). In fact, as the annealing coefficient decreases, the modified probability distribution of the weak model converges to the uniform distribution. As seen in Section 3.2, when the distribution of the weak model is close to the uniform distribution (high uncertainty), the gradient of the PoE loss is equivalent to the gradient of the main model trained without PoE (i.e. the standard cross-entropy). In this work, we consider a straightforward setup where we linearly combine the two losses throughout training with fixed coefficients.
A.7 CONNECTION TO DATA MAPS (SWAYAMDIPTA ET AL., 2020)
We hypothesize that our three identified groups (certain / incorrect, certain / correct, and uncertain) overlap with the regions identified by data cartography (Swayamdipta et al., 2020). The authors project each training example onto 2D coordinates: confidence and variability. The first is the mean of the gold-label probabilities predicted for each example across training epochs. The second is the standard deviation. Confidence is closely related to the loss (intuitively, a high-confidence example is "easier" to predict). Variability is connected to uncertainty (the probability of the true class for high-variability examples fluctuates during training, reflecting the model's indecisiveness).
Our most interesting group (certain / incorrect) bears similarity to the ambiguous region (the model is indecisive about these instances, frequently changing its prediction across training epochs) and the hard-to-learn region (which contains a significant proportion of mislabeled examples). The authors observe that the examples in these 2 regions play an important role in out-of-distribution generalization. Our findings point in the same direction: the weak model encourages the robust model to pay closer attention to these certain / incorrect examples during training.
To verify this claim, we follow their procedure and log the training dynamics of our weak learner trained on MNLI. Figure 6 (left) shows the Data Map obtained with our weak learner. For each of our 3 groups, we select the 10,000 examples that most emphasize the characteristic of the group. For instance, for uncertain, we take the 10,000 examples with the highest entropy. In Figure 6 (right) we plot these 30,000 examples onto the Data Map. We observe that our certain / correct examples are in the easy-to-learn region, our certain / incorrect examples are in the hard-to-learn region and our uncertain examples are mostly in the ambiguous region.
Conversely, we verify that the examples in the ambiguous region are mostly in our uncertain group, the examples from the hard-to-learn are mostly in uncertain and certain / incorrect, and the examples from the easy-to-learn are mainly in certain / correct and uncertain. | 1. What is the focus of the paper regarding neural NLP models?
2. What are the strengths and weaknesses of the proposed method in addressing the known problem?
3. How does the reviewer assess the novelty and creativity of the approach?
4. What are the additional comments and questions raised by the reviewer regarding the analysis and results? | Review | Review
Summary:
This paper focuses on the known problem that current NLP models tend to solve tasks by exploiting superficial properties of the training data that do not generalize. For example, in the NLI task, models learn that negation words are indicative of the label "contradiction" and high word overlap is indicative of the label "entailment". There have been many recent solutions proposed for mitigating such behavior, but existing methods have tended to assume knowledge of the specific dataset biases a priori. In this paper, the authors propose a method based on product of experts that doesn't assume particular knowledge of specific dataset biases. The method works by first training a weak model and then training a "main" model using a loss that upweights examples on which the weak model performs poorly (namely, predicts the wrong answer with high confidence). The assumption is that weak models will exploit heuristics, and so this method will deincentivize the main model to use those same heuristics. The authors evaluate on a range of tasks, including a simulated bias setting, and NLI setting, and a QA setting, and offer a fair amount of analysis of their results. In particular, the analysis showing that the weak learners do in fact adopt the biases which have been documented elsewhere in the literature is interesting, and the discussion of "how weak does the weak learner need to be" is appreciated (a few questions on this below).
Strengths:
Straightforward method for addressing an important known problem with neural NLP models
Thorough analysis, not just a "method and results" paper
Weaknesses:
Novelty might be somewhat limited, method is not wildly creative (but I don't necessarily think "wild creativity" is a prerequisite for scientific value). The authors do a good job of directly contending with the similar contemporaneous work in their paper
Additional Comments/Questions:
Just a few thoughts that came up while reading...
The weakness-of-weak-learner analysis is interesting. I imagine this is not something that can be understood in absolute terms, i.e., I would not expect there to be some level of weakness that is sufficient for all biases and all datasets. E.g., surely the lexical overlap bias is "harder" to learn than a lexical bias like the presence of negation words, since recognizing lexical overlap presupposes recognizing lexical identity. Therefore, I'd imagine knowing how weak the weak learner needs to be requires some intuition about which biases you are trying to remove, which runs counter to the primary thrust of the paper, namely, removing bias without knowing what the bias is. Thoughts?
It's interesting that even with this, the performance on HANS non-entailed is still only 56%, which is better but still not exactly good, and doesn't suggest the model has learned the "right" thing so much as it has learned not to use that particular wrong thing. For research questions such as this ("is the model using the heuristic?") I always find it unsatisfying to think about performance gains that are in between 0 and 100. E.g., when we talk about human learning, we usually see an abrupt shift when the learner "gets it", and our hope in removing the spurious features with methods like yours would be that we'd help the neural models similarly "get it" and reach 100% at least on examples that isolate the effect of this spurious feature. I don't expect you to have an answer for this, but just raising it to hear your thoughts. |
ICLR | Title
Learning from others' mistakes: Avoiding dataset biases without modeling them
Abstract
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended underlying task. Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available. We consider cases where the bias issues may not be explicitly identified, and show a method for training models that learn to ignore these problematic correlations. Our approach relies on the observation that models with limited capacity primarily learn to exploit biases in the dataset. We can leverage the errors of such limited capacity models to train a more robust model in a product of experts, thus bypassing the need to hand-craft a biased model. We show the effectiveness of this method to retain improvements in out-of-distribution settings even if no particular bias is targeted by the biased model.
1 INTRODUCTION
The natural language processing community has made tremendous progress in using pre-trained language models to improve predictive accuracy (Devlin et al., 2019; Raffel et al., 2019). Models have now surpassed human performance on language understanding benchmarks such as SuperGLUE (Wang et al., 2019). However, studies have shown that these results are partially driven by these models detecting superficial cues that correlate well with labels but which may not be useful for the intended underlying task (Jia & Liang, 2017; Schwartz et al., 2017). This brittleness leads to overestimating model performance on the artificially constructed tasks and poor performance in out-of-distribution or adversarial examples.
A well-studied example of this phenomenon is the natural language inference dataset MNLI (Williams et al., 2018). The generation of this dataset led to spurious surface patterns that correlate noticeably with the labels. Poliak et al. (2018) highlight that negation words (“not”, “no”, etc.) are often associated with the contradiction label. Gururangan et al. (2018), Poliak et al. (2018), and Tsuchiya (2018) show that a model trained solely on the hypothesis, completely ignoring the intended signal, reaches strong performance. We refer to these surface patterns as dataset biases since the conditional distribution of the labels given such biased features is likely to change in examples outside the training data distribution (as formalized by He et al. (2019)).
A major challenge in representation learning for NLP is to produce models that are robust to these dataset biases. Previous work (He et al., 2019; Clark et al., 2019; Mahabadi et al., 2020) has targeted removing dataset biases by explicitly factoring them out of models. These studies explicitly construct a biased model, for instance, a hypothesis-only model for NLI experiments, and use it to improve the robustness of the main model. The core idea is to encourage the main model to find a different explanation where the biased model is wrong. During training, products-of-experts ensembling (Hinton, 2002) is used to factor out the biased model.
While these works show promising results, the assumption of knowledge of the underlying dataset bias is quite restrictive. Finding dataset biases in established datasets is a costly and time-consuming process, and may require access to private details about the annotation procedure, while actively reducing surface correlations in the collection process of new datasets is challenging given the number of potential biases (Zellers et al., 2019; Sakaguchi et al., 2020).
∗Supported by the Viterbi Fellowship in the Center for Computer Engineering at the Technion
In this work, we explore methods for learning from biased datasets which do not require such an explicit formulation of the dataset biases. We first show how a model with limited capacity, which we call a weak learner, trained with a standard cross-entropy loss learns to exploit biases in the dataset. We then investigate the biases on which this weak learner relies and show that they match several previously manually identified biases. Based on this observation, we leverage such limited capacity models in a product of experts ensemble to train a more robust model and evaluate our approach in various settings ranging from toy datasets up to large crowd-sourced benchmarks: controlled synthetic bias setup (He et al., 2019; Clark et al., 2019), natural language inference (McCoy et al., 2019b), extractive question answering (Jia & Liang, 2017) and fact verification (Schuster et al., 2019).
Our contributions are the following: (a) we show that weak learners are prone to relying on shallow heuristics and highlight how they rediscover previously human-identified dataset biases; (b) we demonstrate that we do not need to explicitly know or model dataset biases to train more robust models that generalize better to out-of-distribution examples; (c) we discuss the design choices for weak learners and show trade-offs between higher out-of-distribution performance at the expense of the in-distribution performance.
2 RELATED WORK
Many studies have reported dataset biases in various settings. Examples include visual question answering (Jabri et al., 2016; Zhang et al., 2016), story completion (Schwartz et al., 2017), and reading comprehension (Kaushik & Lipton, 2018; Chen et al., 2016). Towards better evaluation methods, researchers have proposed to collect “challenge” datasets that account for surface correlations a model might adopt (Jia & Liang, 2017; McCoy et al., 2019b). Standard models without specific robust training methods often drop in performance when evaluated on these challenge sets.
While these works have focused on data collection, another approach is to develop methods allowing models to ignore dataset biases during training. Several active areas of research tackle this challenge by adversarial training (Belinkov et al., 2019a;b; Stacey et al., 2020), example forgetting (Yaghoobzadeh et al., 2019) and dynamic loss adjustment (Cadène et al., 2019). Previous work (He et al., 2019; Clark et al., 2019; Mahabadi et al., 2020) has shown the effectiveness of product of experts to train un-biased models. In our work, we show that we do not need to explicitly model biases to apply these de-biasing methods and can use a more general setup than previously presented.
Orthogonal to these evaluation and optimization efforts, data augmentation has attracted interest as a way to reduce model biases by explicitly modifying the dataset distribution (Min et al., 2020; Belinkov & Bisk, 2018), either by leveraging human knowledge about dataset biases such as swapping male and female entities (Zhao et al., 2018) or by developing dynamic data collection and benchmarking (Nie et al., 2020). Our work is mostly orthogonal to these efforts and alleviates the need for a human-in-the-loop setup which is common to such data-augmentation approaches.
Large pre-trained language models have contributed to improved out-of-distribution generalization (Hendrycks et al., 2020). However, in practice, that remains a challenge in natural language processing (Linzen, 2020; Yogatama et al., 2019) and our work aims at out-of-distribution robustness without significantly compromising in-distribution performance.
Finally, in parallel, the work of Utama et al. (2020) presents a related de-biasing method leveraging the mistakes of weakened models without the need to explicitly model dataset biases. Our approach is different in several ways; in particular, we advocate for using a limited-capacity weak learner while Utama et al. (2020) use the same architecture as the robust model trained on a few thousand examples. We investigate the trade-off between the weak learner's capacity and the resulting performance, as well as the resulting few-shot learning regime in the limit of a high-capacity weak model.
3 METHOD
3.1 OVERVIEW
Our approach utilizes product of experts (Hinton, 2002) to factor dataset biases out of a learned model. We have access to a training set (xi, yi)1≤i≤N where each example xi has a label yi among K classes. We use two models fW (weak) and fM (main) which produce respective logits vectors w and m ∈ RK . The product of experts ensemble of fW and fM produces logits vector e
∀ 1 ≤ j ≤ K, e_j = w_j + m_j (1)
Equivalently, we have softmax(e) ∝ softmax(w) ⊙ softmax(m), where ⊙ denotes element-wise multiplication.
Our training approach can be decomposed in two successive stages: (a) training the weak learner fW with a standard cross-entropy loss (CE) and (b) training a main (robust) model fM via product of experts (PoE) to learn from the errors of the weak learner. The core intuition of this method is to encourage the robust model to learn to make predictions that take into account the weak learner’s mistakes.
We do not make any assumption on the biases present (or not) in the dataset and rely on letting the weak learner discover them during training. Moreover, in contrast to prior work (Mahabadi et al., 2020; He et al., 2019; Clark et al., 2019) in which the weak learner had a hand-engineered bias-specific structure, our approach does not make any specific assumption on the weak learner such as its architecture, capacity, pre-training, etc. The weak learner fW is trained with standard cross-entropy.
The final goal is to produce the main model fM. After training, the weak model fW is frozen and used only as part of the product of experts. Since the weak model is frozen, only the main model fM receives gradient updates during training. This is similar to He et al. (2019) and Clark et al. (2019) but differs from Mahabadi et al. (2020), who train both weak and main models jointly. For convenience, we refer to the cross-entropy of the prediction e of Equation 1 as the PoE cross-entropy.
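As an illustration, a minimal PyTorch-style sketch of the PoE cross-entropy described above (a sketch under these assumptions, not the authors' released code; variable names are illustrative):

import torch
import torch.nn.functional as F

def poe_cross_entropy(main_logits, weak_logits, labels):
    # Product of experts in logit space (Equation 1): e = w + m.
    # The weak logits are detached, so only the main model f_M is updated.
    ensemble_logits = main_logits + weak_logits.detach()
    return F.cross_entropy(ensemble_logits, labels)

# Stage (a): fine-tune the weak learner f_W with standard cross-entropy, then freeze it.
# Stage (b): train the main model f_M by minimizing poe_cross_entropy on each batch.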
3.2 ANALYSIS: THE ROBUST MODEL LEARNS FROM THE ERRORS OF THE WEAK LEARNER
To better explore the impact of PoE training with a weak learner, we consider the special case of binary classification with logistic regression. Here w and m are scalar logits and the softmax becomes a sigmoid. The loss of the product of experts for a single positive example is:
L_PoE,binary = −m − w + log(1 + exp(m + w)) (2)
Logit w is a fixed value since the weak learner is frozen. We also define the entropy of the weak learner, H_w = −p log(p) − (1 − p) log(1 − p) where p = σ(w), as our measure of certainty. Different values of w from the weak learner induce different gradient updates in the main model. Figure 1a shows the gradient update of the main model logit m. Each of the three curves corresponds to a different value of w from the weak model.
• Weak Model is Certain / Incorrect: the first case (in blue) corresponds to low values of w. The entropy is low and the loss of the weak model is high. The main model receives gradients even when it is classifying the point correctly (e.g., m ≈ 5), which encourages m to compensate for the weak model’s mistake.
• Weak Model is Uncertain: the second case (in red) corresponds to w = 0, which means the weak model’s entropy is high (uniform probability over all classes). In this case, product of experts is equal to the main model, and the gradient is equal to the one obtained with cross-entropy.
• Weak Model is Certain / Correct: the third case (in green) corresponds to high values of w. The entropy is low and the loss of the weak model is low. In this case, m’s gradients are “cropped” early on and the main model receives smaller gradients on average. When w is extremely high, m receives no gradient (and the current example is simply ignored).
Put another way, the logit values for which m receives gradients are shifted according to the correctness and certainty of the weak model. Figure 1b shows the concentration of training examples of MNLI (Williams et al., 2018) projected onto the 2D coordinates (correctness, certainty) from a trained weak learner (described in Section 4.1). We observe that there are many examples in each of the three cases. More crucially, we verify that the group certain / incorrect is not empty, since the examples in this group encourage the model to not rely on the dataset biases.
(a) Gradient update of m for different values of w on binary classification. (b) 2D projection of MNLI examples from a trained weak learner. Colors indicate the concentration and are in log scale.
Figure 1: The analysis of the gradients reveals three regimes where the gradient is shifted by the certainty and correctness of the weak learner. These three regions are present in real datasets such as MNLI.
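The three regimes can also be reproduced numerically from Equation 2; a small sketch follows (the specific values of w are illustrative choices, not values from the paper):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def poe_grad_wrt_m(m, w):
    # Derivative of Equation 2 (single positive example) with respect to m:
    # dL/dm = sigmoid(m + w) - 1
    return sigmoid(m + w) - 1.0

m = np.linspace(-5.0, 5.0, 5)
for w, regime in [(-8.0, "certain / incorrect"),
                  (0.0, "uncertain (reduces to standard cross-entropy)"),
                  (8.0, "certain / correct")]:
    print(regime, np.round(poe_grad_wrt_m(m, w), 3))
# With w = -8 the main model keeps receiving sizeable gradients even when m is
# already large and positive; with w = +8 the gradients shrink toward zero and
# the example is effectively ignored.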
3.3 CONNECTION TO DISTILLATION
Our product of experts setup bears similarities with knowledge distillation (Hinton et al., 2015) where a student network is trained to mimic the behavior of a frozen teacher model.
In our PoE training, we encourage the main model fM (analog to the student network) to learn from the errors of the weak model fW (analog to the teacher network): instead of mimicking, it learns an orthogonal behavior when the teacher is incorrect. To recognize errors from the weak learner, we use the gold labels which alleviates the need to use pseudo-labelling or data augmentation as it is commonly done in distillation setups (Furlanello et al., 2018; Xie et al., 2020).
Similarly to Hinton et al. (2015), our final loss is a linear combination of the original cross-entropy loss (CE) and the PoE cross-entropy. We refer to this multi-loss objective as PoE + CE.
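One possible way to express this multi-loss objective, extending the earlier PoE sketch (the default weight follows the values reported in Appendix A.1; otherwise it is an assumption):

import torch.nn.functional as F

def poe_plus_ce(main_logits, weak_logits, labels, alpha=0.3):
    # PoE cross-entropy (weight 1.0) plus standard cross-entropy on the
    # main model's own predictions (weight alpha).
    poe_ce = F.cross_entropy(main_logits + weak_logits.detach(), labels)
    ce = F.cross_entropy(main_logits, labels)
    return poe_ce + alpha * ce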
4 EXPERIMENTS
We consider several different experimental settings that explore the use of a weak learner to isolate and train against dataset biases. All the experiments are conducted on English datasets, and follow the standard setup for BERT training. Our main model is BERT-base (Devlin et al., 2019) with 110M parameters. Except when indicated otherwise, our weak learner is a significantly smaller pre-trained masked language model known as TinyBERT (Turc et al., 2019) with 4M parameters (2 layers, hidden size of 128). The weak learner is fine-tuned on exactly the same data as our main model. For instance, when trained on MNLI, it gets a 67% accuracy on the matched development set (compared to 84% for BERT-base).
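For reference, one way to instantiate the two models with the Transformers library (a sketch; the checkpoint identifier for the 2-layer, 128-hidden model of Turc et al. (2019) is an assumption and should be checked against the model hub):

from transformers import AutoModelForSequenceClassification, AutoTokenizer

weak_name = "google/bert_uncased_L-2_H-128_A-2"   # ~4M-parameter weak learner (assumed ID)
main_name = "bert-base-uncased"                    # 110M-parameter main model

weak_model = AutoModelForSequenceClassification.from_pretrained(weak_name, num_labels=3)
main_model = AutoModelForSequenceClassification.from_pretrained(main_name, num_labels=3)
# Both checkpoints share the uncased BERT WordPiece vocabulary, so a single
# tokenizer can be used for the two models.
tokenizer = AutoTokenizer.from_pretrained(main_name)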
Part of our discussion relies on natural language inference, which has been widely studied in this area. The classification task is to determine whether a hypothesis statement is true (entailment), false (contradiction) or undetermined (neutral) given a premise statement. MNLI (Williams et al., 2018) is the canonical large-scale English dataset to study this problem with 433K labeled examples. For evaluation, it features matched sets (examples from domains encountered in training) and mismatched sets (domains not seen during training).
Experiments first examine qualitatively the spurious correlations picked up by the method. We then verify the validity of the method on a synthetic experimental setup. Finally, we verify the impact of our method by evaluating robust models on several out-of-distribution sets and discuss the choice of the weak learner.
4.1 WEAK LEARNERS REDISCOVER PREVIOUSLY REPORTED DATASET BIASES
Most approaches for circumventing dataset bias require modeling the bias explicitly, for example using a model limited to only the hypothesis in NLI (Gururangan et al., 2018). These approaches are effective, but require isolating specific biases present in a dataset. Since this process is costly, time-consuming and error-prone, it is unrealistic to expect such analysis for all new datasets. In contrast, we hypothesize that weak learners might operate like rapid surface learners (Zellers et al., 2019), picking up on dataset biases without specific signal or input curation and being rather certain of their biased errors (i.e., high certainty on the biased prediction errors).
We first investigate whether our weak learner re-discovers two well-known dataset biases reported on NLI benchmarks: (a) the presence of negative word in the hypothesis is highly correlated with the contradiction label (Poliak et al., 2018; Gururangan et al., 2018), (b) high word overlap between the premise and the hypothesis is highly correlated with the entailment label (McCoy et al., 2019b).
To this aim, we fine-tune a weak learner on MNLI (Williams et al., 2018). Hyper-parameters can be found in Appendix A.1. We extract and manually categorize 1,000 training examples wrongly predicted by the weak learner (with a high loss and a high certainty). Table 1 breaks them down per category. Half of these incorrect examples are wrongly predicted as Contradiction, and almost all of these contain a negation1 in the hypothesis. The other half of the examples are incorrectly predicted as Entailment, a majority of these presenting a high lexical overlap between the premise and the hypothesis (5 or more words in common). The weak learner thus appears to predict with high certainty a Contradiction label whenever the hypothesis contains a negative word, and with high certainty an Entailment label whenever there is a strong lexical overlap between premise and hypothesis. Table 6 in Appendix A.3 presents qualitative examples of dataset biases identified by the fine-tuned weak learner.
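A simple sketch of the two heuristics used for this manual categorization (whitespace tokenization is a simplification; the negation list is the one given in footnote 1 below):

NEGATION_WORDS = {"no", "not", "none", "nothing", "never", "aren't", "isn't",
                  "weren't", "neither", "don't", "didn't", "doesn't",
                  "cannot", "hasn't", "won't"}

def has_negation(hypothesis):
    return any(tok in NEGATION_WORDS for tok in hypothesis.lower().split())

def word_overlap(premise, hypothesis):
    return len(set(premise.lower().split()) & set(hypothesis.lower().split()))

def bias_category(premise, hypothesis, predicted_label):
    # Categorize a high-loss / high-certainty error of the weak learner.
    if predicted_label == "contradiction" and has_negation(hypothesis):
        return "negation bias"
    if predicted_label == "entailment" and word_overlap(premise, hypothesis) >= 5:
        return "lexical-overlap bias"
    return "other"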
This analysis is based on a set of biases referenced in the literature and does not exclude the possibility of other biases being detected by the weak learner. For instance, during this investigation we notice that the presence of “negative sentiment” words (for instance: dull, boring) in the hypothesis appears to be often indicative of a Contradiction prediction. We leave further investigation on such behaviors to future work.
4.2 SYNTHETIC EXPERIMENT: CHEATING FEATURE
We consider a controlled synthetic experiment described in He et al. (2019); Clark et al. (2019) that simulates bias. We modify 20,000 MNLI training examples by injecting a cheating feature which encodes an example’s label with probability p_cheat and a random label selected among the two incorrect labels otherwise. For simplicity, we consider the first 20,000 examples. On the evaluation sets, the cheating feature is random and does not convey any useful information. In the present experiment, the cheating feature takes the form of a prefix added to the hypothesis (“0” for Contradiction, “1” for Entailment, “2” for Neutral). We train the weak and main models on these 20,000 examples and evaluate their accuracy on the matched development set.2 We expect a biased model to rely mostly on the cheating feature, thereby leading to poor evaluation performance.
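A sketch of the injection procedure as described above (the label-to-prefix mapping follows the text; the random source is an implementation detail):

import random

LABELS = ["contradiction", "entailment", "neutral"]   # prefixes "0", "1", "2"

def inject_cheating_feature(hypothesis, label, p_cheat, rng=random):
    # With probability p_cheat the prefix encodes the true label; otherwise it
    # encodes a label drawn uniformly from the two incorrect ones.
    leaked = label if rng.random() < p_cheat else rng.choice(
        [l for l in LABELS if l != label])
    return "{} {}".format(LABELS.index(leaked), hypothesis)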
Figure 2 shows the results. As the proportion of examples containing the bias increases, the evaluation accuracy of the weak learner quickly decreases to reach 0% when p_cheat = 0.9. The weak
1We use the following list of negation words: no, not, none, nothing, never, aren’t, isn’t, weren’t, neither, don’t, didn’t, doesn’t, cannot, hasn’t, won’t.
2We observe similar trends on the mismatched development set.
learner detects the cheating feature during training and is mainly relying on the synthetic bias which is not directly indicative of the gold label.
Both He et al. (2019) and Clark et al. (2019) protect against the reliance on this cheating feature by ensembling the main model with a biased model that only uses the hypothesis (or its first token). We instead train the main model in the product of experts setting, relying on the weak learner to identify the bias. Figure 2 shows that when a majority of the training examples contain the bias (p_cheat ≥ 0.6), the performance of the model trained with cross-entropy drops faster than the one trained in PoE. PoE training leads to a more robust model by encouraging it to learn from the mistakes of the weak learner. As p_cheat comes close to 1, the model’s training enters a “few-shot regime” where there are very few incorrectly predicted biased examples to learn from (examples where following the biased heuristic leads to a wrong answer) and the performance of the model trained with PoE drops as well.
4.3 ADVERSARIAL DATASETS: NLI AND QA
NLI The HANS adversarial dataset (McCoy et al., 2019b) was constructed by writing templates to generate examples with a high premise/hypothesis word overlap to attack models that rely on this bias. In one template the word overlap generates entailed premise/hypothesis pairs (heuristic-entailed examples), whereas in another the examples contradict the heuristic (non-heuristic-entailed). The dataset contains 30K evaluation examples equally split between both.
Table 2 shows that the weak learner exhibits medium performance on the in-distribution sets (MNLI) and that on out-of-distribution evaluation (HANS), it relies heavily on the word overlap heuristic. Product of experts training is effective at reducing the reliance on biases and leads to significant gains on the heuristic-non-entailed examples when compared to a model trained with standard cross-entropy, leading to an improvement of +24%.
The small degradation on in-distribution data is likely because product of experts training does not specialize for in-distribution performance but focuses on the weak model errors (He et al., 2019). The linear combination of the original cross-entropy loss and the product of experts loss (PoE + CE) aims at counteracting this effect. This multi-loss objective trades off out-of-distribution generalization for in-distribution accuracy. A similar trade-off between accuracy and robustness has been reported in adversarial training (Zhang et al., 2019; Tsipras et al., 2019). In Appendix A.6, we detail the influence of this multi-loss objective.
We also evaluate our method on MNLI’s hard test set (Gururangan et al., 2018) which is expected to be less biased than MNLI’s standard split. These examples are selected such that a hypothesis-only model cannot predict the label accurately. Table 2 shows the results of this experiment. Our method surpasses the performance of a PoE model trained with a hypothesis-only biased learner. Results on the mismatched set are given in Appendix A.4.
QA Question answering models often rely on heuristics such as type and keyword-matching (Weissenborn et al., 2017) that can do well on benchmarks like SQuAD (Rajpurkar et al., 2016). We evaluate on the Adversarial SQuAD dataset (Jia & Liang, 2017) built by appending distractor sentences to the passages in the original SQuAD. Distractors are constructed such that they look like a plausible answer to the question while not changing the correct answer or misleading humans.
Results on SQuAD v1.1 and Adversarial SQuAD are listed in Table 3. The weak learner alone has low performance both on in-distribution and adversarial sets. PoE training improves the adversarial performance (+1% on AddSent) while sacrificing some in-distribution performance. A multi-loss
optimization closes the gap and even boosts adversarial robustness (+3% on AddSent and +2% on AddOneSent). In contrast to our experiments on MNLI/HANS, multi-loss training thus leads here to better performance on out-of-distribution as well. We hypothesize that in this dataset, the weak learner picks up more useful information and removing it entirely might be non-optimal. Multi-loss in this case allows us to strike a balance between learning from, or removing, the weak learner.
5 ANALYSIS
5.1 REDUCING BIAS: CORRELATION ANALYSIS
To investigate the behavior of the ensemble of the weak and main learner, we compute the Pearson correlation between the element-wise loss of the weak (biased) learner and the loss of the trained models following Mahabadi et al. (2020). A correlation of 1 indicates a linear relation between the per-example losses (the two learners make the same mistakes), and 0 indicates the absence of linear correlation (models’ mistakes are uncorrelated). Figure 3 shows that models trained with a linear combination of the PoE cross-entropy and the standard cross-entropy have a higher correlation than when trained solely with PoE. This confirms that PoE training is effective at reducing biases uncovered by the weak learner and re-emphasizes that adding standard cross-entropy leads to a trade-off between the two.
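A short sketch of this diagnostic (the per-example losses are assumed to have been collected beforehand on the same set of examples):

import numpy as np

def loss_correlation(weak_losses, main_losses):
    # Pearson correlation between the per-example losses of the frozen weak
    # (biased) learner and of a trained main model.
    return float(np.corrcoef(np.asarray(weak_losses), np.asarray(main_losses))[0, 1])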
5.2 HOW WEAK DO THE WEAK LEARNERS NEED TO BE?
We consider parameter size as a measure of the capacity or ”weakness” of the weak learner. We fine-tune different sizes of BERT (Turc et al., 2019) ranging from 4.4 to 41.4 million parameters and use these as weak models in a PoE setting. Figure 4b shows the accuracies on MNLI and HANS of the weak learners and main models trained with various weak learners.
Varying the capacity of the weak models affects both in-distribution and out-of-distribution performance. Out-of-distribution performance of the main model increases as the weak model becomes stronger (more parameters) up to a certain point, while in-distribution performance drops slightly at first and then more strongly. When trained jointly with the larger MediumBERT weak learner (41.4 million parameters), the main model gets 97% accuracy on HANS’s heuristic-non-entailed set but very low accuracy on the in-distribution examples (28% on MNLI and 3% on the heuristic-entailed examples).
As a weak model grows in capacity, it becomes a better learner. The average loss decreases and the model becomes more confident in its predictions. As a result, the group
certain / correct becomes more populated and the main model receives on average a smaller gradient magnitude per input. On the contrary, the certain / incorrect group (which generally align with out-of-distribution samples and induce higher magnitude gradient updates, encouraging generalization at the expense of in-distribution performance) becomes less populated. These results corroborate and complement insights from Yaghoobzadeh et al. (2019). This is also reminiscent of findings from Vodrahalli et al. (2018) and Shrivastava et al. (2016): not all training samples contribute equally towards learning and in some cases, a carefully selected subset of the training set is sufficient to match (or surpass) the performance on the whole set.
5.3 DE-BIASING IS STILL EFFECTIVE WHEN DATASET BIASES ARE UNKNOWN OR HARD TO DETECT
While it is difficult to enumerate all sources of bias, we focus in this work on superficial cues that correlate with the label in the training set but do not transfer. These superficial cues correlate with what can be captured by a weak model. For instance, Conneau et al. (2018) suggest that word presence can be detected with very shallow networks (linear classifier on top of FastText bag of words) as they show very high accuracy for Word Content, the probing task of detecting which of the 1,000 target words is present in a given sentence.
To verify that a weak model is still effective with unknown or hard to detect biases, we consider an example where the bias is only present in a small portion of the training. We remove from the MNLI training set all the examples (192K) that exhibit one of the two biases detailed in Section 4.1: high word overlap between premise and hypothesis with entailment label; and negation in the hypothesis with contradiction label. We are left with 268K training examples.
We apply our de-biasing method with these examples as our training set. For comparison, we train a main model with standard cross-entropy on the same subset of selected examples. Our results are shown in Table 4 and confirm on HANS that our de-biasing method is still effective even when the bias is hard to detect. Note that the accuracies on MNLI cannot be directly compared to the results in Table 2: the class imbalance in the selected subset of examples leads to a harder optimization problem, explaining the difference in performance.
We present complementary analyses in Appendix. To further show the effectiveness of our method, we included in Appendix A.2 an additional experiment on facts verification (Thorne et al., 2018; Schuster et al., 2019). In Appendix A.5, we evaluate the ability of our method to generalize to other domains that do not share the same annotation artifacts. We highlight the trade-off between in-distribution performance and out-of-distribution robustness by quantifying the influence of the multi-loss objective in Appendix A.6 and draw a connection between our 3 groups of examples and recently introduced Data Maps (Swayamdipta et al., 2020).
6 CONCLUSION
We have presented an effective method for training models robust to dataset biases. Leveraging a weak learner with limited capacity and a modified product of experts training setup, we show that dataset biases do not need to be explicitly known or modeled to be able to train models that can generalize significantly better to out-of-distribution examples. We discuss the design choices for such a weak learner and investigate how using higher-capacity learners leads to higher out-of-distribution performance and a trade-off with in-distribution performance. We believe that such approaches capable of automatically identifying and mitigating dataset bias will be essential tools for future bias-discovery and mitigation techniques.
ACKNOWLEDGEMENTS
This research was supported by the ISRAEL SCIENCE FOUNDATION (grant No. 448/20).
A APPENDIX
A.1 EXPERIMENTAL SETUP AND FINE-TUNING HYPER-PARAMETERS
Our code is based on the Hugging Face Transformers library (Wolf et al., 2019). All of our experiments are conducted on single 16GB V100 using half-precision training for speed.
NLI We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 3e−5, and a batch size of 32. The learning rate is linearly increased for 2000 warm-up steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8 and add a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0. Because of the high variance on HANS (McCoy et al., 2019a), we average results over 6 runs with different seeds.
SQuAD We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 3e−5, and a batch size of 16. The learning rate is linearly increased for 1500 warm-up steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8 and add a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0.
Fact verification We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 2e−5, and a batch size of 32. The learning rate is linearly increased for 1000 warm-up steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8 and add a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0. We average results over 6 runs with different seeds.
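As an illustration, the NLI optimization setup above could be realized as follows (a sketch; the total number of training steps is a placeholder that should match 3 epochs over the chosen dataset, and AdamW is one way to implement Adam with decoupled weight decay):

import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer(model, lr=3e-5, warmup_steps=2000, total_steps=40_000):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr,
                                  betas=(0.9, 0.999), eps=1e-8,
                                  weight_decay=0.1)
    # Linear warm-up followed by linear decay to 0, as described above.
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps)
    return optimizer, scheduler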
A.2 ADDITIONAL EXPERIMENT: FACT VERIFICATION
Following Mahabadi et al. (2020), we also experiment on a fact verification dataset. The FEVER dataset (Thorne et al., 2018) contains claim-evidence pairs generated from Wikipedia. Schuster et al. (2019) collected a new evaluation set for the FEVER dataset to avoid the biases observed in the claims of the benchmark. The authors symmetrically augment the claim-evidence pairs of the FEVER evaluation to balance the detected artifacts such that solely relying on statistical cues in claims would lead to a random guess. The collected dataset is challenging, and the performance of the models relying on biases evaluated on this dataset drops significantly.
Our results are in Table 5. Our method is again effective at removing potential biases present in the training set and shows strong improvements on the symmetric test set.
A.3 SOME EXAMPLES OF DATASET BIASES DETECTED BY THE WEAK LEARNER
In Table 6, we show a few qualitative examples of dataset biases detected by a fine-tuned weak learner.
A.4 DETAILED RESULTS ON HANS AND MNLI MISMATCHED
We report the detailed results per heuristics on HANS in Table 7 and the results on the mismatched hard test set of MNLI in Table 8.
A.5 NLI: TRANSFER EXPERIMENTS
As highlighted in Section 4.3, our method is effective at improving robustness to adversarial settings that specifically target dataset biases. We further evaluate how well our method improves
generalization to domains that do not share the same annotation artifacts. Mahabadi et al. (2020) highlighted that product of experts is effective at improving generalization to other NLI benchmarks when trained on SNLI (Bowman et al., 2015). We follow the same setup: notably, we perform a sweep on the weight of the cross-entropy in our multi-loss objective and perform model selection on the development set of each dataset. We evaluate on SciTail (Khot et al., 2018), GLUE benchmark’s diagnostic test (Wang et al., 2018), AddOneRTE (AddOne) (Pavlick & Callison-Burch, 2016), Definite Pronoun Resolution (DPR) (Rahman & Ng, 2012), FrameNet+ (FN+) (Pavlick et al., 2015) and Semantic Proto-Roles (SPR) (Reisinger et al., 2015). We also evaluate on the hard SNLI test set (Gururangan et al., 2018), which is a set that a hypothesis-only model cannot solve easily.
Table 9 shows the results. Without explicitly modeling the bias in the dataset, our method matches or surpasses the generalization performance previously reported, with the exception of GLUE’s diagnostic dataset and SNLI’s hard test set. Moreover, we notice that the multi-loss objective sometimes leads to a stronger performance, which suggests that in some cases, it can be sub-optimal to completely remove the information picked up by the weak learner. We hypothesize that the multi-loss objective balances the emphasis on domain-specific features (favoring in-distribution performance) and their removal through de-biasing (benefiting domain transfer performance). This might explain why we do not observe improvements on SNLI’s hard test set and GLUE’s diagnostic set in the PoE setting.
A.6 INFLUENCE OF MULTI-LOSS OBJECTIVE
Our best performing setup features a linear combination of the PoE cross-entropy and the standard cross-entropy. We fix the weight of the PoE cross-entropy to 1.0 and modulate the linear coefficient α of the standard cross-entropy. Figure 5 shows the influence of this multi-loss objective. As the weight of the standard cross-entropy increases, the in-distribution performance increases while the out-of-distribution performance decreases. This effect is particularly noticeable on MNLI/HANS (see Figure 5a). Surprisingly, this trade-off is less pronounced on SQuAD/Adversarial SQuAD: the F1 development score increases from 85.43 for α = 0.1 to 88.14 for α = 1.9 while decreasing from 56.67 to 55.06 on AddSent.
Our multi-loss objective is similar to the annealing mechanism proposed in (Utama et al., 2020). In fact, as the annealing coefficient decreases, the modified probability distribution of the weak model converges to the uniform distribution. As seen in Section 3.2, when the distribution of the weak model is close to the uniform distribution (high-uncertainty), the gradient of the loss for PoE is equivalent to the gradient of the main model trained without PoE (i.e. the standard cross-entropy). In this work, we consider a straight-forward setup where we linearly combine the two losses throughout the training with fixed coefficients.
A.7 CONNECTION TO DATA MAPS (SWAYAMDIPTA ET AL., 2020)
We hypothesize that our three identified groups (certain / incorrect, certain / correct, and uncertain) overlap with the regions identified by data cartographies (Swayamdipta et al., 2020). The authors project each training example onto two coordinates: confidence and variability. Confidence is the mean of the gold-label probabilities predicted for the example across training epochs, and variability is their standard deviation. Confidence is closely related to the loss (intuitively, a high-confidence example is “easier” to predict). Variability is connected to uncertainty: the probability of the true class for high-variability examples fluctuates during training, reflecting the model’s indecisiveness.
Our most interesting group (certain / incorrect) bears similarity to the ambiguous region (the model is indecisive about these instances, frequently changing its prediction across training epochs) and the hard-to-learn region (which contains a significant proportion of mislabeled examples). The authors observe that the examples in these two regions play an important role in out-of-distribution generalization. Our findings point in the same direction: the weak model encourages the robust model to pay closer attention to these certain / incorrect examples during training.
To verify this claim, we follow their procedure and log the training dynamics of our weak learner trained on MNLI. Figure 6 (left) shows the Data Map obtained with our weak learner. For each of our 3 groups, we select the 10,000 examples that most strongly exhibit the characteristic of the group; for instance, for uncertain, we take the 10,000 examples with the highest entropy. In Figure 6 (right) we plot these 30,000 examples onto the Data Map. We observe that our certain / correct examples lie in the easy-to-learn region, our certain / incorrect examples lie in the hard-to-learn region, and our uncertain examples are mostly in the ambiguous region.
Conversely, we verify that the examples in the ambiguous region are mostly in our uncertain group, the examples from the hard-to-learn region are mostly in uncertain and certain / incorrect, and the examples from the easy-to-learn region are mainly in certain / correct and uncertain. | 1. What is the research problem addressed in the paper, and what is the proposed solution?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its novelty and ability to detect data biases?
3. Are there any concerns regarding the focus on natural language processing, and could the method be applied to other fields?
4. What are some suggestions for improving the writing in sections 3, 4, and 5, especially regarding jargon and clarity?
5. What are the limitations of the proposed approach, and how can they be addressed? | Review | Review
Reason for score
The research problem is critical. The solution is appropriate and novel. The claims are validated. The experiments are interesting. However, the writing in sections 3, 4, and 5 should be improved. If so, I would be willing to raise my score.
My background
My research is focused on detecting and avoiding data biases (or spurious correlations) learned by deep neural networks. This is the exact scope of this paper. However, my area of expertise is computer vision and multimodal text-image, not natural language processing.
Summary
Context: The paper focuses on automatically detecting data biases learned by natural language processing models and overcoming them using a learning strategy.
Problem: The authors identify and tackle issues of state-of-the-art methods:
they are required to already know about a certain bias to be overcome.
Solution and novelty: The proposed method consists in 1) training a weak model that aims at detecting biases 2) overcoming these biases by training a main model using a product of experts (Hinton, 2002) with the predictions of the fixed weak model.
Claim:
A weak model can be used to discover data biases
The proposed method produces a main model that generalize better to out-of-distribution examples
What I liked the most
meta-problem of automatically detecting and overcoming biases in neural networks is critical
well contextualized
relevant issues of state of the art have been identified
intro and related work are easy to read and understand
novel, simple and interesting method to tackle them
interesting figures
experiments are interesting and well chosen
What could be improved
Abstract, introduction and 2. Related work
Your research problem and solution are general and can be applied to many fields. Is there a specific reason why you decided to focus on NLP only?
You could improve the impact of your approach by citing papers that tackle the same problem with similar solutions from different fields. "Clark et al. 2019 Don’t take the easy way out: Ensemble-based methods for avoiding known dataset biases" that you already cite ran some experiments in multiple fields (NLP, VQA, etc.). "Cadene et al. Rubi: Reducing unimodal biases for visual question answering (NeurIPS2019)" in VQA could also be cited.
Proposed Method
Next to Eq. 1: Why is an element-wise sum equivalent to an element-wise multiplication after softmax? It seems wrong to me.
It could be useful to have a general definition of the PoE loss (instead of just an example of binary cross entropy in Eq2)
See 4.3, you should define PoE+CE here.
Experiments
Overall, I think it is important that you improve the writing for this section and reduce jargon. It is really difficult to understand for readers that are not familiar with the datasets on which you perform your study. Also it is really difficult to understand which dataset is "in-distribution" or "out-of-distribution".
You don't define "development matched accuracy" before using it. 4.1
You use too many footnotes that could be included in the text. 4.2
You don't define "CE" (even in the caption of Figure2).
In Table 2, you could reduce jargon by using Weak and Main instead of "W" and "M".
In Table 2, you don't define "An." even in the caption. 4.3
I don't understand why "PoE+CE" is better on "Hard"
I don't like that you propose to use "PoE+CE" as your method of choice "to counteract these effects" without defining it in section 3. To be clear, I still don't understand what is the learning method that you propose PoE or PoE+CE?
Analysis 5.2
Title is on two lines instead of one
I don't understand "When trained jointly with the larger MediumBERT weak learner......" How many parameters? Don't expect your reader to look at Figure 4 to obtain this information.
Conclusion
Could you add a discussion about the limitations of your approach? In particular: How to choose the number of parameters of your weak learner? What to choose between PoE and PoE+CE? And most critically, if you don't assess the type of biases and the amount of biases included in the dataset, how to be sure that your method will have a beneficial impact? Then, if you need to assess the type of biases, using another method that specifically targets them could be more efficient. |
ICLR | Title
Learning from others' mistakes: Avoiding dataset biases without modeling them
Abstract
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended underlying task. Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available. We consider cases where the bias issues may not be explicitly identified, and show a method for training models that learn to ignore these problematic correlations. Our approach relies on the observation that models with limited capacity primarily learn to exploit biases in the dataset. We can leverage the errors of such limited capacity models to train a more robust model in a product of experts, thus bypassing the need to hand-craft a biased model. We show the effectiveness of this method to retain improvements in out-of-distribution settings even if no particular bias is targeted by the biased model.
1 INTRODUCTION
The natural language processing community has made tremendous progress in using pre-trained language models to improve predictive accuracy (Devlin et al., 2019; Raffel et al., 2019). Models have now surpassed human performance on language understanding benchmarks such as SuperGLUE (Wang et al., 2019). However, studies have shown that these results are partially driven by these models detecting superficial cues that correlate well with labels but which may not be useful for the intended underlying task (Jia & Liang, 2017; Schwartz et al., 2017). This brittleness leads to overestimating model performance on the artificially constructed tasks and poor performance in out-of-distribution or adversarial examples.
A well-studied example of this phenomenon is the natural language inference dataset MNLI (Williams et al., 2018). The generation of this dataset led to spurious surface patterns that correlate noticeably with the labels. Poliak et al. (2018) highlight that negation words (“not”, “no”, etc.) are often associated with the contradiction label. Gururangan et al. (2018), Poliak et al. (2018), and Tsuchiya (2018) show that a model trained solely on the hypothesis, completely ignoring the intended signal, reaches strong performance. We refer to these surface patterns as dataset biases since the conditional distribution of the labels given such biased features is likely to change in examples outside the training data distribution (as formalized by He et al. (2019)).
A major challenge in representation learning for NLP is to produce models that are robust to these dataset biases. Previous work (He et al., 2019; Clark et al., 2019; Mahabadi et al., 2020) has targeted removing dataset biases by explicitly factoring them out of models. These studies explicitly construct a biased model, for instance, a hypothesis-only model for NLI experiments, and use it to improve the robustness of the main model. The core idea is to encourage the main model to find a different explanation where the biased model is wrong. During training, products-of-experts ensembling (Hinton, 2002) is used to factor out the biased model.
While these works show promising results, the assumption of knowledge of the underlying dataset bias is quite restrictive. Finding dataset biases in established datasets is a costly and time-consuming process, and may require access to private details about the annotation procedure, while actively reducing surface correlations in the collection process of new datasets is challenging given the number of potential biases (Zellers et al., 2019; Sakaguchi et al., 2020).
∗Supported by the Viterbi Fellowship in the Center for Computer Engineering at the Technion
In this work, we explore methods for learning from biased datasets which do not require such an explicit formulation of the dataset biases. We first show how a model with limited capacity, which we call a weak learner, trained with a standard cross-entropy loss learns to exploit biases in the dataset. We then investigate the biases on which this weak learner relies and show that they match several previously manually identified biases. Based on this observation, we leverage such limited capacity models in a product of experts ensemble to train a more robust model and evaluate our approach in various settings ranging from toy datasets up to large crowd-sourced benchmarks: controlled synthetic bias setup (He et al., 2019; Clark et al., 2019), natural language inference (McCoy et al., 2019b), extractive question answering (Jia & Liang, 2017) and fact verification (Schuster et al., 2019).
Our contributions are the following: (a) we show that weak learners are prone to relying on shallow heuristics and highlight how they rediscover previously human-identified dataset biases; (b) we demonstrate that we do not need to explicitly know or model dataset biases to train more robust models that generalize better to out-of-distribution examples; (c) we discuss the design choices for weak learners and show trade-offs between higher out-of-distribution performance at the expense of the in-distribution performance.
2 RELATED WORK
Many studies have reported dataset biases in various settings. Examples include visual question answering (Jabri et al., 2016; Zhang et al., 2016), story completion (Schwartz et al., 2017), and reading comprehension (Kaushik & Lipton, 2018; Chen et al., 2016). Towards better evaluation methods, researchers have proposed to collect “challenge” datasets that account for surface correlations a model might adopt (Jia & Liang, 2017; McCoy et al., 2019b). Standard models without specific robust training methods often drop in performance when evaluated on these challenge sets.
While these works have focused on data collection, another approach is to develop methods allowing models to ignore dataset biases during training. Several active areas of research tackle this challenge by adversarial training (Belinkov et al., 2019a;b; Stacey et al., 2020), example forgetting (Yaghoobzadeh et al., 2019) and dynamic loss adjustment (Cadène et al., 2019). Previous work (He et al., 2019; Clark et al., 2019; Mahabadi et al., 2020) has shown the effectiveness of product of experts to train un-biased models. In our work, we show that we do not need to explicitly model biases to apply these de-biasing methods and can use a more general setup than previously presented.
Orthogonal to these evaluation and optimization efforts, data augmentation has attracted interest as a way to reduce model biases by explicitly modifying the dataset distribution (Min et al., 2020; Belinkov & Bisk, 2018), either by leveraging human knowledge about dataset biases such as swapping male and female entities (Zhao et al., 2018) or by developing dynamic data collection and benchmarking (Nie et al., 2020). Our work is mostly orthogonal to these efforts and alleviates the need for a human-in-the-loop setup which is common to such data-augmentation approaches.
Large pre-trained language models have contributed to improved out-of-distribution generalization (Hendrycks et al., 2020). However, in practice, that remains a challenge in natural language processing (Linzen, 2020; Yogatama et al., 2019) and our work aims at out-of-distribution robustness without significantly compromising in-distribution performance.
Finally, in parallel, the work of Utama et al. (2020) presents a related de-biasing method leveraging the mistakes of weakened models without the need to explicitly model dataset biases. Our approach is different in several ways; in particular, we advocate for using a limited-capacity weak learner while Utama et al. (2020) use the same architecture as the robust model trained on a few thousand examples. We investigate the trade-off between the weak learner's capacity and the resulting performance, as well as the resulting few-shot learning regime in the limit of a high-capacity weak model.
3 METHOD
3.1 OVERVIEW
Our approach utilizes product of experts (Hinton, 2002) to factor dataset biases out of a learned model. We have access to a training set (xi, yi)1≤i≤N where each example xi has a label yi among K classes. We use two models fW (weak) and fM (main) which produce respective logits vectors w and m ∈ RK . The product of experts ensemble of fW and fM produces logits vector e
∀ 1 ≤ j ≤ K, e_j = w_j + m_j (1)
Equivalently, we have softmax(e) ∝ softmax(w) ⊙ softmax(m), where ⊙ denotes element-wise multiplication.
Our training approach can be decomposed in two successive stages: (a) training the weak learner fW with a standard cross-entropy loss (CE) and (b) training a main (robust) model fM via product of experts (PoE) to learn from the errors of the weak learner. The core intuition of this method is to encourage the robust model to learn to make predictions that take into account the weak learner’s mistakes.
We do not make any assumption on the biases present (or not) in the dataset and rely on letting the weak learner discover them during training. Moreover, in contrast to prior work (Mahabadi et al., 2020; He et al., 2019; Clark et al., 2019) in which the weak learner had a hand-engineered bias-specific structure, our approach does not make any specific assumption on the weak learner such as its architecture, capacity, pre-training, etc. The weak learner fW is trained with standard cross-entropy.
The final goal is to produce the main model fM. After training, the weak model fW is frozen and used only as part of the product of experts. Since the weak model is frozen, only the main model fM receives gradient updates during training. This is similar to He et al. (2019) and Clark et al. (2019) but differs from Mahabadi et al. (2020), who train both weak and main models jointly. For convenience, we refer to the cross-entropy of the prediction e of Equation 1 as the PoE cross-entropy.
3.2 ANALYSIS: THE ROBUST MODEL LEARNS FROM THE ERRORS OF THE WEAK LEARNER
To better explore the impact of PoE training with a weak learner, we consider the special case of binary classification with logistic regression. Here w and m are scalar logits and the softmax becomes a sigmoid. The loss of the product of experts for a single positive example is:
L_PoE,binary = −m − w + log(1 + exp(m + w)) (2)
Logit w is a fixed value since the weak learner is frozen. We also define the entropy of the weak learner, H_w = −p log(p) − (1 − p) log(1 − p) where p = σ(w), as our measure of certainty. Different values of w from the weak learner induce different gradient updates in the main model. Figure 1a shows the gradient update of the main model logit m. Each of the three curves corresponds to a different value of w from the weak model.
• Weak Model is Certain / Incorrect: the first case (in blue) corresponds to low values of w. The entropy is low and the loss of the weak model is high. The main model receives gradients even when it is classifying the point correctly (e.g., m ≈ 5), which encourages m to compensate for the weak model’s mistake.
• Weak Model is Uncertain: the second case (in red) corresponds to w = 0, which means the weak model’s entropy is high (uniform probability over all classes). In this case, product of experts is equal to the main model, and the gradient is equal to the one obtained with cross-entropy.
• Weak Model is Certain / Correct: the third case (in green) corresponds to high values of w. The entropy is low and the loss of the weak model is low. In this case, m’s gradients are “cropped” early on and the main model receives smaller gradients on average. When w is extremely high, m receives no gradient (and the current example is simply ignored).
Put another way, the logit values for which m receives gradients are shifted according to the correctness and certainty of the weak model. Figure 1b shows the concentration of training examples of MNLI (Williams et al., 2018) projected onto the 2D coordinates (correctness, certainty) from a trained weak learner (described in Section 4.1). We observe that there are many examples in each of the three cases. More crucially, we verify that the group certain / incorrect is not empty, since the examples in this group encourage the model to not rely on the dataset biases.
Figure 1: The analysis of the gradients reveals 3 regimes where the gradient is shifted by the certainty and correctness of the weak learner. These 3 regions are present in real datasets such as MNLI. (a) Gradient update of m for different values of w on binary classification. (b) 2D projection of MNLI examples from a trained weak learner. Colors indicate the concentration and are in log scale.
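As a quick numerical illustration of these three regimes, note that the gradient of the binary PoE loss of Equation 2 with respect to m is σ(m + w) − 1 for a positive example; the small script below (illustrative values only) prints its magnitude for a certain/incorrect, an uncertain, and a certain/correct weak logit.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# d L_PoE,binary / d m = sigmoid(m + w) - 1 for a positive example (Equation 2),
# so the magnitude of the update on m depends on the frozen weak logit w.
m = np.linspace(-6.0, 6.0, 7)
for w in (-5.0, 0.0, 5.0):  # certain/incorrect, uncertain, certain/correct
    grad_magnitude = 1.0 - sigmoid(m + w)
    print(f"w={w:+.1f}", np.round(grad_magnitude, 3))
```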
3.3 CONNECTION TO DISTILLATION
Our product of experts setup bears similarities with knowledge distillation (Hinton et al., 2015) where a student network is trained to mimic the behavior of a frozen teacher model.
In our PoE training, we encourage the main model fM (analog to the student network) to learn from the errors of the weak model fW (analog to the teacher network): instead of mimicking, it learns an orthogonal behavior when the teacher is incorrect. To recognize errors from the weak learner, we use the gold labels which alleviates the need to use pseudo-labelling or data augmentation as it is commonly done in distillation setups (Furlanello et al., 2018; Xie et al., 2020).
Similarly to Hinton et al. (2015), our final loss is a linear combination of the original cross-entropy loss (CE) and the PoE cross-entropy. We refer to this multi-loss objective as PoE + CE.
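A sketch of this multi-loss objective is given below; it reuses the poe_cross_entropy helper sketched above, and the default weights follow the values reported in Appendix A.1 (0.3 for CE and 1.0 for the PoE cross-entropy).

```python
import torch.nn.functional as F

def poe_plus_ce(main_logits, weak_logits, labels, ce_weight=0.3, poe_weight=1.0):
    # Linear combination of the standard cross-entropy on the main model and
    # the PoE cross-entropy computed with the frozen weak model.
    ce = F.cross_entropy(main_logits, labels)
    poe = poe_cross_entropy(main_logits, weak_logits, labels)
    return poe_weight * poe + ce_weight * ce
```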
4 EXPERIMENTS
We consider several different experimental settings that explore the use of a weak learner to isolate and train against dataset biases. All the experiments are conducted on English datasets, and follow the standard setup for BERT training. Our main model is BERT-base (Devlin et al., 2019) with 110M parameters. Except when indicated otherwise, our weak learner is a significantly smaller pre-trained masked language model known as TinyBERT (Turc et al., 2019) with 4M parameters (2 layers, hidden size of 128). The weak learner is fine-tuned on exactly the same data as our main model. For instance, when trained on MNLI, it gets a 67% accuracy on the matched development set (compared to 84% for BERT-base).
Part of our discussion relies on natural language inference, which has been widely studied in this area. The classification task is to determine whether a hypothesis statement is true (entailment), false (contradiction) or undetermined (neutral) given a premise statement. MNLI (Williams et al., 2018) is the canonical large-scale English dataset to study this problem, with 433K labeled examples. For evaluation, it features matched sets (examples from domains encountered in training) and mismatched sets (domains not seen during training).
Experiments first examine qualitatively the spurious correlations picked up by the method. We then verify the validity of the method on a synthetic experimental setup. Finally, we verify the impact of our method by evaluating robust models on several out-of-distribution sets and discuss the choice of the weak learner.
4.1 WEAK LEARNERS REDISCOVER PREVIOUSLY REPORTED DATASET BIASES
Most approaches for circumventing dataset bias require modeling the bias explicitly, for example using a model limited to only the hypothesis in NLI (Gururangan et al., 2018). These approaches are effective, but require isolating specific biases present in a dataset. Since this process is costly, time consuming and error-prone, it is unrealistic to expect such analysis for all new datasets. On the contrary, we hypothesize that weak learners might operate like rapid surface learners (Zellers et al., 2019), picking up on dataset biases without specific signal or input curation and being rather certain of their biased errors (high certainty on the biased prediction errors).
We first investigate whether our weak learner re-discovers two well-known dataset biases reported on NLI benchmarks: (a) the presence of a negation word in the hypothesis is highly correlated with the contradiction label (Poliak et al., 2018; Gururangan et al., 2018), and (b) high word overlap between the premise and the hypothesis is highly correlated with the entailment label (McCoy et al., 2019b).
To this aim, we fine-tune a weak learner on MNLI (Williams et al., 2018). Hyper-parameters can be found in Appendix A.1. We extract and manually categorize 1,000 training examples wrongly predicted by the weak learner (with a high loss and a high certainty). Table 1 breaks them down per category. About half of these incorrect examples are wrongly predicted as Contradiction, and almost all of these contain a negation1 in the hypothesis. The other half of the examples are incorrectly predicted as Entailment, a majority of these presenting a high lexical overlap between the premise and the hypothesis (5 or more words in common). The weak learner thus appears to predict with high certainty a Contradiction label whenever the hypothesis contains a negation word, and with high certainty an Entailment label whenever there is a strong lexical overlap between premise and hypothesis. Table 6 in Appendix A.3 presents qualitative examples of dataset biases identified by the fine-tuned weak learner.
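A minimal sketch of the two surface heuristics used in this categorization is shown below; the negation list mirrors the one given in the footnote, the overlap threshold of 5 shared words is the one used above, and the helper names are illustrative.

```python
NEGATION_WORDS = {"no", "not", "none", "nothing", "never", "aren't", "isn't", "weren't",
                  "neither", "don't", "didn't", "doesn't", "cannot", "hasn't", "won't"}

def has_negation(hypothesis):
    # Heuristic (a): a negation word in the hypothesis correlates with Contradiction.
    return any(token.lower() in NEGATION_WORDS for token in hypothesis.split())

def high_lexical_overlap(premise, hypothesis, threshold=5):
    # Heuristic (b): many shared words between premise and hypothesis correlates
    # with Entailment; here "high" means at least `threshold` words in common.
    shared = set(premise.lower().split()) & set(hypothesis.lower().split())
    return len(shared) >= threshold
```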
This analysis is based on a set of biases referenced in the literature and does not exclude the possibility of other biases being detected by the weak learner. For instance, during this investigation we notice that the presence of “negative sentiment” words (for instance: dull, boring) in the hypothesis often appears to be indicative of a Contradiction prediction. We leave further investigation of such behaviors to future work.
4.2 SYNTHETIC EXPERIMENT: CHEATING FEATURE
We consider a controlled synthetic experiment described in He et al. (2019); Clark et al. (2019) that simulates bias. We modify 20,000 MNLI training examples by injecting a cheating feature which encodes an example’s label with probability pcheat and a random label selected among the two incorrect labels otherwise. For simplicity, we consider the first 20,000 examples. On the evaluation sets, the cheating feature is random and does not convey any useful information. In the present experiment, the cheating feature takes the form of a prefix added to the hypothesis (“0” for Contradiction, “1” for Entailment, “2” for Neutral). We train the weak and main models on these 20,000 examples and evaluate their accuracy on the matched development set.2 We expect a biased model to rely mostly on the cheating feature thereby leading to poor evaluation performance.
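A minimal sketch of this injection is given below, assuming each example is a dict with a "hypothesis" string and a string-valued gold "label"; the field names are illustrative.

```python
import random

LABEL_TO_PREFIX = {"contradiction": "0", "entailment": "1", "neutral": "2"}

def inject_cheating_feature(example, p_cheat, rng=random):
    # With probability p_cheat the prefix encodes the gold label; otherwise it encodes
    # one of the two incorrect labels, so the feature is actively misleading.
    if rng.random() < p_cheat:
        prefix_label = example["label"]
    else:
        prefix_label = rng.choice([l for l in LABEL_TO_PREFIX if l != example["label"]])
    example["hypothesis"] = LABEL_TO_PREFIX[prefix_label] + " " + example["hypothesis"]
    return example
```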
Figure 2 shows the results. As the proportion of examples containing the bias increases, the evaluation accuracy of the weak learner quickly decreases, reaching 0% when pcheat = 0.9. The weak learner detects the cheating feature during training and mainly relies on the synthetic bias, which is not directly indicative of the gold label.
1We use the following list of negation words: no, not, none, nothing, never, aren’t, isn’t, weren’t, neither, don’t, didn’t, doesn’t, cannot, hasn’t, won’t.
2We observe similar trends on the mismatched development set.
Both He et al. (2019) and Clark et al. (2019) protect against the reliance on this cheating feature by ensembling the main model with a biased model that only uses the hypothesis (or its first token). We instead train the main model in the product of experts setting, relying on the weak learner to identify the bias. Figure 2 shows that when a majority of the training examples contain the bias (pcheat ≥ 0.6), the performance of the model trained with cross-entropy drops faster than the one trained in PoE. PoE training leads to a more robust model by encouraging it to learn from the mistakes of the weak learner. As pcheat comes close to 1, the model’s training enters a “few-shot regime” where there are very few incorrectly predicted biased examples to learn from (examples where following the biased heuristic leads to a wrong answer), and the performance of the model trained with PoE drops as well.
4.3 ADVERSARIAL DATASETS: NLI AND QA
NLI The HANS adversarial dataset (McCoy et al., 2019b) was constructed by writing templates to generate examples with a high premise/hypothesis word overlap to attack models that rely on this bias. In one template the word overlap generates entailed premise/hypothesis pairs (heuristic-entailed examples), whereas in another the examples contradict the heuristic (non-heuristic-entailed). The dataset contains 30K evaluation examples equally split between the two.
Table 2 shows that the weak learner exhibits medium performance on the in-distribution sets (MNLI) and that, on the out-of-distribution evaluation (HANS), it relies heavily on the word overlap heuristic. Product of experts training is effective at reducing the reliance on biases and leads to significant gains on the heuristic-non-entailed examples when compared to a model trained with standard cross-entropy, an improvement of +24%.
The small degradation on in-distribution data is likely because product of experts training does not specialize for in-distribution performance but focuses on the weak model errors (He et al., 2019). The linear combination of the original cross-entropy loss and the product of experts loss (PoE + CE) aims at counteracting this effect. This multi-loss objective trades off out-of-distribution generalization for in-distribution accuracy. A similar trade-off between accuracy and robustness has been reported in adversarial training (Zhang et al., 2019; Tsipras et al., 2019). In Appendix A.6, we detail the influence of this multi-loss objective.
We also evaluate our method on MNLI’s hard test set (Gururangan et al., 2018) which is expected to be less biased than MNLI’s standard split. These examples are selected such that a hypothesis-only model cannot predict the label accurately. Table 2 shows the results of this experiment. Our method surpasses the performance of a PoE model trained with a hypothesis-only biased learner. Results on the mismatched set are given in Appendix A.4.
QA Question answering models often rely on heuristics such as type and keyword-matching (Weissenborn et al., 2017) that can do well on benchmarks like SQuAD (Rajpurkar et al., 2016). We evaluate on the Adversarial SQuAD dataset (Jia & Liang, 2017) built by appending distractor sentences to the passages in the original SQuAD. Distractors are constructed such that they look like a plausible answer to the question while not changing the correct answer or misleading humans.
Results on SQuAD v1.1 and Adversarial SQuAD are listed in Table 3. The weak learner alone has low performance both on in-distribution and adversarial sets. PoE training improves the adversarial performance (+1% on AddSent) while sacrificing some in-distribution performance. A multi-loss
optimization closes the gap and even boosts adversarial robustness (+3% on AddSent and +2% on AddOneSent). In contrast to our experiments on MNLI/HANS, multi-loss training thus leads here to better performance on out-of-distribution as well. We hypothesize that in this dataset, the weak learner picks up more useful information and removing it entirely might be non-optimal. Multi-loss in this case allows us to strike a balance between learning from, or removing, the weak learner.
5 ANALYSIS
5.1 REDUCING BIAS: CORRELATION ANALYSIS
To investigate the behavior of the ensemble of the weak and main learner, we compute the Pearson correlation between the element-wise loss of the weak (biased) learner and the loss of the trained models following Mahabadi et al. (2020). A correlation of 1 indicates a linear relation between the per-example losses (the two learners make the same mistakes), and 0 indicates the absence of linear correlation (models’ mistakes are uncorrelated). Figure 3 shows that models trained with a linear combination of the PoE cross-entropy and the standard cross-entropy have a higher correlation than when trained solely with PoE. This confirms that PoE training is effective at reducing biases uncovered by the weak learner and re-emphasizes that adding standard cross-entropy leads to a trade-off between the two.
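A minimal sketch of this correlation analysis is shown below (the per-example losses are assumed to have been pre-computed with both models on the same evaluation examples):

```python
import numpy as np

def loss_correlation(weak_losses, main_losses):
    # Pearson correlation between per-example losses of the weak (biased) learner
    # and the trained main model; values near 1 mean the two make the same mistakes.
    weak_losses = np.asarray(weak_losses, dtype=float)
    main_losses = np.asarray(main_losses, dtype=float)
    return float(np.corrcoef(weak_losses, main_losses)[0, 1])
```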
5.2 HOW WEAK DO THE WEAK LEARNERS NEED TO BE?
We consider parameter size as a measure of the capacity or “weakness” of the weak learner. We fine-tune different sizes of BERT (Turc et al., 2019), ranging from 4.4 to 41.4 million parameters, and use these as weak models in a PoE setting. Figure 4b shows the accuracies on MNLI and HANS of the weak learners and of main models trained with the various weak learners.
Varying the capacity of the weak models affects both in-distribution and out-of-distribution performance. Out-of-distribution performance of the main model increases as the weak model becomes stronger (more parameters) up to a certain point, while in-distribution performance drops slightly at first and then more strongly. When trained jointly with the larger MediumBERT weak learner (41.4 million parameters), the main model reaches 97% accuracy on HANS’s heuristic-non-entailed set but very low accuracy on the in-distribution examples (28% on MNLI and 3% on the heuristic-entailed examples).
As a weak model grows in capacity, it becomes a better learner. The average loss decreases and the model becomes more confident in its predictions. As a result, the group
certain / correct becomes more populated and the main model receives on average a smaller gradient magnitude per input. On the contrary, the certain / incorrect group (which generally aligns with out-of-distribution samples and induces higher-magnitude gradient updates, encouraging generalization at the expense of in-distribution performance) becomes less populated. These results corroborate and complement insights from Yaghoobzadeh et al. (2019). This is also reminiscent of findings from Vodrahalli et al. (2018) and Shrivastava et al. (2016): not all training samples contribute equally towards learning and, in some cases, a carefully selected subset of the training set is sufficient to match (or surpass) the performance on the whole set.
5.3 DE-BIASING IS STILL EFFECTIVE WHEN DATASET BIASES ARE UNKNOWN OR HARD TO DETECT
While it is difficult to enumerate all sources of bias, we focus in this work on superficial cues that correlate with the label in the training set but do not transfer. These superficial cues correlate with what can be captured by a weak model. For instance, Conneau et al. (2018) suggest that word presence can be detected with very shallow networks (a linear classifier on top of FastText bag-of-words features), as these show very high accuracy on Word Content, the probing task of detecting which of 1,000 target words is present in a given sentence.
To verify that a weak model is still effective with unknown or hard-to-detect biases, we consider an example where the bias is only present in a small portion of the training set. We remove from the MNLI training set all the examples (192K) that exhibit one of the two biases detailed in Section 4.1: high word overlap between premise and hypothesis with an entailment label, and negation in the hypothesis with a contradiction label. We are left with 268K training examples.
We apply our de-biasing method with these examples as our training set. For comparison, we train a main model with standard cross-entropy on the same subset of selected examples. Our results are shown in Table 4 and confirm on HANS that our de-biasing method is still effective even when the bias is hard to detect. Note that the accuracies on MNLI cannot be directly compared to the results in Table 2: the class imbalance in the selected subset of examples leads to a harder optimization problem, explaining the difference in performance.
We present complementary analyses in the Appendix. To further show the effectiveness of our method, we include in Appendix A.2 an additional experiment on fact verification (Thorne et al., 2018; Schuster et al., 2019). In Appendix A.5, we evaluate the ability of our method to generalize to other domains that do not share the same annotation artifacts. We highlight the trade-off between in-distribution performance and out-of-distribution robustness by quantifying the influence of the multi-loss objective in Appendix A.6, and draw a connection between our 3 groups of examples and the recently introduced Data Maps (Swayamdipta et al., 2020).
6 CONCLUSION
We have presented an effective method for training models robust to dataset biases. Leveraging a weak learner with limited capacity and a modified product of experts training setup, we show that dataset biases do not need to be explicitly known or modeled in order to train models that generalize significantly better to out-of-distribution examples. We discuss the design choices for such a weak learner and investigate how using higher-capacity learners leads to higher out-of-distribution performance at a trade-off with in-distribution performance. We believe that such approaches, capable of automatically identifying and mitigating dataset biases, will be essential tools for future bias-discovery and mitigation techniques.
ACKNOWLEDGEMENTS
This research was supported by the ISRAEL SCIENCE FOUNDATION (grant No. 448/20).
A APPENDIX
A.1 EXPERIMENTAL SETUP AND FINE-TUNING HYPER-PARAMETERS
Our code is based on the Hugging Face Transformers library (Wolf et al., 2019). All of our experiments are conducted on single 16GB V100 using half-precision training for speed.
NLI We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 3e−5 and a batch size of 32. The learning rate is linearly increased for 2000 warmup steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8, and a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0. Because of the high variance on HANS (McCoy et al., 2019a), we average numbers over 6 runs with different seeds.
SQuAD We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 3e−5 and a batch size of 16. The learning rate is linearly increased for 1500 warmup steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8, and a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0.
Fact verification We fine-tuned a pre-trained TinyBERT (Turc et al., 2019) as our weak learner. We use the following hyper-parameters: 3 epochs of training with a learning rate of 2e−5 and a batch size of 32. The learning rate is linearly increased for 1000 warmup steps and linearly decreased to 0 afterward. We use an Adam optimizer with β = (0.9, 0.999), ε = 1e−8, and a weight decay of 0.1. Our robust model is BERT-base-uncased and uses the same hyper-parameters. When we train a robust model with a multi-loss objective, we give the standard CE a weight of 0.3 and the PoE cross-entropy a weight of 1.0. We average numbers over 6 runs with different seeds.
A.2 ADDITIONAL EXPERIMENT: FACT VERIFICATION
Following Mahabadi et al. (2020), we also experiment on a fact verification dataset. The FEVER dataset (Thorne et al., 2018) contains claim-evidence pairs generated from Wikipedia. Schuster et al. (2019) collected a new evaluation set for the FEVER dataset to avoid the biases observed in the claims of the benchmark. The authors symmetrically augment the claim-evidence pairs of the FEVER evaluation set to balance the detected artifacts, such that solely relying on statistical cues in claims would lead to a random guess. The collected dataset is challenging, and the performance of models relying on biases drops significantly when evaluated on it.
Our results are in Table 5. Our method is again effective at removing potential biases present in the training set and shows strong improvements on the symmetric test set.
A.3 SOME EXAMPLES OF DATASET BIASES DETECTED BY THE WEAK LEARNER
In Table 6, we show a few qualitative examples of dataset biases detected by a fine-tuned weak learner.
A.4 DETAILED RESULTS ON HANS AND MNLI MISMATCHED
We report the detailed results per heuristics on HANS in Table 7 and the results on the mismatched hard test set of MNLI in Table 8.
A.5 NLI: TRANSFER EXPERIMENTS
As highlighted in Section 4.3, our method is effective at improving robustness to adversarial settings that specifically target dataset biases. We further evaluate how well our method improves
generalization to domains that do not share the same annotation artifacts. Mahabadi et al. (2020) highlighted that product of experts is effective at improving generalization to other NLI benchmarks when trained on SNLI (Bowman et al., 2015). We follow the same setup: notably, we perform a sweep on the weight of the cross-entropy in our multi-loss objective and perform model selection on the development set of each dataset. We evaluate on SciTail (Khot et al., 2018), GLUE benchmark’s diagnostic test (Wang et al., 2018), AddOneRTE (AddOne) (Pavlick & Callison-Burch, 2016), Definite Pronoun Resolution (DPR) (Rahman & Ng, 2012), FrameNet+ (FN+) (Pavlick et al., 2015) and Semantic Proto-Roles (SPR) (Reisinger et al., 2015). We also evaluate on the hard SNLI test set (Gururangan et al., 2018), which is a set that a hypothesis-only model cannot solve easily.
Table 9 shows the results. Without explicitly modeling the bias in the dataset, our method matches or surpasses the generalization performance previously reported, with the exception of GLUE’s diagnostic dataset and SNLI’s hard test set. Moreover, we notice that the multi-loss objective sometimes leads to stronger performance, which suggests that in some cases it can be sub-optimal to completely remove the information picked up by the weak learner. We hypothesize that the multi-loss objective balances the emphasis on domain-specific features (favoring in-distribution performance) and their removal through de-biasing (benefiting domain transfer performance). This might explain why we do not observe improvements on SNLI’s hard test set and GLUE’s diagnostic set in the PoE setting.
A.6 INFLUENCE OF MULTI-LOSS OBJECTIVE
Our best performing setup features a linear combination of the PoE cross-entropy and the standard cross-entropy. We fix the weight of the PoE cross-entropy to 1.0 and modulate the linear coefficient α of the standard cross-entropy. Figure 5 shows the influence of this multi-loss objective. As the weight of the standard cross-entropy increases, the in-distribution performance increases while the out-of-distribution performance decreases. This effect is particularly noticeable on MNLI/HANS (see Figure 5a). Surprisingly, this trade-off is less pronounced on SQuAD/Adversarial SQuAD: the F1 development score increases from 85.43 for α = 0.1 to 88.14 for α = 1.9 while decreasing from 56.67 to 55.06 on AddSent.
Our multi-loss objective is similar to the annealing mechanism proposed in (Utama et al., 2020). In fact, as the annealing coefficient decreases, the modified probability distribution of the weak model converges to the uniform distribution. As seen in Section 3.2, when the distribution of the weak model is close to the uniform distribution (high-uncertainty), the gradient of the loss for PoE is equivalent to the gradient of the main model trained without PoE (i.e. the standard cross-entropy). In this work, we consider a straight-forward setup where we linearly combine the two losses throughout the training with fixed coefficients.
A.7 CONNECTION TO DATA MAPS (SWAYAMDIPTA ET AL., 2020)
We hypothesize that our three identified groups (certain / incorrect, certain / correct, and uncertain) overlap with the regions identified by data cartographies (Swayamdipta et al., 2020). The authors project each training example onto 2D coordinates: confidence and variability. The first is the mean of the gold label probabilities predicted for each example across training epochs; the second is the standard deviation. Confidence is closely related to the loss (intuitively, a high-confidence example is “easier” to predict). Variability is connected to the uncertainty (the probability of the true class for high-variability examples fluctuates during training, reflecting the model’s indecisiveness).
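A minimal sketch of these two coordinates is given below, assuming the probability assigned to the gold label has been logged at the end of every training epoch:

```python
import numpy as np

def data_map_coordinates(gold_probs_per_epoch):
    # gold_probs_per_epoch: array of shape (num_epochs, num_examples) holding the
    # probability assigned to the gold label of each example after every epoch.
    probs = np.asarray(gold_probs_per_epoch, dtype=float)
    confidence = probs.mean(axis=0)   # mean gold-label probability across epochs
    variability = probs.std(axis=0)   # its standard deviation across epochs
    return confidence, variability
```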
Our most interesting group (certain / incorrect) bears similarity with the ambiguous region (the model is indecisive about these instances frequently changing its prediction across training epochs) and the hard-to-learn region (which contains a significant proportion of mislabeled examples). The authors observe that the examples in these 2 regions play an important role in out-of-distribution generalization. Our findings go in the same direction as the weak model encourages the robust model to pay a closer look at these certain / incorrect examples during training.
To verify this claim, we follow their procedure and log the training dynamics of our weak learner trained on MNLI. Figure 6 (left) shows the Data Map obtained with our weak learner. For each of our 3 groups, we select the 10,000 examples that most emphasize the characteristic of the group. For instance, for uncertain, we take the 10,000 examples with the highest entropy. In Figure 6 (right) we plot these 30,000 examples onto the Data Map. We observe that our certain / correct examples are in the easy-to-learn region, our certain / incorrect examples are in the hard-to-learn region, and our uncertain examples are mostly in the ambiguous region.
Conversely, we verify that the examples in the ambiguous region are mostly in our uncertain group, the examples from the hard-to-learn are mostly in uncertain and certain / incorrect, and the examples from the easy-to-learn are mainly in certain / correct and uncertain. | 1. What is the focus of the paper regarding training robust models?
2. What are the reasons for rejecting the paper, particularly concerning prior works and lack of contributions?
3. How does the reviewer assess the novelty and significance of the proposed approach compared to previous research?
4. Are there any concerns regarding the similarity between the proposed method and existing works?
5. How does the reviewer evaluate the clarity and quality of the paper's content? | Review | Review
Paper summary: The authors argue that they have proposed a method to train models robust to biases without having prior knowledge of the biases. They also argue that they provide an analysis of how weak learner capacity impacts in-domain/out-of-domain performance.
Reasons to reject:
The authors argue they have shown that a model with limited capacity captures biases. However, this has already been shown in [1] in 2019 and therefore is not a contribution of the authors.
The main method proposed in this paper is exactly the same method proposed in [2]. Please note that [2] was already available in early July 2020, and on top of this existing work, the paper does not provide other contributions.
Regarding the third claimed contribution, on showing how the performance of the debiasing method changes based on the capacity of weak learners: in [1], the authors already included a discussion of the choice of weak learners and their impact. Though the method in [1] is different, the discussion in that paper would still apply here as well. Please refer to Tables 1-3 and Figure 1 in [1].
Given the points above, and since the main method in the paper is proposed in [2], the paper does not provide enough contributions to be suitable for the ICLR venue.
[1] Robust Natural Language Inference Models with Example Forgetting, Yaghoobzadeh et al., 2019, https://arxiv.org/pdf/1911.03861.pdf
[2] Towards Debiasing NLU Models from Unknown Biases, Utama et al., EMNLP 2020 (available 13 July 2020), https://openreview.net/forum?id=UHpxm2K-jHE |
ICLR | Title
SGD Through the Lens of Kolmogorov Complexity
Abstract
We initiate a thorough study of the dynamics of stochastic gradient descent (SGD) under minimal assumptions using the tools of entropy compression. Specifically, we characterize a quantity of interest which we refer to as the accuracy discrepancy. Roughly speaking, this measures the average discrepancy between the model accuracy on batches and large subsets of the entire dataset. We show that if this quantity is sufficiently large, then SGD finds a model which achieves perfect accuracy on the data in O(1) epochs. On the contrary, if the model cannot perfectly fit the data, this quantity must remain below a global threshold, which only depends on the size of the dataset and batch. We use the above framework to lower bound the amount of randomness required to allow (non-stochastic) gradient descent to escape from local minima using perturbations. We show that even if the model is extremely overparameterized, at least a linear (in the size of the dataset) number of random bits are required to guarantee that GD escapes local minima in subexponential time.
1 INTRODUCTION
Stochastic gradient descent (SGD) is at the heart of modern machine learning. However, we are still lacking a theoretical framework that explains its performance for general, non-convex functions. Current results make significant assumptions regarding the model. Global convergence guarantees only hold under specific architectures, activation units, and when models are extremely overparameterized (Du et al., 2019; Allen-Zhu et al., 2019; Zou et al., 2018; Zou and Gu, 2019). In this paper, we take a step back and explore what can be said about SGD under the most minimal assumptions. We only assume that the loss function is differentiable and L-smooth, the learning rate is sufficiently small and that models are initialized randomly. Clearly, we cannot prove general convergence to a global minimum under these assumptions. However, we can try and understand the dynamics of SGD - what types of execution patterns can and cannot happen.
Motivating example: Suppose hypothetically, that for every batch, the accuracy of the model after the Gradient Descent (GD) step on the batch is 100%. However, its accuracy on the set of previously seen batches (including the current batch) remains at 80%. Can this process go on forever? At first glance, this might seem like a possible scenario. However, we show that this cannot be the case. That is, if the above scenario repeats sufficiently often the model must eventually achieve 100% accuracy on the entire dataset.
To show the above, we identify a quantity of interest which we call the accuracy discrepancy (formally defined in Section 3). Roughly speaking, this is how much the model accuracy on a batch differs from the model accuracy on all previous batches in the epoch. We show that when this quantity (averaged over epochs) is higher than a certain threshold, we can guarantee that SGD converges to 100% accuracy on the dataset within O(1) epochs w.h.p.1 We note that this threshold is global, that is, it only depends on the size of the dataset and the size of the batch. In doing so, we provide a sufficient condition for SGD convergence.
The above result is especially interesting when applied to weak models that cannot achieve perfect accuracy on the data. Imagine a dataset of size n with random labels, a model with n0.99 parameters, and a batch of size log n. The above implies that the accuracy discrepancy must eventually go below
1With high probability means a probability of at least 1− 1/n, where n is the size of the dataset.
the global threshold. In other words, the model cannot consistently make significant progress on batches. This is surprising because even though the model is underparameterized with respect to the entire dataset, it is extremely overparameterized with respect to the batch. We verify this observation experimentally (Appendix B). This holds for a single GD step, but what if we were to allow many GD steps per batch, would this mean that we still cannot make significant progress on the batch? This leads us to consider the role of randomness in (non-stochastic) gradient descent.
It is well known that overparameterized models trained using SGD can perfectly fit datasets with random labels (Zhang et al., 2017). It is also known that when models are sufficiently overparameterized (and wide) GD with random initialization convergences to a near global minimum (Du et al., 2019). This leads to an interesting question: how much randomness does GD require to escape local minima efficiently (in polynomial time)? It is obvious that without randomness we could initialize GD next to a local minimum, and it will never escape it. However, what about the case where we are provided an adversarial input and we can perturb that input (for example, by adding a random vector to it), how many bits of randomness are required to guarantee that after the perturbation GD achieves good accuracy on the input in polynomial time?
In Section 4 we show that if the amount of randomness is sublinear in the size of the dataset, then for any differentiable and L-smooth model class (e.g., a neural network architecture), there are datasets that require an exponential running time to achieve any non-trivial accuracy (i.e., better than 1/2 + o(1) for a two-class classification task), even if the model is extremely overparameterized. This result highlights the importance of randomness for the convergence of gradient methods. Specifically, it provides an indication of why SGD converges in certain situations and GD does not. We hope this result opens the door to the design of randomness in other versions of GD.
Outline of our techniques We consider batch SGD, where the dataset is shuffled once at the beginning of each epoch and then divided into batches. We do not deal with the generalization abilities of the model. Thus, the dataset is always the training set. In each epoch, the algorithm goes over the batches one by one, and performs gradient descent to update the model. This is the "vanilla" version of SGD, without any acceleration or regularization (for a formal definition, see Section 2). For the sake of analysis, we add a termination condition after every GD step: if the accuracy on the entire dataset is 100% we terminate. Thus, in our case, termination implies 100% accuracy.
To achieve our results, we make use of entropy compression, first considered by Moser and Tardos (2010) to prove a constructive version of the Lovász local lemma. Roughly speaking, the entropy compression argument allows one to bound the running time of a randomized algorithm2 by leveraging the fact that a random string of bits (the randomness used by the algorithm) is computationally incompressible (has high Kolmogorov complexity) w.h.p. If one can show that throughout the execution of the algorithm, it (implicitly) compresses the randomness it uses, then one can bound the number of iterations the algorithm may execute without terminating. To show that the algorithm has such a property, one would usually consider the algorithm after executing t iterations, and would try to show that just by looking at an "execution log" of the algorithm and some set of "hints", whose size together is considerably smaller than the number of random bits used by the algorithm, it is possible to reconstruct all of the random bits used by the algorithm.
We apply this approach to SGD with an added termination condition when the accuracy over the entire dataset is 100%. Thus, termination in our case guarantees perfect accuracy. The randomness we compress is the bits required to represent the random permutation of the data at every epoch. So indeed the longer SGD executes, the more random bits are generated. We show that under our assumptions it is possible to reconstruct these bits efficiently starting from the dataset X and the model after executing t epochs. The first step in allowing us to reconstruct the random bits of the permutation in each epoch is to show that under the L-smoothness assumption and a sufficiently small step size, SGD is reversible. That is, if we are given a model Wi+1 and a batch Bi such that Wi+1 results from taking a gradient step with model Wi where the loss is calculated with respect to Bi, then we can uniquely retrieve Wi using only Bi and Wi+1. This means that if we can efficiently encode the batches used in every epoch (i.e., using less bits than encoding the entire permutation of the data), we can also retrieve all intermediate models in that epoch (at no additional cost). We prove this claim in Section 2.
2We require that the number of the random bits used is proportional to the execution time of the algorithm. That is, the algorithm flips coins for every iteration of a loop, rather than just a constant number at the beginning of the execution.
The crux of this paper is to show that when the accuracy discrepancy is high for a certain epoch, the batches can indeed be compressed. To exemplify our techniques let us consider the scenario where, in every epoch, just after a single GD step on a batch we consistently achieve perfect accuracy on the batch. Let us consider some epoch of our execution, assume we have access to X , and let Wf be the model at the end of the epoch. If the algorithm did not terminate, then Wf has accuracy at most 1− on the entire dataset (assume for simplicity that is a constant). Our goal is to retrieve the last batch of the epoch, Bf ⊂ X (without knowing the permutation of the data for the epoch). A naive approach would be to simply encode the indices in X of the elements in the batch. However, we can use Wf to achieve a more efficient encoding. Specifically, we know that Wf achieves 1.0 accuracy on Bf but only 1− accuracy on X . Thus it is sufficient to encode the elements of Bf using a smaller subset of X (the elements classified correctly by Wf , which has size at most (1− ) |X|). This allows us to significantly compress Bf . Next, we can use Bf and Wf together with the reversibility of SGD to retrieve Wf−1. We can now repeat the above argument to compress Bf−1 and so on, until we are able to reconstruct all of the random bits used to generate the permutation of X in the epoch. This will result in a linear reduction in the number of bits required for the encoding.
In our analysis, we show a generalized version of the scenario above. We show that high accuracy discrepancy implies that entropy compression occurs. For our second result, we consider a modified SGD algorithm that instead of performing a single GD step per batch, first perturbs the batch with a limited amount of randomness and then performs GD until a desired accuracy on the batch is reached. We assume towards contradiction that GD can always reach the desired accuracy on the batch in subexponential time. This forces the accuracy discrepancy to be high, which guarantees that we always find a model with good accuracy. Applying this reasoning to models of sublinear size and data with random labels we arrive at a contradiction, as such models cannot achieve good accuracy on the data. This implies that when we limit the amount of randomness GD can use for perturbations, there must exist instances where GD requires exponential running time to achieve good accuracy.
Related work There has been a long line of research proving convergence bounds for SGD under various simplifying assumptions such as: linear networks (Arora et al., 2019; 2018), shallow networks (Safran and Shamir, 2018; Du and Lee, 2018; Oymak and Soltanolkotabi, 2019), etc. However, the most general results are the ones dealing with deep, overparameterized networks (Du et al., 2019; Allen-Zhu et al., 2019; Zou et al., 2018; Zou and Gu, 2019). All of these works make use of NTK (Neural Tangent Kernel)(Jacot et al., 2018) and show global convergence guarantees for SGD when the hidden layers have width at least poly(n,L) where n is the size of the dataset and L is the depth of the network. We note that the exponents of the polynomials are quite large.
A recent line of work by Zhang et al. (2022) notes that in many real world scenarios models do not converge to stationary points. They instead take a different approach which, similar to us, studies the dynamics of neural networks. They show that under certain assumptions (e.g., considering a fully connected architecture with sub-differentiable and coordinate-wise Lipschitz activations and weights laying on a compact set) the change in training loss gradually converges to 0, even if the full gradient norms do not vanish.
In (Du et al., 2017) it was shown that GD can take exponential time to escape saddle points, even under random initialization. They provide a highly engineered instance, while our results hold for many model classes of interest. Jin et al. (2017) show that adding perturbations during the executions of GD guarantees that it escapes saddle points. This is done by occasionally perturbing the parameters within a ball of radius r, where r depends on the properties of the function to be optimized. Therefore, a single perturbation must require an amount of randomness linear in the number of parameters.
2 PRELIMINARIES
We consider the following optimization problem. We are given an input (dataset) of size n. Let us denote X = {xi}ni=1 (Our inputs contain both data and labels, we do not need to distinguish them for this work). We also associate every x ∈ X with a unique id of dlog ne bits. We often consider batches of the input B ⊂ X . The size of the batch is denoted by b (all batches have the same size). We have some model whose parameters are denoted by W ∈ Rd, where d is the model dimension. We aim to optimize a goal function of the following type: f(W ) = 1n ∑ x∈X fx(W ), where the functions fx : Rd → R are completely determined by x ∈ X . We also define for every set A ⊆ X: fA(W ) = 1 |A| ∑ x∈A fx(W ). Note that fX = f .
We denote by acc(W,A) : Rd × 2X → [0, 1] the accuracy of model W on the set A ⊆ X (where we use W to classify elements from X). Note that for x ∈ X it holds that acc(W,x) is a binary value indicating whether x is classified correctly or not. We require that every fx is differentiable and L-smooth: ∀W1,W2 ∈ Rd, ‖∇fx(W1)−∇fx(W2)‖ ≤ L‖W1 −W2‖. This implies that every fA is also differentiable and L-smooth. To see this consider the following:
\[ \|\nabla f_A(W_1)-\nabla f_A(W_2)\| = \Big\|\tfrac{1}{|A|}\sum_{x\in A}\nabla f_x(W_1)-\tfrac{1}{|A|}\sum_{x\in A}\nabla f_x(W_2)\Big\| = \tfrac{1}{|A|}\Big\|\sum_{x\in A}\big(\nabla f_x(W_1)-\nabla f_x(W_2)\big)\Big\| \le \tfrac{1}{|A|}\sum_{x\in A}\|\nabla f_x(W_1)-\nabla f_x(W_2)\| \le L\|W_1-W_2\| \]
We state another useful property of fA:
Lemma 2.1. Let W1,W2 ∈ Rd and α < 1/L. For any A ⊆ X , if it holds that W1 − α∇fA(W1) = W2 − α∇fA(W2) then W1 = W2.
Proof. Rearranging the terms we get thatW1−W2 = α∇fA(W1)−α∇fA(W2). Now let us consider the norm of both sides: ‖W1−W2‖ = ‖α∇fA(W1)−α∇fA(W2)‖ ≤ α·L‖W1−W2‖ < ‖W1−W2‖ Unless W1 = W2, the final strict inequality holds which leads to a contradiction.
The above means that for a sufficiently small gradient step, the gradient descent process is reversible. That is, we can always recover the previous model parameters given the current ones, assuming that the batch is fixed. We use the notion of reversibility throughout this paper. However, in practice we only have finite precision, thus instead of R we work with the finite set F ⊂ R. Furthermore, due to numerical stability issues, we do not have access to exact gradients, but only to approximate values ∇̂fA. For the rest of this paper, we assume these values are L-smooth on all elements in Fd. That is,
∀W1,W2 ∈ Fd, A ⊆ X, ‖∇̂fA(W1)− ∇̂fA(W2)‖ ≤ L‖W1 −W2‖
This immediately implies that Lemma 2.1 holds even when precision is limited. Let us state the following theorem:
Theorem 2.2. Let W1,W2, ...,Wk ∈ Fd ⊂ Rd, A1, A2, ..., Ak ⊆ X and α < 1/L. If it holds that Wi = Wi−1 − α∇̂fAi−1(Wi−1), then given A1, A2, ..., Ak−1 and Wk we can retrieve W1.
Proof. Given Wk we iterate over all W ∈ Fd until we find W such that Wk = W − α∇̂fAi−1(W ). Using Lemma 2.1, there is only a single element such that this equality holds, and thus W = Wk−1. We repeat this process until we retrieve W1.
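The proof searches over the finite set F^d; as a purely numerical illustration of the same reversibility property (not part of the paper's argument), one can also recover the previous iterate by fixed-point iteration, since for α < 1/L the map W ↦ W_next + α∇f_A(W) is a contraction. The sketch below assumes access to a gradient oracle grad_fn.

```python
import numpy as np

def invert_gd_step(w_next, grad_fn, alpha, iters=200):
    # Recover the unique W satisfying w_next = W - alpha * grad_fn(W).
    # Since alpha * L < 1, the update below is a contraction and converges to W.
    w = np.array(w_next, dtype=float)
    for _ in range(iters):
        w = w_next + alpha * grad_fn(w)
    return w
```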
SGD We analyze the classic SGD algorithm presented in Algorithm 1. One difference to note in our algorithm, compared to the standard implementation, is the termination condition when the accuracy on the dataset is 100%. In practice the termination condition is not used, however, we only use it to prove that at some point in time the accuracy of the model is 100%.
Algorithm 1: SGD
1: i ← 1 // epoch counter
2: W_{1,1} is an initial model
3: while True do
4:   Take a random permutation of X, divided into batches {B_{i,j}}_{j=1}^{n/b}
5:   for j from 1 to n/b do
6:     if acc(W_{i,j}, X) = 1 then return W_{i,j}
7:     W_{i,j+1} ← W_{i,j} − α∇f_{B_{i,j}}(W_{i,j})
8: i ← i + 1, W_{i,1} ← W_{i−1,n/b+1}
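A short Python sketch of Algorithm 1 is given below; grad_fn and acc_fn are assumed helpers computing the batch gradient and the dataset accuracy, and are not part of the paper.

```python
import numpy as np

def sgd_with_termination(X, w0, grad_fn, acc_fn, alpha, batch_size, rng):
    # One gradient step per batch; terminate as soon as the model fits all of X.
    w = np.array(w0, dtype=float)
    while True:
        perm = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            batch = [X[k] for k in perm[start:start + batch_size]]
            if acc_fn(w, X) == 1.0:
                return w
            w = w - alpha * grad_fn(w, batch)
```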
Kolmogorov complexity The Kolmogorov complexity of a string x ∈ {0, 1}∗, denoted by K(x), is defined as the size of the smallest prefix Turing machine which outputs this string. We note that this definition depends on which encoding of Turing machines we use. However, one can show that this will only change the Kolmogorov complexity by a constant factor (Li and Vitányi, 2019).
We also use the notion of conditional Kolmogorov complexity, denoted by K(x | y). This is the length of the shortest prefix Turing machine which gets y as an auxiliary input and prints x. Note that the length of y does not count towards the size of the machine which outputs x. So it can be the case that |x| |y| but it holds that K(x | y) < K(x). We can also consider the Kolmogorov complexity of functions. Let g : {0, 1}∗ → {0, 1}∗ then K(g) is the size of the smallest Turing machine which computes the function g.
The following properties of Kolmogorov complexity will be of use. Let x, y, z be three strings:
• Extra information: K(x | y, z) ≤ K(x | z) +O(1) ≤ K(x, y | z) +O(1) • Subadditivity: K(xy | z) ≤ K(x | z, y)+K(y | z)+O(1) ≤ K(x | z)+K(y | z)+O(1)
Random strings have the following useful property (Li and Vitányi, 2019): Theorem 2.3. For an n bit string x chosen uniformly at random, and some string y independent of x (i.e., y is fixed before x is chosen) and any c ∈ N it holds that Pr[K(x | y) ≥ n− c] ≥ 1− 1/2c.
Entropy and KL-divergence Our proofs make extensive use of binary entropy and KL-divergence. In what follows we define these concepts and provide some useful properties.
Entropy: For p ∈ [0, 1] we denote by h(p) = −p log p− (1− p) log(1− p) the entropy of p. Note that h(0) = h(1) = 0.
KL-divergence: For p, q ∈ (0, 1) let DKL(p ‖ q) = p log pq + (1 − p) log 1−p 1−q be the Kullback Leibler divergence (KL-divergence) between two Bernoulli distributions with parameters p, q. We also extend the above for the case where q, p ∈ {0, 1} as follows: DKL(1 ‖ q) = DKL(0 ‖ q) = 0, DKL(p ‖ 1) = log(1/p), DKL(p ‖ 0) = log(1/(1 − p)). This is just notation that agrees with Lemma 2.4. We also state the following result of Pinsker’s inequality applied to Bernoulli random variables: DKL(p ‖ q) ≥ 2(p− q)2. Representing sets Let us state some useful bounds on the Kolmogorov complexity of sets. A more detailed explanation regarding the Kolmogorov complexity of sets and permutations together with the proof to the lemma below appears in Appendix A. Lemma 2.4. Let A ⊆ B, |B| = m, |A| = γm, and let g : B → {0, 1}. For any set Y ⊆ B let Y1 = {x | x ∈ Y, g(x) = 1} , Y0 = Y \ Y1 and κY = |Y1||Y | . It holds that
K(A | B, g) ≤ mγ(log(e/γ)−DKL(κB ‖ κA)) +O(logm)
3 ACCURACY DISCREPANCY
First, let us define some useful notation (Wi,j , Bi,j are formally defined in Algorithm 1):
• λi,j = acc(Wi,j , X). This is the accuracy of the model in epoch i on the entire dataset X , before performing the GD step on batch j.
• ϕi,j = acc(Wi,j , Bi,j−1). This is the accuracy of the model on the (j − 1)-th batch in the i-th epoch after performing the GD step on the batch.
• X_{i,j} = ∪_{k=1}^{j} B_{i,k} (note that ∀i, X_{i,0} = ∅ and X_{i,n/b} = X). This is the set of elements in the first j batches of epoch i. Let us also denote n_j = |X_{i,j}| = jb (note that ∀j, i_1, i_2, |X_{i_1,j}| = |X_{i_2,j}|, thus i need not appear in the subscript).
• λ′_{i,j} = acc(W_{i,j}, X_{i,j−1}) and λ′′_{i,j} = acc(W_{i,j}, X \ X_{i,j−1}), where λ′_{i,j} is the accuracy of the model on the set of all previously seen batch elements, after performing the GD step on the (j−1)-th batch, and λ′′_{i,j} is the accuracy of the same model on all remaining elements (j-th batch onward). To avoid computing the accuracy on empty sets, λ′_{i,j} is defined for j ∈ [2, n/b+1] and λ′′_{i,j} is defined for j ∈ [1, n/b].
• ρ_{i,j} = D_{KL}(λ′_{i,j} ‖ ϕ_{i,j}) is the accuracy discrepancy for the j-th batch in iteration i, and ρ_i = Σ_{j=2}^{n/b+1} ρ_{i,j} is the accuracy discrepancy at iteration i (a small sketch of this computation follows the list).
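A minimal numerical sketch of the per-epoch accuracy discrepancy ρ_i is given below, assuming the accuracies λ′_{i,j} and ϕ_{i,j} have been logged for j = 2, …, n/b + 1 (the clipping approximates the extended D_KL notation above at the 0/1 edge cases):

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    # D_KL between Bernoulli(p) and Bernoulli(q), clipped to avoid log(0).
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    q = np.clip(np.asarray(q, dtype=float), eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def epoch_accuracy_discrepancy(prev_seen_acc, batch_acc):
    # prev_seen_acc[j] = lambda'_{i,j}: accuracy of W_{i,j} on all previously seen elements.
    # batch_acc[j]     = phi_{i,j}: accuracy of W_{i,j} on the batch it was just trained on.
    return float(np.sum(kl_bernoulli(prev_seen_acc, batch_acc)))
```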
In our analysis, we consider t epochs of the SGD algorithm. Our goal for this section is to derive a connection between ∑t i=1 ρi and t. Bounding t: Our goal is to use the entropy compression argument to show that if ∑t i=1 ρi is sufficiently large we can bound t. Let us start by formally defining the random bits which the algorithm uses. Let ri be the string of random bits representing the random permutation of X at epoch i. As we consider t epochs, let r = r1r2 . . . rt.
Note that the number of bits required to represent an arbitrary permutation of [n] is given by: dlog(n!)e = n log n− n log e+O(log n) = n log(n/e) +O(log n),
where in the above we used Stirling’s approximation. Thus, it holds that |r| = t(n log(n/e) + O(log n)) and according to Theorem 2.3, with probability at least 1 − 1/n2 it holds that K(r) ≥ tn log(n/e)−O(log n). In the following lemma we show how to use the model at every iteration to efficiently reconstruct the batch at that iteration, where the efficiency of reconstruction is expressed via ρi. Lemma 3.1. It holds w.h.p that ∀i ∈ [t] that: K(ri |Wi+1,1, X) ≤ n log ne − bρi + n b ·O(log n)
Proof. Recall that Bi,j is the j-th batch in the i-th epoch, and let Pi,j be a permutation of Bi,j such that the order of the elements in Bi,j under Pi,j is the same as under ri. Note that given X , if we know the partition into batches and all permutations, we can reconstruct ri. According to Theorem 2.2, given Wi,j and Bi,j−1 we can compute Wi,j−1. Let us denote by Y the encoding of this procedure. To implement Y we need to iterate over all possible vectors in Fd and over batch elements to compute the gradients. To express this program we require auxiliary variables of size at most O(log min {d, b}) = O(log n). Thus it holds that K(Y ) = O(log n). Let us abbreviate Bi,1, Bi,2, ..., Bi,j as (Bi,k) j k=1. We write the following. K(ri | X,Wi+1,1) ≤ K(ri, Y | X,Wi+1,1) +O(1) ≤ K(ri | X,Wi+1,1, Y ) +K(Y | X,Wi+1,1) +O(1)
≤ O(log n) +K((Bi,k, Pi,k)n/bk=1 | X,Wi+1,1, Y )
≤ O(log n) +K((Bi,k)n/bk=1 | X,Wi+1,1, Y ) +K((Pi,k) n/b k=1 | X,Wi+1,1, Y )
≤ O(log n) +K((Bi,k)n/bk=1 | X,Wi+1,1, Y ) + n/b∑ j=1 K(Pi,j)
Let us bound K((Bi,k) n/b k=1 | X,Wi+1,1, Y ) by repeatedly using the subadditivity and extra information properties of Kolmogorov complexity.
K((Bi,k) n/b k=1 | X,Y,Wi+1,1) ≤ K(Bi,n/b | X,Wi+1,1) +K((Bi,k) n/b−1 k=1 | X,Y,Wi+1,1, Bi,n/b) +O(1)
≤ K(Bi,n/b | X,Wi+1,1) +K((Bi,k) n/b−1 k=1 | X,Y,Wi,n/b, Bi,n/b) +O(1) ≤ K(Bi,n/b | X,Wi+1,1) +K(Bi,n/b−1 | X,Wi,n/b, Bi,n/b)
+K((Bi,k) n/b−2 k=1 | X,Y,Wi,n/b−1, Bi,n/b, Bi,n/b−1) +O(1)
≤ ... ≤ O(n b ) + n/b∑ j=1 K(Bi,j | X,Wi,j+1, (Bi,k)n/bk=j+1) ≤ O( n b ) + n/b∑ j=1 K(Bi,j | Xi,j ,Wi,j+1)
where in the transitions we used the fact that given Wi,j , Bi,j−1 and Y we can retrieve Wi,j−1. That is, we can always bound K(... | Y,Wi,j , Bi,j−1, ...) by K(... | Y,Wi,j−1, Bi,j−1, ...) +O(1). To encode the order Pi,j inside each batch, b log(b/e) +O(log b) bits are sufficient. Finally we get that: K(ri | X,Wi+1,1) ≤ O(nb ) + ∑n/b j=1[K(Bi,j | Xi,j ,Wi,j+1) + b log(b/e) +O(log b)].
Let us now boundK(Bi,j−1 | Xi,j−1,Wi,j). KnowingXi,j−1 we know thatBi,j−1 ⊆ Xi,j−1. Thus we need to use Wi,j to compress Bi,j−1. Applying Lemma 2.4 with parameters A = Bi,j−1, B = Xi,j−1, γ = b/nj−1, κA = ϕi,j , κB = λ ′ i,j and g(x) = acc(Wi,j , x). We get the following:
\[ K(B_{i,j-1} \mid X_{i,j-1}, W_{i,j}) \le b\Big(\log\frac{e \cdot n_{j-1}}{b} - \rho_{i,j}\Big) + O(\log n_{j-1}) \]
Adding b log(b/e) + O(log b) to the above, we get the following bound on every element in the sum:
\[ b\Big(\log\frac{e \cdot n_{j-1}}{b} - \rho_{i,j}\Big) + b\log(b/e) + O(\log b) + O(\log n_{j-1}) \le b\log n_{j-1} - b\rho_{i,j} + O(\log n_{j-1}) \]
Note that the most important term in the sum is −bρ_{i,j}. That is, the more the accuracy of W_{i,j} on the batch B_{i,j−1} differs from the accuracy of W_{i,j} on the set of elements containing the batch, X_{i,j−1}, the more efficiently we can represent the batch. Let us now bound the sum \(\sum_{j=2}^{n/b+1} [b\log n_{j-1} - b\rho_{i,j} + O(\log n_{j-1})]\). Let us first bound the sum over b log n_{j−1}:
\[ \sum_{j=2}^{n/b+1} b\log n_{j-1} = \sum_{j=1}^{n/b} b\log jb = \sum_{j=1}^{n/b} b(\log b + \log j) = n\log b + b\log\big((n/b)!\big) = n\log b + n\log\frac{n}{b\cdot e} + O(\log n) = n\log\frac{n}{e} + O(\log n) \]
Finally, we can write that:
\[ K(r_i \mid X, W_{i+1,1}) \le O\Big(\frac{n}{b}\Big) + \sum_{j=2}^{n/b+1} \big[b\log n_{j-1} - b\rho_{i,j} + O(\log n)\big] \le n\log\frac{n}{e} - b\rho_i + \frac{n}{b}\cdot O(\log n) \]
Using the above we know that when the value ρ_i is sufficiently high, the random permutation of the epoch can be compressed. We use the fact that random strings are incompressible to bound \(\frac{1}{t}\sum_{i=1}^{t}\rho_i\). Theorem 3.2. If the algorithm does not terminate by the t-th iteration, then it holds w.h.p. that \(\forall t,\ \frac{1}{t}\sum_{i=1}^{t}\rho_i \le O\big(\frac{n\log n}{b^2}\big)\).
Proof. Using arguments similar to Lemma 3.1, we can show that K(r,W1,1 | X) ≤ K(Wt+1,1) + O(t)+ ∑t k=1K(rk | X,Wk+1,1) (formally proved in Lemma A.3). Combining this with Lemma 3.1, we get that K(r,W1,1 | X) ≤ K(Wt+1,1) + t[n(log(n/e) + n·O(logn)b − bρi +O(log n)]. Our proof implies that we can reconstruct not only r, but also W1,1 using X,Wt+1,1. Due to the incompressibility of random strings, we get that w.h.pK(r,W1,1 | X) ≥ d+tn log(n/e)−O(log n). Combining the lower and upper bound for K(r,W1,1 | X) we can get the following inequality:
\[ d + tn\log(n/e) - O(\log n) \;\le\; d + t\Big[n\log(n/e) + \frac{n\cdot O(\log n)}{b} + O(\log n)\Big] - \sum_{i=1}^{t} b\rho_i \tag{1} \]
\[ \Longrightarrow\quad \frac{1}{t}\sum_{i=1}^{t}\rho_i \;\le\; \underbrace{\frac{n\cdot O(\log n)}{b^2} + \frac{O(\log n)}{b}}_{\beta(n,b)} + \frac{O(\log n)}{bt} \;=\; O\Big(\frac{n\log n}{b^2}\Big) \]
Let β(n, b) be the exact value of the asymptotic expression in Inequality 1. Theorem 3.2 says that as long as SGD does not terminate, the average accuracy discrepancy cannot be too high. Using the contrapositive we get the following useful corollary (proof is deferred to Appendix A.3).
Corollary 3.3. If \(\forall k,\ \frac{1}{k}\sum_{i=1}^{k}\rho_i > \beta(n,b) + \gamma\) for \(\gamma = \Omega(b^{-1}\log n)\), then w.h.p. SGD terminates within O(1) epochs.
The case for weak models Using the above we can also derive some interesting negative results when the model is not expressive enough to get perfect accuracy on the data. It must be the case that the average accuracy discrepancy tends below β(n, b) over time. We verify this experimentally on the MNIST dataset (Appendix B), showing that the average accuracy indeed drops over time when the model is weak compared to the dataset. We also confirm that the dependence of the threshold in b is indeed inversely quadratic.
4 THE ROLE OF RANDOMNESS IN GD INITIALIZATION
Our goal for this section is to show that when the amount of randomness in the perturbation is too small, for any model architecture which is differentiable and L-smooth there are inputs for which Algorithm 2 requires exponential time to terminate, even for extremely overparameterized models.
Perturbation families Let us consider a family of 2^ℓ functions indexed by length-ℓ real-valued vectors Ψ_ℓ = {ψ_z}_{z∈R^ℓ}. Recall that throughout this paper we assume finite precision, thus every z can be represented using O(ℓ) bits. We say that Ψ_ℓ is a reversible perturbation family if it holds that ∀z ∈ R^ℓ, ψ_z is one-to-one. We often use the notation Ψ_ℓ(W), which means pick z ∈ R^ℓ uniformly at random and apply ψ_z(W). We often refer to Ψ_ℓ simply as a perturbation.
We note that the above captures a wide range of natural perturbations. For example, ψ_z(W) = W + W_z where W_z[i] = z[i mod ℓ]. Clearly ψ_z(W) is reversible.
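A one-line sketch of this particular perturbation is given below (purely illustrative):

```python
import numpy as np

def tiled_perturbation(w, z):
    # psi_z(W) = W + W_z with W_z[i] = z[i mod l]: the l random entries of z are
    # tiled across all d parameters, so only O(l) bits of randomness are used.
    d = len(w)
    return np.asarray(w, dtype=float) + np.asarray(z, dtype=float)[np.arange(d) % len(z)]
```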
Gradient descent The GD algorithm we analyze is formally given in Algorithm 2.
Algorithm 2: GD(W, Y, δ)
Input: initial model W, dataset Y, desired accuracy δ
1: i = 1, T = o(2^m) + poly(d)
2: W = Ψ_ℓ(W)
3: while acc(W, Y) < δ and i < T do
4:   W ← W − α∇f_Y(W)
5:   i ← i + 1
6: return W
Let us denote by m the number of elements in Y. We make the following 2 assumptions for the rest of this section: (1) ℓ = o(m). (2) There exists T = o(2^m) + poly(d) and a perturbation family Ψ_ℓ such that for every input W, Y, within T iterations GD terminates and returns a solution that has at least δ accuracy on Y with constant probability. We show that the above two assumptions cannot hold together. That is, if the amount of randomness is sublinear in m, there must be instances with exponential running time, even when d ≫ m. To show the above, we define a variant of SGD which uses GD as a subprocedure (Algorithm 3). Assume that our dataset is a binary classification task (it is easy to generalize our results to any number of classes), and that elements in X are assigned random labels. Furthermore, let us assume that d = o(n), e.g., d = n^{0.99}. It holds that w.h.p. we cannot train a model with d parameters that achieves any accuracy better than 1/2 + o(1) on X (Lemma A.4). Let us take ε to be a small constant. We show that if assumptions 1 and 2 hold, then Algorithm 3 must terminate and return a model with 1/2 + Θ(1) accuracy on X, leading to a contradiction. Our analysis follows the same line as the previous section, and uses the same notation.
Reversibility First, we must show that Algorithm 3 is still reversible. Note that we can take the same approach as before, where the only difference is that in order to get $W_{i,j}$ from $W_{i,j+1}$ we must now get all the intermediate values from the call to GD. As the GD steps are applied to the same batch, this amounts to applying Lemma 2.1 several times instead of once per iteration. More specifically, we must encode for every batch a number $T_{i,j} = o(2^b) + \mathrm{poly}(d) = o(2^b) + \mathrm{poly}(n)$ (recall that $d = o(n)$) and apply Lemma 2.1 $T_{i,j}$ times.
This results in $\psi_z(W_{i,j})$. If we know $z, \Psi_\ell$ then we can retrieve $\psi_z$ and efficiently retrieve $W_{i,j}$ using only $O(\log d) = O(\log n)$ additional bits (by iterating over all values in $\mathbb{F}^d$). Therefore, in every
Algorithm 3: SGD'
1: i ← 1  // epoch counter
2: W_{1,1} is an initial model
3: while True do
4:     Take a random permutation of X, divided into batches {B_{i,j}}_{j=1}^{n/b}
5:     for j from 1 to n/b do
6:         if acc(W_{i,j}, X) ≥ 1/(2(1−ε)) then Return W_{i,j}
7:         W_{i,j+1} ← GD(W_{i,j}, B_{i,j}, 1/(2(1−2ε)))
8:     i ← i + 1, W_{i,1} ← W_{i−1, n/b+1}
iteration we have the following additional terms: $\log T + O(\log n) + \ell = o(b) + O(\log n)$. Summing over $n/b$ iterations we get $o(n)$ per epoch. We state the following lemma, analogous to Lemma 3.1.
Lemma 4.1. For Algorithm 3 it holds w.h.p. that $\forall i \in [t]$: $K(r_i \mid W_{i+1,1}, X, \Psi_\ell) \le n\log\frac{n}{e} - b\rho_i + \beta(n,b) + o(n)$.
We show that under our assumptions, Algorithm 3 must terminate, leading to a contradiction.
Lemma 4.2. Algorithm 3 with $b = \Omega(\log n)$ terminates within $O(T)$ iterations w.h.p.
Proof. Our goal is to lower bound $\rho_i = \sum_{j=2}^{n/b+1} D_{KL}(\lambda'_{i,j} \,\|\, \varphi_{i,j})$. Let us first upper bound $\lambda'_{i,j}$. Using the fact that $\lambda'_{i,j} \le \frac{n\lambda_{i,j}}{(j-1)b}$ (Lemma A.5) combined with the fact that $\lambda_{i,j} \le \frac{1}{2(1-\epsilon)}$ as long as the algorithm does not terminate, we get that $\forall j \in [2, n/b+1]$ it holds that $\lambda'_{i,j} \le \frac{n}{2(1-\epsilon)(j-1)b}$. Using the above we conclude that as long as we do not terminate it must hold that $\lambda'_{i,j} \le \frac{1}{2(1-\epsilon)^2}$ whenever $j \in I = [(1-\epsilon)n/b + 1, n/b+1]$. That is, $\lambda'_{i,j}$ must be close to $\lambda_{i,j}$ towards the end of the epoch, and therefore must be sufficiently small. Note that $|I| \ge \epsilon n/b$. We know that as long as the algorithm does not terminate it holds that $\varphi_{i,j} > \frac{1}{2(1-2\epsilon)}$ with some constant probability. Furthermore, this probability is taken over the randomness used in the call to GD (the randomness of the perturbation). This fact allows us to use Hoeffding-type bounds for the $\varphi_{i,j}$ variables. If $\varphi_{i,j} > \frac{1}{2(1-2\epsilon)}$ we say that it is good. Therefore, in expectation a constant fraction of the $\varphi_{i,j}$, $j \in I$, are good. Applying a Hoeffding-type bound we get that w.h.p. a constant fraction of the $\varphi_{i,j}$, $j \in I$, are good. Denote these good indices by $I_g \subseteq I$. We are now ready to bound $\rho_i$.
$$\rho_i = \sum_{j=2}^{n/b+1} D_{KL}(\lambda'_{i,j} \,\|\, \varphi_{i,j}) \;\ge\; \sum_{j \in I_g} D_{KL}(\lambda'_{i,j} \,\|\, \varphi_{i,j}) \;\ge\; \sum_{j \in I_g} D_{KL}\!\left(\frac{1}{2(1-\epsilon)^2} \,\Big\|\, \frac{1}{2(1-2\epsilon)}\right)$$
$$\ge\; \Theta\!\left(\frac{\epsilon n}{b}\right) \cdot \left(\frac{1}{2(1-2\epsilon)} - \frac{1}{2(1-\epsilon)^2}\right)^{2} \;=\; \Theta\!\left(\frac{n}{b}\right) \cdot \epsilon^{5} \;=\; \Theta\!\left(\frac{n}{b}\right)$$
where in the transitions we used the fact that KL-divergence is non-negative, together with Pinsker's inequality. Finally, requiring that $b = \Omega(\log n)$ we get that $b\rho_i - \beta(n,b) - o(n) = \Theta(n) - \Theta(\frac{n\log n}{\log^2 n}) - o(n) = \Theta(n)$. Following the same calculation as in Corollary 3.3, this guarantees termination within $O(\frac{\log n}{n})$ epochs, or $O(T \cdot \frac{n}{b} \cdot \frac{\log n}{n}) = O(T)$ iterations (gradient descent steps).
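For concreteness, the gap between the two Bernoulli parameters used in the last transition can be spelled out explicitly (a worked step we add for readability, using the same constant $\epsilon$ as above):
$$\frac{1}{2(1-2\epsilon)} - \frac{1}{2(1-\epsilon)^2} \;=\; \frac{(1-\epsilon)^2 - (1-2\epsilon)}{2(1-2\epsilon)(1-\epsilon)^2} \;=\; \frac{\epsilon^2}{2(1-2\epsilon)(1-\epsilon)^2} \;=\; \Theta(\epsilon^2),$$
so by Pinsker's inequality each good index contributes $\Omega(\epsilon^4)$ to $\rho_i$, and summing over the $\Theta(\epsilon n/b)$ good indices gives $\Theta(\epsilon^5 n/b)$, which is $\Theta(n/b)$ for constant $\epsilon$.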
The above leads to a contradiction. It is critical to note that the above does not hold if $T = 2^m = 2^b$ or if $\ell = \Theta(n)$, as both would imply that the $o(n)$ term becomes $\Theta(n)$. We state our main theorem:
Theorem 4.3. For any differentiable and L-smooth model class with $d$ parameters and a perturbation class $\Psi_\ell$ such that $\ell = o(m)$, there exists an input dataset $Y$ of size $m$ such that GD requires $\Omega(2^m)$ iterations to achieve $\delta$ accuracy on $Y$, even if $\delta = 1/2 + \Theta(1)$ and $d \gg m$.
A OMITTED PROOFS AND EXPLANATIONS
A.1 REPRESENTING SETS AND PERMUTATIONS
Throughout this paper, we often consider the value $K(A)$ where $A$ is a set. Here the program computing $A$ need only output the elements of $A$ (in any order). When considering $K(A \mid B)$ such that $A \subseteq B$, it holds that $K(A \mid B) \le \lceil \log\binom{|B|}{|A|} \rceil + O(\log|B|)$. To see why, consider Algorithm 4. In the algorithm, $i_A$ is the index of $A$ when considering some ordering of all subsets of $B$ of size $|A|$. Thus $\lceil \log\binom{|B|}{|A|} \rceil$ bits are sufficient to represent $i_A$. The remaining variables $i, m_A, m_B$ and any
Algorithm 4: Compute A given B as input
1: m_A ← |A|, m_B ← |B|, i ← 0, i_A is a target index
2: for every subset C ⊆ B s.t. |C| = m_A (in a predetermined order) do
3:     if i = i_A then Print C
4:     i ← i + 1
additional variables required to construct the set C are all of size at most O(log |B|) and there is at most a constant number of them.
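As an illustration of this encoding (a small self-contained sketch of our own, not code from the paper; it mirrors Algorithm 4 by scanning subsets in a fixed order), the index i_A plus knowledge of B and |A| suffices to recover A:

```python
from itertools import combinations
from math import comb, ceil, log2

def encode_subset(A, B):
    """Return i_A: the position of A in a predetermined ordering of all |A|-subsets of B."""
    target = tuple(sorted(A))
    for i, C in enumerate(combinations(sorted(B), len(A))):
        if C == target:
            return i
    raise ValueError("A is not a subset of B")

def decode_subset(i_A, size_A, B):
    """Mirror of Algorithm 4: scan subsets of B in the same order until index i_A."""
    for i, C in enumerate(combinations(sorted(B), size_A)):
        if i == i_A:
            return set(C)

B = set(range(6))
A = {1, 4, 5}
i_A = encode_subset(A, B)
assert decode_subset(i_A, len(A), B) == A
print(i_A, "fits in", ceil(log2(comb(6, 3))), "bits")   # ceil(log C(|B|,|A|)) bits suffice
```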
During our analysis, we often bound the Kolmogorov complexity of tuples of objects. For example, $K(A, P \mid B)$ where $A \subseteq B$ is a set and $P : A \to [|A|]$ is a permutation of $A$ (note that $A, P$ together form an ordered tuple of the elements of $A$). Instead of explicitly presenting a program such as Algorithm 4, we say that if $K(A \mid B) \le c_1$ and $c_2$ bits are sufficient to represent $P$, then $K(A, P \mid B) \le c_1 + c_2 + O(1)$. This just means that we directly encode $P$ as a variable in the program that computes $A$ given $B$ and use it in the code. For example, we can add a permutation to Algorithm 4 and output an ordered tuple of elements rather than a set. Note that when representing a permutation of $A$, $|A| = k$, instead of using functions we can just talk about values encoded using $\lceil \log k! \rceil$ bits. That is, we can decide on some predetermined ordering of all permutations of $k$ elements, and represent a permutation as its number in this ordering.
A.2 OMITTED PROOFS FOR SECTION 2
Lemma A.1. For p ∈ [0, 1] it holds that h(p) ≤ p log(e/p).
Proof. Let us write our lemma as:
$$h(p) = -p\log p - (1-p)\log(1-p) \le p\log(e/p)$$
Rearranging we get:
$$-(1-p)\log(1-p) \le p\log p + p\log(1/p) + p\log e \;\implies\; -(1-p)\log(1-p) \le p\log e \;\implies\; -\ln(1-p) \le \frac{p}{1-p}$$
Note that $-\ln(1-p) = \int_0^p \frac{1}{1-x}\,dx \le p \cdot \frac{1}{1-p}$, where in the final transition we use the fact that $\frac{1}{1-x}$ is monotonically increasing on $[0, 1)$. This completes the proof.
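A quick numerical sanity check of this bound (our own snippet, not part of the original proof; entropy is measured in bits, as in the text):

```python
import math

def h(p):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.01, 0.1, 0.25, 0.5, 0.75, 0.99):
    bound = p * math.log2(math.e / p)
    assert h(p) <= bound + 1e-12
    print(f"p={p:.2f}  h(p)={h(p):.4f}  <=  p*log2(e/p)={bound:.4f}")
```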
Lemma A.2. For $p, \gamma, q \in [0, 1]$ where $p\gamma \le q$ and $(1-p)\gamma \le 1-q$, it holds that
$$q\,h\!\left(\frac{p\gamma}{q}\right) + (1-q)\,h\!\left(\frac{(1-p)\gamma}{1-q}\right) \;\le\; h(\gamma) - \gamma D_{KL}(p \,\|\, q)$$
Proof. Let us expand the left hand side using the definition of entropy:
$$q\,h\!\left(\frac{p\gamma}{q}\right) + (1-q)\,h\!\left(\frac{(1-p)\gamma}{1-q}\right)$$
$$= -q\left(\frac{p\gamma}{q}\log\frac{p\gamma}{q} + \Big(1-\frac{p\gamma}{q}\Big)\log\Big(1-\frac{p\gamma}{q}\Big)\right) - (1-q)\left(\frac{(1-p)\gamma}{1-q}\log\frac{(1-p)\gamma}{1-q} + \Big(1-\frac{(1-p)\gamma}{1-q}\Big)\log\Big(1-\frac{(1-p)\gamma}{1-q}\Big)\right)$$
$$= -\left(p\gamma\log\frac{p\gamma}{q} + (q-p\gamma)\log\frac{q-p\gamma}{q}\right) - \left((1-p)\gamma\log\frac{(1-p)\gamma}{1-q} + \big((1-q)-(1-p)\gamma\big)\log\frac{(1-q)-(1-p)\gamma}{1-q}\right)$$
$$= -\gamma\log\gamma - \gamma D_{KL}(p\,\|\,q) - (q-p\gamma)\log\frac{q-p\gamma}{q} - \big((1-q)-(1-p)\gamma\big)\log\frac{(1-q)-(1-p)\gamma}{1-q}$$
where in the last equality we simply sum the first terms on both lines. To complete the proof we use the log-sum inequality for the last expression. The log-sum inequality states that if $\{a_k\}_{k=1}^m, \{b_k\}_{k=1}^m$ are non-negative numbers and $a = \sum_{k=1}^m a_k$, $b = \sum_{k=1}^m b_k$, then $\sum_{k=1}^m a_k\log\frac{a_k}{b_k} \ge a\log\frac{a}{b}$. We apply the log-sum inequality with $m = 2$, $a_1 = q - p\gamma$, $a_2 = (1-q) - (1-p)\gamma$, $a = 1 - \gamma$ and $b_1 = q$, $b_2 = 1-q$, $b = 1$, getting that:
$$(q-p\gamma)\log\frac{q-p\gamma}{q} + \big((1-q)-(1-p)\gamma\big)\log\frac{(1-q)-(1-p)\gamma}{1-q} \;\ge\; (1-\gamma)\log(1-\gamma)$$
Putting everything together we get that
$$-\gamma\log\gamma - \gamma D_{KL}(p\,\|\,q) - (q-p\gamma)\log\frac{q-p\gamma}{q} - \big((1-q)-(1-p)\gamma\big)\log\frac{(1-q)-(1-p)\gamma}{1-q}$$
$$\le\; -\gamma\log\gamma - (1-\gamma)\log(1-\gamma) - \gamma D_{KL}(p\,\|\,q) \;=\; h(\gamma) - \gamma D_{KL}(p\,\|\,q)$$
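A randomized numerical sanity check of Lemma A.2 (our own snippet, not part of the original proof; entropy and KL-divergence in bits, and the constraints on q are enforced by construction):

```python
import math, random

def h(p):
    p = min(max(p, 0.0), 1.0)
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def dkl(p, q):
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

random.seed(0)
for _ in range(10_000):
    p, g = random.uniform(0.01, 0.99), random.uniform(0.01, 0.99)
    # choose q so that p*g <= q and (1-p)*g <= 1-q, as the lemma requires
    q = random.uniform(p * g, 1 - (1 - p) * g)
    lhs = q * h(p * g / q) + (1 - q) * h((1 - p) * g / (1 - q))
    rhs = h(g) - g * dkl(p, q)
    assert lhs <= rhs + 1e-9
print("Lemma A.2 bound held on all sampled (p, gamma, q).")
```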
Lemma 2.4. Let $A \subseteq B$, $|B| = m$, $|A| = \gamma m$, and let $g : B \to \{0, 1\}$. For any set $Y \subseteq B$ let $Y_1 = \{x \mid x \in Y, g(x) = 1\}$, $Y_0 = Y \setminus Y_1$ and $\kappa_Y = \frac{|Y_1|}{|Y|}$. It holds that
$$K(A \mid B, g) \;\le\; m\gamma\big(\log(e/\gamma) - D_{KL}(\kappa_B \,\|\, \kappa_A)\big) + O(\log m)$$
Proof. The algorithm is very similar to Algorithm 4; the main difference is that we must first compute $B_1, B_0$ from $B$ using $g$, and select $A_1, A_0$ from $B_1, B_0$, respectively, using two indices $i_{A_1}, i_{A_0}$. Finally we print $A = A_1 \cup A_0$. We can now bound the number of bits required to represent $i_{A_1}, i_{A_0}$. Note that $|B_1| = \kappa_B m$ and $|B_0| = (1-\kappa_B)m$, and that for $A_1$ we pick $\gamma\kappa_A m$ elements from $\kappa_B m$ elements while for $A_0$ we pick $\gamma(1-\kappa_A)m$ elements from $(1-\kappa_B)m$ elements. The number of bits required to represent this selection is:
$$\left\lceil \log\binom{\kappa_B m}{\gamma\kappa_A m} \right\rceil + \left\lceil \log\binom{(1-\kappa_B)m}{\gamma(1-\kappa_A)m} \right\rceil \;\le\; \kappa_B m\, h\!\left(\frac{\gamma\kappa_A}{\kappa_B}\right) + (1-\kappa_B)m\, h\!\left(\frac{\gamma(1-\kappa_A)}{1-\kappa_B}\right)$$
$$\le\; m\big(h(\gamma) - \gamma D_{KL}(\kappa_B \,\|\, \kappa_A)\big) \;\le\; m\gamma\big(\log(e/\gamma) - D_{KL}(\kappa_B \,\|\, \kappa_A)\big)$$
where in the first inequality we used the fact that $\forall\, 0 \le k \le n,\ \log\binom{n}{k} \le n h(k/n)$, Lemma A.2 in the second transition, and Lemma A.1 in the third transition. Note that when $\kappa_A \in \{0, 1\}$ we only have one term of the initial sum. For example, for $\kappa_A = 1$ we get:
$$\left\lceil \log\binom{\kappa_B m}{\gamma\kappa_A m} \right\rceil \;=\; \left\lceil \log\binom{\kappa_B m}{\gamma m} \right\rceil \;\le\; \kappa_B m\, h\!\left(\frac{\gamma}{\kappa_B}\right) \;\le\; m\gamma\log(e\kappa_B/\gamma) \;=\; m\gamma\big(\log(e/\gamma) - \log(1/\kappa_B)\big)$$
A similar computation yields $m\gamma(\log(e/\gamma) - \log(1/(1-\kappa_B)))$ for $\kappa_A = 0$. Finally, the additional $O(\log m)$ factor is due to various counters and variables, similarly to Algorithm 4.
A.3 OMITTED PROOFS FOR SECTION 3
Lemma A.3. It holds that $K(r, W_{1,1} \mid X) \le K(W_{t+1,1}) + O(t) + \sum_{k=1}^{t} K(r_k \mid X, W_{k+1,1})$.
Proof. Similarly to the definition of $Y$ in Lemma 3.1, let $Y'$ be the program which receives $X, r_i, W_{i+1,1}$ as input and repeatedly applies Theorem 2.2 to retrieve $W_{i,1}$. As $Y'$ just needs to reconstruct all batches from $X, r_i$ and call $Y$ for $n/b$ times, it holds that $K(Y') = O(\log n)$. Using the subadditivity and extra information properties of $K(\cdot)$, together with the fact that $W_{1,1}$ can be reconstructed given $X, W_{t+1,1}, Y'$, we write the following:
$$K(r \mid X) \le K(r, W_{1,1}, Y', W_{t+1,1} \mid X) + O(1) \le K(W_{1,1}, W_{t+1,1}, Y' \mid X) + K(r \mid X, Y', W_{t+1,1}) + O(1)$$
$$\le K(W_{t+1,1} \mid X) + K(r \mid X, Y', W_{t+1,1}) + O(\log n)$$
First, we note that $\forall i \in [t-1]$, $K(r_i \mid X, Y', W_{i+2,1}, r_{i+1}) \le K(r_i \mid X, Y', W_{i+1,1}) + O(1)$, where in the last inequality we simply execute $Y'$ on $X, W_{i+2,1}, r_{i+1}$ to get $W_{i+1,1}$. Let us write:
$$K(r_1 r_2 \ldots r_t \mid X, Y', W_{t+1,1}) \le K(r_t \mid X, Y', W_{t+1,1}) + K(r_1 r_2 \ldots r_{t-1} \mid X, Y', W_{t+1,1}, r_t) + O(1)$$
$$\le K(r_t \mid X, W_{t+1,1}) + K(r_1 r_2 \ldots r_{t-1} \mid X, Y', W_{t,1}) + O(1)$$
$$\le K(r_t \mid X, W_{t+1,1}) + K(r_{t-1} \mid X, W_{t,1}) + K(r_1 r_2 \ldots r_{t-2} \mid X, Y', W_{t-1,1}) + O(1)$$
$$\le \cdots \le O(t) + \sum_{k=1}^{t} K(r_k \mid X, W_{k+1,1})$$
Combining everything together we get that:
$$K(r \mid X) \;\le\; K(W_{t+1,1}) + O(t) + \sum_{k=1}^{t} K(r_k \mid X, W_{k+1,1})$$
Corollary 3.3. If $\forall k,\ \frac{1}{k}\sum_{i=1}^{k}\rho_i > \beta(n,b) + \gamma$ for $\gamma = \Omega(b^{-1}\log n)$, then w.h.p. SGD terminates within $O(1)$ epochs.
Proof. Let us simplify Inequality 1.
$$d + tn\log(n/e) - O(\log n) \;\le\; d + t\Big[n\log(n/e) + \frac{n \cdot O(\log n)}{b} + O(\log n)\Big] - \sum_{i=1}^{t} b\rho_i$$
$$\implies\; -O(\log n) \;\le\; t\Big[\frac{n \cdot O(\log n)}{b} + O(\log n)\Big] - \sum_{i=1}^{t} b\rho_i$$
$$\implies\; \Big(\sum_{i=1}^{t} \rho_i\Big) - t\beta(n,b) \;\le\; O(\log n)/b$$
Our condition implies that $\sum_{i=1}^{t}\rho_i > t(\beta(n,b) + \gamma)$. This allows us to rewrite the above inequality as:
$$t\gamma \le O(\log n)/b \;\implies\; t = O(1)$$
A.4 OMITTED PROOFS FOR SECTION 4
Lemma A.4. Let X be some set of size n and let f : X → {0, 1} be a random binary function. It holds w.h.p that there exists no function g : X → {0, 1} such that K(g | X) = o(n) and g agrees with f on n(1/2 + Θ(1)) elements in X .
Proof. Let us assume that $g$ agrees with $f$ on all except $\epsilon n$ elements in $X$ and bound $\epsilon$. Using Theorem 2.3, it holds w.h.p. that $K(f \mid X) > n - O(\log n)$. We show that if $\epsilon$ is sufficiently far from $1/2$, we can use $g$ to compress $f$ below its Kolmogorov complexity, arriving at a contradiction.
We can construct $f$ using $g$ and the set of values on which they do not agree, which we denote by $D$. This set is of size $\epsilon n$ and therefore can be encoded using $\log\binom{n}{\epsilon n} \le n h(\epsilon)$ bits given $X$ (recall that $\forall\, 0 \le k \le n,\ \log\binom{n}{k} \le n h(k/n)$), i.e., $K(D \mid X) \le n h(\epsilon)$. To compute $f(x)$ using $D, g$ we simply check if $x \in D$ and output $g(x)$ or $1 - g(x)$ accordingly. The total number of bits required for the above is $K(g, D \mid X) \le o(n) + n h(\epsilon)$ (where auxiliary variables are subsumed in the $o(n)$ term). We conclude that $K(f \mid X) \le o(n) + n h(\epsilon)$. Combining the upper and lower bounds on $K(f \mid X)$, it must hold that $o(n) + n h(\epsilon) \ge n - O(\log n) \implies h(\epsilon) \ge 1 - o(1)$. This inequality only holds when $\epsilon = 1/2 + o(1)$.
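To see numerically how the final step forces ε toward 1/2 (a small illustration of our own, not part of the proof):

```python
import numpy as np

eps = np.linspace(0.001, 0.999, 9999)
h = -(eps * np.log2(eps) + (1 - eps) * np.log2(1 - eps))
compatible = eps[h >= 0.999]                 # the epsilons allowed by h(eps) >= 1 - o(1)
print(compatible.min(), compatible.max())    # roughly 0.48 .. 0.52, i.e. eps = 1/2 + o(1)
```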
Lemma A.5. It holds that $1 - \frac{n(1-\lambda_{i,j})}{(j-1)b} \le \lambda'_{i,j} \le \frac{n\lambda_{i,j}}{(j-1)b}$.
Proof. We can write the following for $j \in [2, n/b+1]$:
$$n\lambda_{i,j} = \sum_{x \in X} acc(W_{i,j}, x) = \sum_{x \in X_{i,j-1}} acc(W_{i,j}, x) + \sum_{x \in X \setminus X_{i,j-1}} acc(W_{i,j}, x) = (j-1)b\,\lambda'_{i,j} + (n - (j-1)b)\,\lambda''_{i,j}$$
$$\implies\; \lambda'_{i,j} = \frac{n\lambda_{i,j} - (n - (j-1)b)\,\lambda''_{i,j}}{(j-1)b}$$
Setting $\lambda''_{i,j} = 0$ we get
$$\lambda'_{i,j} = \frac{n\lambda_{i,j} - (n - (j-1)b)\,\lambda''_{i,j}}{(j-1)b} \;\le\; \frac{n\lambda_{i,j}}{(j-1)b}$$
and setting $\lambda''_{i,j} = 1$ we get
$$\lambda'_{i,j} = \frac{n\lambda_{i,j} - (n - (j-1)b)\,\lambda''_{i,j}}{(j-1)b} \;\ge\; 1 - \frac{n(1-\lambda_{i,j})}{(j-1)b}$$
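A small numerical illustration of this decomposition and the resulting bounds (our own sketch; the 0/1 vector stands in for the per-example accuracies $acc(W_{i,j}, x)$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, b, j = 1000, 50, 7
correct = rng.integers(0, 2, size=n)            # acc(W_{i,j}, x) for every x in X
lam = correct.mean()                            # lambda_{i,j}
lam_prime = correct[: (j - 1) * b].mean()       # accuracy on the first j-1 batches
lam_dprime = correct[(j - 1) * b :].mean()      # accuracy on the remaining elements

# n*lambda = (j-1)*b*lambda' + (n-(j-1)*b)*lambda''
assert np.isclose(n * lam, (j - 1) * b * lam_prime + (n - (j - 1) * b) * lam_dprime)
# Lemma A.5 bounds
assert 1 - n * (1 - lam) / ((j - 1) * b) <= lam_prime <= n * lam / ((j - 1) * b)
```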
B EXPERIMENTS
Experimental setup We perform experiments on the MNIST dataset and on the same dataset with random labels (MNIST-RAND). We use SGD with learning rate 0.01, without momentum or regularization. We use a simple fully connected architecture with a single hidden layer, GELU activation units (a differentiable alternative to ReLU) and cross-entropy loss. We run experiments with a hidden layer of size 2, 5, and 10, and consider batches of size 50, 100, and 200. For each of the datasets we run experiments for all configurations of architecture sizes and batch sizes for 300 epochs.
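A minimal PyTorch sketch of this setup, including the per-batch accuracy-discrepancy measurement $\rho_{i,j} = D_{KL}(\lambda'_{i,j}\,\|\,\varphi_{i,j})$ (our own reconstruction of the setup described above, not the authors' code; the hyperparameters follow the text, everything else is an assumption):

```python
import math
import torch, torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

hidden, batch_size, lr, epochs = 10, 100, 0.01, 300   # one configuration from the text

train = datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor())
train.targets = torch.randint(0, 10, train.targets.shape)   # MNIST-RAND: random labels

loader = DataLoader(train, batch_size=batch_size, shuffle=True)
model = nn.Sequential(nn.Flatten(), nn.Linear(784, hidden), nn.GELU(), nn.Linear(hidden, 10))
opt = torch.optim.SGD(model.parameters(), lr=lr)
loss_fn = nn.CrossEntropyLoss()

def kl(p, q, eps=1e-12):
    # KL divergence between Bernoulli(p) and Bernoulli(q), in bits
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

for epoch in range(epochs):
    seen_x, seen_y, rho = [], [], 0.0
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        seen_x.append(x); seen_y.append(y)
        with torch.no_grad():
            phi = (model(x).argmax(1) == y).float().mean().item()   # accuracy on the batch just used
            lam_p = (model(torch.cat(seen_x)).argmax(1) == torch.cat(seen_y)).float().mean().item()  # accuracy on all seen batches
        rho += kl(lam_p, phi)
    print(epoch, rho)   # per-epoch accuracy discrepancy (re-evaluating all seen data is slow but simple)
```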
Results Figure 2 and Figure 3 show the accuracy discrepancy and accuracy over epochs for all configurations for MNIST and MNIST-RAND, respectively. Figure 4 and Figure 5 show, for every batch size, the accuracy discrepancy of all three model sizes on the same plot. All of the values displayed are averaged over epochs, i.e., the value for epoch $t$ is $\frac{1}{t}\sum_{i=1}^{t} x_i$.
First, we indeed observe that the scale of the accuracy discrepancy is inversely quadratic in the batch size, as our analysis suggests. Second, for MNIST-RAND we can clearly see that the average accuracy discrepancy tends below a certain threshold over time, where the threshold appears to be independent of the number of model parameters. We see similar results for MNIST when the model is small, but not when it is large. This is because the model does not reach its capacity within the timeframe of our experiment.

1. What is the focus and contribution of the paper regarding dynamics of mini-batch SGD?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical analysis?
3. Do you have any concerns about the assumptions made in the paper, such as invertibility?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper studies the dynamics of mini-batch SGD using Kolmogorov complexity. They define a notion of accuracy discrepancy as a KL-divergence between the accuracy of the model on previous batches in the current epoch and on the last batch. Using the fact that the random strings used for generating the epoch's permutation are incompressible, they bound the accuracy discrepancy throughout the algorithm execution.
Strengths And Weaknesses
I have the following concerns about the paper:
-- As I understand, the discussion near Theorem 2.2 is really important, since it shows that the process is invertible. Moreover, the algorithm actually relies on the fact that the set of possible values is finite, to find the original point in finite time.
However, the conditions in Theorem 2.2 don’t necessarily hold in practice. To give an example, for f(x)=x^2 / 2, with the step size 1e-16, for the double type, the gradient step starting from points 1e16 + 2 and 1e16 produce the same result.
While I understand that the example is artificial, it suffices to show that the process is, in general, not invertible (when working in doubles). I’m not sure, but it might be important for the rest of the paper, since you do rely on the invertibility. I think that it should be clarified why the conditions of Theorem 2.2 actually hold.
-- “It is clear that w.h.p we cannot train a model with d parameters that achieves any accuracy better than 1/2 + o(1) on X” - I’m not sure this statement is true. You probably should use non-random labels and counting argument.
Clarity, Quality, Novelty And Reproducibility
Minor issues:
-- Theorem 4.3: I don't understand what 1/2 + Θ(1) means
-- Instead of $\widetilde{\nabla f_A}$ (and others), I would write $\nabla\tilde{f}_A$. The long tilde looks really weird
-- I think "w.h.p." is commonly defined as 1 - n^-c for an arbitrary large c
-- Page 6: "Where in the above we used Stirling's approximation" and "Where the efficiency of reconstruction is expressed via ρ_i": should start with commas (instead of being new sentences).
-- I think the relation between f and acc was never mentioned
d+ tn log(n/e)−O(log n) ≤ d+ t[n(log(n/e) + n ·O(log n) b +O(log n)]− t∑ i=1 bρi (1)
=⇒ 1 t t∑ i=1 ρi ≤ n ·O(log n) b2 + O(log n)
b︸ ︷︷ ︸ β(n,b)
+ O(log n)
bt = O(
n log n
b2 )
Let β(n, b) be the exact value of the asymptotic expression in Inequality 1. Theorem 3.2 says that as long as SGD does not terminate the average accuracy discrepeancy cannot be too high. Using the contra-positive we get the following useful corollary (proof is deferred to Appendix A.3).
Corollary 3.3. If ∀k, 1k ∑k i=1 ρi > β(n, b) + γ, for γ = Ω(b
−1 log n), then w.h.p SGD terminates within O(1) epochs.
The case for weak models Using the above we can also derive some interesting negative results when the model is not expressive enough to get perfect accuracy on the data. It must be the case that the average accuracy discrepancy tends below β(n, b) over time. We verify this experimentally on the MNIST dataset (Appendix B), showing that the average accuracy indeed drops over time when the model is weak compared to the dataset. We also confirm that the dependence of the threshold in b is indeed inversely quadratic.
4 THE ROLE OF RANDOMNESS IN GD INITIALIZATION
Our goal for this section is to show that when the amount of randomness in the perturbation is too small, for any model architecture which is differentiable and L-smooth there are inputs for which Algorithm 2 requires exponential time to terminate, even for extremely overparameterized models.
Perturbation families Let us consider a family of 2` functions indexed by length ` real valued vectors Ψ` = {ψz}z∈R` . Recall that throughout this paper we assume finite precision, thus every z can be represented using O(`) bits. We say that Ψ` is a reversible perturbation family if it holds that ∀z ∈ R`, ψz is one-to-one. We often use the notation Ψ`(W ), which means pick z ∈ R` uniformly at random, and apply ψz(W ). We often refer to Ψ` as simply a perturbation.
We note that the above captures a wide range of natural perturbations. For example ψz(W ) = W+Wz where Wz[i] = z[i mod `]. Clearly ψz(W ) is reversible.
Gradient descent The GD algorithm we analyze is formally given in Algorithm 2.
Algorithm 2: GD(W,Y, δ) Input: initial model W , dataset Y , desired accuracy δ 1 i = 1, T = o(2m) + poly(d) 2 W = Ψ`(W ) 3 while acc(W,Y ) < δ and i < T do 4 W ←W − α∇fY (W ) 5 i← i+ 1 6 Return W
Let us denote by m the number of elements in Y . We make the following 2 assumptions for the rest of this section: (1) ` = o(m). (2) There exists T = o(2m) + poly(d) and a perturbation family Ψ` such that for every input W,Y within T iterations GD terminates and returns a solution that has at least δ accuracy on Y with constant probability. We show that the above two assumptions cannot hold together. That is, if the amount of randomness is sublinear in m, there must be instances with exponential running time, even when d m. To show the above, we define a variant of SGD, which uses GD as a sub procedure (Algorithm 3). Assume that our data set is a binary classification task (it is easy to generalize our results to any number of classes), and that elements in X are assigned random labels. Furthermore, let us assume that d = o(n), e.g., d = n0.99. It holds that w.h.p we cannot train a model with d parameters that achieves any accuracy better than 1/2 + o(1) on X (Lemma A.4). Let us take to be a small constant. We show that if assumptions 1 and 2 hold, then Algorithm 3 must terminate and return a model with 1/2 + Θ(1) accuracy on X , leading to a contradiction. Our analysis follows the same line as the previous section, and uses the same notation.
Reversibility First, we must show that Algorithm 3 is still reversible. Note that we can take the same approach as before, where the only difference is that in order to get Wi,j from Wi,j+1 we must now get all the intermediate values from the call to GD. As the GD steps are applied to the same batch, this amounts to applying Lemma 2.1 several times instead of once per iteration. More specifically, we must encode for every batch a number Ti,j = o(2b) + poly(d) = o(2b) + poly(n) (recall that d = o(n)) and apply Lemma 2.1 Ti,j times.
This results in ψz(Wi,j). If we know z,Ψ` then we can retrieve ψz and efficiently retrieve Wi,j using only O(log d) = O(log n) additional bits (by iterating over all values in Fd). Therefore, in every
Algorithm 3: SGD’ 1 i← 1 // epoch counter 2 W1,1 is an initial model 3 while True do 4 Take a random permutation of X , divided into batches {Bi,j}n/bj=1 5 for j from 1 to n/b do 6 if acc(Wi,j , X) ≥ 1/2(1− ) then Return Wi,j 7 Wi,j+1 ← GD(Wi,j , Bi,j , 12(1−2 ) ) 8 i← i+ 1, Wi,1 ←Wi−1,n/b+1
iteration we have the following additional terms: log T +O(log n) + ` = o(b) +O(log n). Summing over n/b iterations we get o(n) per epoch. We state the following Lemma analogous to Lemma 3.1. Lemma 4.1. For Algorithm 3 it holds w.h.p that ∀i ∈ [t] that: K(ri |Wi+1,1, X,Ψ`) ≤ n log ne − bρi + β(n, b) + o(n).
We show that under our assumptions, Algorithm 3 must terminate, leading to a contradiction. Lemma 4.2. Algorithm 3 with b = Ω(log n) terminates within O(T ) iterations w.h.p.
Proof. Our goal is to lower bound ρi = ∑n/b+1 j=2 DKL(λ ′ i,j ‖ ϕi,j). Let us first upper bound λ′i,j . Using the fact that λ′i,j ≤ nλi,j (j−1)b (Lemma A.5) combined with the fact that λi,j ≤ 1/2(1− ) as long as the algorithm does not terminate, we get that ∀j ∈ [2, n/b+ 1] it holds that λ′i,j ≤ n2(1− )(j−1)b . Using the above we conclude that as long as we do not terminate it must hold that λ′i,j ≤ 12(1− )2 whenever j ∈ I = [(1− )n/b+ 1, n/b+ 1]. That is, λ′i,j must be close to λi,j towards the end of the epoch, and therefore must be sufficiently small. Note that |I| ≥ n/b. We know that as long as the algorithm does not terminate it holds that ϕi,j > 1/2(1− 2 ) with some constant probability. Furthermore, this probability is taken over the randomness used in the call to GD (the randomness of the perturbation). This fact allows us to use Hoeffding-type bounds for the ϕi,j variables. If ϕi,j > 1/2(1− 2 ) we say that it is good. Therefore in expectation a constant fraction of ϕi,j , j ∈ I are good. Applying a Hoeffding type bound we get that w.h.p a constant fraction of ϕi,j , j ∈ I are good. Denote these good indices by Ig ⊆ I . We are now ready to bound ρi.
ρi = n/b+1∑ j=2 DKL(λ ′ i,j ‖ ϕi,j) ≥ ∑ j∈Ig DKL(λ ′ i,j ‖ ϕi,j) ≥ ∑ j∈Ig DKL( 1 2(1− )2 ‖ 1 2(1− 2 ) )
≥ Θ(n b ) · ( 1 2(1− 2 ) − 1 2(1− )2 )2 = Θ( n b ) · 5 = Θ(n b )
Where in the transitions we used the fact that KL-divergence is non-negative, and Pinsker’s inequality. Finally, requiring that b = Ω(log n) we get that bρi−β(n, b)− o(n) = Θ(n)−Θ(n lognlog2 n )− o(n) = Θ(n). Following the same calculation as in Corollary 3.3, this guarantees termination within O( lognn ) epochs, or O(T · nb · logn n ) = O(T ) iterations (gradient descent steps).
The above leads to a contradiction. It is critical to note that the above does not hold if T = 2m = 2b or if ` = Θ(n), as both would imply that the o(n) term becomes Θ(n). We state our main theorem: Theorem 4.3. For any differentiable and L-smooth model class with d parameters and a perturbation class Ψ` such that ` = o(m) there exist an input data set Y of size m such that GD requires Ω(2m) iterations to achieve δ accuracy on Y , even if δ = 1/2 + Θ(1) and d m.
A OMITTED PROOFS AND EXPLENATIONS
A.1 REPRESENTING SETS AND PERMUTATIONS
Throughout this paper, we often consider the value K(A) where A is a set. Here the program computing A need only output the elements of A (in any order). When considering K(A | B) such that A ⊆ B, it holds that K(A | B) ≤ dlog (|B| |A| ) e+O(log |B|). To see why, consider Algorithm 4. In the algorithm iA is the index of A when considering some ordering of all subsets of B of size |A|. Thus dlog (|B| |A| ) e bits are sufficient to represent iA. The remaining variables i,mA,mB and any
Algorithm 4: Compute A given B as input 1 mA ← |A| ,mB ← |B| , i← 0, iA is a target index 2 for every subset C ⊆ B s.t |C| = mA (in a predetermined order) do 3 if i = iA then Print C 4 i← i+ 1
additional variables required to construct the set C are all of size at most O(log |B|) and there is at most a constant number of them.
During our analysis, we often bound the Kolmogorov complexity of tuples of objects. For example, K(A,P | B) where A ⊆ B is a set and P : A → [|A|] is a permutation of A (note that A,P together form an ordered tuple of the elements of A). Instead of explicitly presenting a program such as Algorithm 4, we say that if K(A | B) ≤ c1 and c2 bits are sufficient to represent P , thus K(A,P | B) ≤ c1 + c2 +O(1). This just means that we directly have a variable encoding P into the program that computes A given B and uses it in the code. For example, we can add a permutation to Algorithm 4 and output an ordered tuple of elements rather than a set. Note that when representing a permutation of A, |A| = k, instead of using functions, we can just talk about values in dlog k!e. That is, we can decide on some predetermined ordering of all permutations of k elements, and represent a permutation as its number in this ordering.
A.2 OMITTED PROOFS FOR SECTION 2
Lemma A.1. For p ∈ [0, 1] it holds that h(p) ≤ p log(e/p).
Proof. Let us write our lemma as:
h(p) = −p log p− (1− p) log(1− p) ≤ p log(e/p)
Rearranging we get:
− (1− p) log(1− p) ≤ p log p+ p log(1/p) + p log e =⇒ −(1− p) log(1− p) ≤ p log e
=⇒ − ln(1− p) ≤ p (1− p)
Note that − ln(1− p) = ∫ p 0 1 (1−x)dx ≤ p · 1 (1−p) . Where in the final transition we use the fact that
1 (1−x) is monotonically increasing on [0, 1]. This completes the proof.
Lemma A.2. For p, γ, q ∈ [0, 1] where pγ ≤ q, (1− p)γ ≤ (1− q) it holds that
qh( pγ q ) + (1− q)h( (1− p)γ (1− q) ) ≤ h(γ)− γDKL(p ‖ q)
Proof. Let us expand the left hand side using the definition of entropy:
qh( pγ q ) + (1− q)h( (1− p)γ (1− q) )
= −q(pγ q log pγ q + (1− pγ q ) log(1− pγ q ))
− (1− q)( (1− p)γ (1− q) log (1− p)γ (1− q) + (1− (1− p)γ (1− q) ) log(1− (1− p)γ (1− q) ))
= −(pγ log pγ q + (q − pγ) log q − pγ q )
− ((1− p)γ log (1− p)γ (1− q) + ((1− q)− (1− p)γ) log (1− q)− (1− p)γ 1− q )
= −γ log γ − γDKL(p ‖ q)
− (q − pγ)(log q − pγ q )− ((1− q)− (1− p)γ) log (1− q)− (1− p)γ 1− q )
Where in the last equality we simply sum the first terms on both lines. To complete the proof we use the log-sum inequality for the last expression. The log-sum inequality states that: Let {ak}mk=1 , {bk} m k=1 be non-negative numbers and let a = ∑m k=1 ak, b = ∑m k=1 bk, then ∑m k=1 ai log ai bi ≥ a log ab . We apply the log-sum inequality with m = 2, a1 = q − pγ, a2 = (1 − q) − (1 − p)γ, a = 1 − γ and b1 = q, b2 = 1− q, b = 1, getting that:
(q − pγ)(log q − pγ q ) + ((1− q)− (1− p)γ) log (1− q)− (1− p)γ 1− q ) ≥ (1− γ) log(1− γ)
Putting everything together we get that
− γ log γ − γDKL(p ‖ q)
− (q − pγ)(log q − pγ q )− ((1− q)− (1− p)γ) log (1− q)− (1− p)γ 1− q )
≤ −γ log γ − (1− γ) log(1− γ)− γDKL(p ‖ q) = h(γ)− γDKL(p ‖ q)
Lemma 2.4. Let A ⊆ B, |B| = m, |A| = γm, and let g : B → {0, 1}. For any set Y ⊆ B let Y1 = {x | x ∈ Y, g(x) = 1} , Y0 = Y \ Y1 and κY = |Y1||Y | . It holds that
K(A | B, g) ≤ mγ(log(e/γ)−DKL(κB ‖ κA)) +O(logm)
Proof. The algorithm is very similar to Algorithm 4, the main difference is that we must first compute B1, B0 from B using g, and select A1, A0 from B1, B0, respectively, using two indices iA1 , iA0 . Finally we print A = A1 ∪A0. We can now bound the number of bits required to represent iA1 , iA0 . Note that |B1| = κBm, |B0| = (1− κB)m. Note that for A1 we pick γκAm elements from κBm elements and for A0 we pick γ(1− κA)m elements from (1− κB)m elements. The number of bits required to represent this selection is:
dlog ( κBm
γκAm
) e+ dlog ( (1− κB)m γ(1− κA)m ) e ≤ κBmh( γκA κB ) + (1− κB)mh( γ(1− κA) (1− κB) )
≤ m(h(γ)− γDKL(κB ‖ κA)) ≤ mγ(log(e/γ)−DKL(κB ‖ κA)) Where in the first inequality we used the fact that ∀0 ≤ k ≤ n, log ( n k ) ≤ nh(k/n), Lemma A.2 in the second transition, and Lemma A.1 in the third transition. Note that when κA = 0, 1 We only have one term of the initial sum. For example, for κA = 1 we get:
dlog ( κBm
γκAm
) e = dlog ( κBm
γm
) e ≤ κBmh( γ
κB )
≤ mγ log(eκB/γ) = mγ(log(e/γ)− log(1/κB))
And similar computation yields mγ(log(e/γ)− log(1/(1−κB))) for κA = 0. Finally, the additional O(logm) factor is due to various counters and variables, similarly to Algorithm 4.
A.3 OMITTED PROOFS FOR SECTION 3
Lemma A.3. It holds that K(r,W1,1 | X) ≤ K(Wt+1,1) +O(t) + ∑t k=1K(rk | X,Wk+1,1).
Proof. Similarly to the definition of Y in Lemma 3.1, let Y ′ be the program which receives X, ri,Wi+1,1 as input and repeatedly applies Theorem 2.2 to retrieve Wi,1. As Y ′ just needs to reconstruct all batches from X, ri and call Y for n/b times, it holds that K(Y ′) = O(log n). Using the subadditivity and extra information properties of K(), together with the fact that W1,1 can be reconstructed given X,Wt+1,1, Y ′, we write the following:
K(r | X) ≤ K(r,W1,1, Y ′,Wt+1,1 | X) +O(1) ≤ K(W1,1,,Wt+1,1, Y ′ | X) +K(r | X,Y ′,Wt+1,1) +O(1) ≤ K(Wt+1,1 | X) +K(r | X,Y ′,Wt+1,1) +O(log n)
First, we note that: ∀i ∈ [t − 1],K(ri | X,Y ′,Wi+2,1, ri+1) ≤ K(ri | X,Y ′,Wi+1,1) + O(1). Where in the last inequality we simply execute Y ′ on X,Wi+2,1, ri+1 to get Wi+1,1. Let us write:
K(r1r2 . . . rt | X,Y ′,Wt+1,1) ≤ K(rt | X,Y ′,Wt+1,1) +K(r1r2 . . . rt−1 | X,Y ′,Wt+1,1, rt) +O(1) ≤ K(rt | X,Wt+1,1) +K(r1r2 . . . rt−1 | X,Y ′,Wt,1) +O(1) ≤ K(rt | X,Wt+1,1) +K(rt−1 | X,Wt,1) +K(r1r2 . . . rt−2 | X,Y ′,Wt−1,1) +O(1)
≤ · · · ≤ O(t) + t∑
k=1
K(rk | X,Wk+1,1)
Combining everything together we get that:
K(r | X) ≤ K(Wt+1,1) +O(t) + t∑
k=1
K(rk | X,Wk+1,1)
Corollary 3.3. If ∀k, 1k ∑k i=1 ρi > β(n, b) + γ, for γ = Ω(b
−1 log n), then w.h.p SGD terminates within O(1) epochs.
Proof. Let us simplify Inequality 1.
d+ tn log(n/e)−O(log n) ≤ d+ t[n(log(n/e) + n ·O(log n) b +O(log n)]− t∑ i=1 bρi
=⇒ −O(log n) ≤ t[n ·O(log n) b +O(log n)]− t∑ i=1 bρi
=⇒ ( t∑ i=1 ρi)− tβ(n, b) ≤ O(log n)/b
Our condition implies that ∑t i=1 ρi > t(β(n, b) + γ). This allows us to rewrite the above inequality as:
tγ ≤ O(log n)/b =⇒ t = O(1)
A.4 OMITTED PROOFS FOR SECTION 4
Lemma A.4. Let X be some set of size n and let f : X → {0, 1} be a random binary function. It holds w.h.p that there exists no function g : X → {0, 1} such that K(g | X) = o(n) and g agrees with f on n(1/2 + Θ(1)) elements in X .
Proof. Let us assume that $g$ agrees with $f$ on all except $\epsilon n$ elements in $X$ and bound $\epsilon$. Using Theorem 2.3, it holds w.h.p. that $K(f \mid X) > n - O(\log n)$. We show that if $\epsilon$ is sufficiently far from $1/2$, we can use $g$ to compress $f$ below its Kolmogorov complexity, arriving at a contradiction.
We can construct $f$ using $g$ and the set of values on which they do not agree, which we denote by $D$. This set is of size $\epsilon n$ and therefore can be encoded using $\log\binom{n}{\epsilon n} \leq n h(\epsilon)$ bits given $X$ (recall that $\forall\, 0 \leq k \leq n,\ \log\binom{n}{k} \leq n h(k/n)$), i.e., $K(D \mid X) \leq n h(\epsilon)$. To compute $f(x)$ using $D, g$ we simply check whether $x \in D$ and output $g(x)$ or $1 - g(x)$ accordingly. The total number of bits required for the above is $K(g, D \mid X) \leq o(n) + n h(\epsilon)$ (where auxiliary variables are subsumed in the $o(n)$ term). We conclude that $K(f \mid X) \leq o(n) + n h(\epsilon)$. Combining the upper and lower bounds on $K(f \mid X)$, it must hold that $o(n) + n h(\epsilon) \geq n - O(\log n) \implies h(\epsilon) \geq 1 - o(1)$. This inequality only holds when $\epsilon = 1/2 + o(1)$.
Lemma A.5. It holds that $1 - \frac{n(1-\lambda_{i,j})}{(j-1)b} \leq \lambda'_{i,j} \leq \frac{n\lambda_{i,j}}{(j-1)b}$.
Proof. We can write the following for $j \in [2, n/b+1]$:
$$n\lambda_{i,j} = \sum_{x \in X} acc(W_{i,j}, x) = \sum_{x \in X_{i,j-1}} acc(W_{i,j}, x) + \sum_{x \in X \setminus X_{i,j-1}} acc(W_{i,j}, x) = (j-1)b\,\lambda'_{i,j} + (n - (j-1)b)\,\lambda''_{i,j}$$
$$\implies \lambda'_{i,j} = \frac{n\lambda_{i,j} - (n - (j-1)b)\lambda''_{i,j}}{(j-1)b}$$
Setting $\lambda''_{i,j} = 0$ we get
$$\lambda'_{i,j} = \frac{n\lambda_{i,j} - (n - (j-1)b)\lambda''_{i,j}}{(j-1)b} \leq \frac{n\lambda_{i,j}}{(j-1)b}$$
and setting $\lambda''_{i,j} = 1$ we get
$$\lambda'_{i,j} = \frac{n\lambda_{i,j} - (n - (j-1)b)\lambda''_{i,j}}{(j-1)b} \geq 1 - \frac{n(1-\lambda_{i,j})}{(j-1)b}$$
B EXPERIMENTS
Experimental setup We perform experiments on the MNIST dataset and on the same dataset with random labels (MNIST-RAND). We use SGD with learning rate 0.01, without momentum or regularization. We use a simple fully connected architecture with a single hidden layer, GELU activation units (a differentiable alternative to ReLU), and cross-entropy loss. We run experiments with a hidden layer of size 2, 5, or 10, and batches of size 50, 100, or 200. For each of the datasets we run experiments for all configurations of architecture size and batch size for 300 epochs.
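For concreteness, a minimal PyTorch sketch of this setup could look as follows; the specific layer width, batch size, and data-loading calls are our own illustrative choices matching the description above, not the authors' actual code.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

hidden, batch_size = 10, 100                    # one of the configurations described above
train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, hidden),
                      nn.GELU(), nn.Linear(hidden, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)   # no momentum, no regularization
loss_fn = nn.CrossEntropyLoss()

for epoch in range(300):
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()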
Results Figure 2 and Figure 3 show the accuracy discrepancy and accuracy over epochs for all configurations for MNIST and MNIST-RAND, respectively. Figure 4 and Figure 5 show, for every batch size, the accuracy discrepancy of all three model sizes on the same plot. All of the values displayed are averaged over epochs, i.e., the value for epoch $t$ is $\frac{1}{t}\sum_{i=1}^{t} x_i$.
First, we indeed observe that the scale of the accuracy discrepancy is inversely quadratic in the batch size, as our analysis suggests. Second, for MNIST-RAND we can clearly see that the average accuracy discrepancy tends below a certain threshold over time, where the threshold appears to be independent of the number of model parameters. We see similar results for MNIST when the model is small, but not when it is large. This is because the model does not reach its capacity within the timeframe of our experiment. | 1. What is the focus of the paper regarding dynamics of stochastic gradient descent?
2. What are the strengths of the paper, particularly in its discussion of SGD and the usage of Kolmogorov complexity?
3. What are the weaknesses of the paper, such as issues with clarity, confusion, and limitations in its results?
4. How could the paper improve its clarity and quality, especially in the presentation of its main results and flow of discussion?
5. Are there any concerns regarding the novelty of the paper's techniques compared to prior works? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper considers dynamics of stochastic gradient descent (SGD) and relates accuracy with a notion of "accuracy discrepancy". The paper shows that if the "accuracy discrepancy" is large enough, then SGD can find a model with perfect accuracy, while on the other hand if the "accuracy discrepancy" is small, there exist inputs for which a specific gradient descent component within an SGD algorithm terminates in exponential time.
Strengths And Weaknesses
Overall, the paper is interesting but suffers from clarity issues.
Strengths of the paper:
The paper discusses SGD, which is an important and relevant topic for the learning community.
The paper gives many examples for the introduction / motivation
The usage of Kolmogorov complexity in the analysis of SGD is interesting
Some comments on weaknesses include:
a. It appears that the "reversibility" of Lemma 2.1 and other usages in the paper requires that the smoothness constant L satisfy L>1, but it was not clearly mentioned in the paper. Are there any comments on this and how it affects or puts limitations on the main result?
b. There are parts in the paper that leave a confusing impression. For example, in page 2 "Outline of our techniques" it is stated that "if the accuracy on the entire dataset is 100% we terminate" but immediately at the top of page 3 the paper asks the reader to consider a scenario where "we terminate our algorithm when we achieve 90% accuracy on the entire dataset". This, being in the introduction section, may cause confusion for readers.
c. Following the above point on confusion, there are also other sentences that may require revision. For example, on page 5, what does "private case of Pinsker's inequality" mean? And in the "Representing sets" section, it is said that "some useful bounds" are to be stated there, but there is only one bound Lemma 2.4., is something missing here?
d. Does the result only hold for specific scenarios of SGD? For example, does the results of Section 4 only hold when we need convergence/termination in Alg. 2? What about the scenario when SGD only has one step of GD (such as similar to Algorithm 1)? This seems to show that the results are fairly limited and only suitable for specific scenarios.
e. The main results and flow of the discussion is not very clear and concise. This is also shown through the fact that there are a lot of Lemmas but there is no main Theorem in Section 3 and only one main Theorem at the very end of page 9 in Section 4. The clarity of the paper would be greatly improved with more discussions on the flow of proofs and more interpretations of lemmas.
f. The experiment results were not discussed in detail.
g. Some comments on typos: There are several occurrences in the paper where "perturb" is spelled wrong. Another point is: Is the description of Algorithm 3 missing?
Clarity, Quality, Novelty And Reproducibility
Overall, there is room for clarity and quality. Please refer to the section above for comments on clarity and quality. In terms of originality, the paper provides some discussion of SGD and prior literature but does not provide many comparisons against prior work or prior techniques. Some additional discussion / comments in the paper regarding novelty of techniques would be helpful. |
ICLR | Title
SGD Through the Lens of Kolmogorov Complexity
Abstract
We initiate a thorough study of the dynamics of stochastic gradient descent (SGD) under minimal assumptions using the tools of entropy compression. Specifically, we characterize a quantity of interest which we refer to as the accuracy discrepancy. Roughly speaking, this measures the average discrepancy between the model accuracy on batches and large subsets of the entire dataset. We show that if this quantity is sufficiently large, then SGD finds a model which achieves perfect accuracy on the data in O(1) epochs. On the contrary, if the model cannot perfectly fit the data, this quantity must remain below a global threshold, which only depends on the size of the dataset and batch. We use the above framework to lower bound the amount of randomness required to allow (non-stochastic) gradient descent to escape from local minima using perturbations. We show that even if the model is extremely overparameterized, at least a linear (in the size of the dataset) number of random bits are required to guarantee that GD escapes local minima in subexponential time.
1 INTRODUCTION
Stochastic gradient descent (SGD) is at the heart of modern machine learning. However, we are still lacking a theoretical framework that explains its performance for general, non-convex functions. Current results make significant assumptions regarding the model. Global convergence guarantees only hold under specific architectures, activation units, and when models are extremely overparameterized (Du et al., 2019; Allen-Zhu et al., 2019; Zou et al., 2018; Zou and Gu, 2019). In this paper, we take a step back and explore what can be said about SGD under the most minimal assumptions. We only assume that the loss function is differentiable and L-smooth, the learning rate is sufficiently small and that models are initialized randomly. Clearly, we cannot prove general convergence to a global minimum under these assumptions. However, we can try and understand the dynamics of SGD - what types of execution patterns can and cannot happen.
Motivating example: Suppose hypothetically, that for every batch, the accuracy of the model after the Gradient Descent (GD) step on the batch is 100%. However, its accuracy on the set of previously seen batches (including the current batch) remains at 80%. Can this process go on forever? At first glance, this might seem like a possible scenario. However, we show that this cannot be the case. That is, if the above scenario repeats sufficiently often the model must eventually achieve 100% accuracy on the entire dataset.
To show the above, we identify a quantity of interest which we call the accuracy discrepancy (formally defined in Section 3). Roughly speaking, this is how much the model accuracy on a batch differs from the model accuracy on all previous batches in the epoch. We show that when this quantity (averaged over epochs) is higher than a certain threshold, we can guarantee that SGD converges to 100% accuracy on the dataset within O(1) epochs w.h.p1. We note that this threshold is global, that is, it only depends on the size of the dataset and the size of the batch. In doing so, we provide a sufficient condition for SGD convergence.
The above result is especially interesting when applied to weak models that cannot achieve perfect accuracy on the data. Imagine a dataset of size n with random labels, a model with n0.99 parameters, and a batch of size log n. The above implies that the accuracy discrepancy must eventually go below
1With high probability means a probability of at least 1− 1/n, where n is the size of the dataset.
the global threshold. In other words, the model cannot consistently make significant progress on batches. This is surprising because even though the model is underparameterized with respect to the entire dataset, it is extremely overparameterized with respect to the batch. We verify this observation experimentally (Appendix B). This holds for a single GD step, but what if we were to allow many GD steps per batch, would this mean that we still cannot make significant progress on the batch? This leads us to consider the role of randomness in (non-stochastic) gradient descent.
It is well known that overparameterized models trained using SGD can perfectly fit datasets with random labels (Zhang et al., 2017). It is also known that when models are sufficiently overparameterized (and wide) GD with random initialization convergences to a near global minimum (Du et al., 2019). This leads to an interesting question: how much randomness does GD require to escape local minima efficiently (in polynomial time)? It is obvious that without randomness we could initialize GD next to a local minimum, and it will never escape it. However, what about the case where we are provided an adversarial input and we can perturb that input (for example, by adding a random vector to it), how many bits of randomness are required to guarantee that after the perturbation GD achieves good accuracy on the input in polynomial time?
In Section 4 we show that if the amount of randomness is sublinear in the size of the dataset, then for any differentiable and L-smooth model class (e.g., a neural network architecture), there are datasets that require an exponential running time to achieve any non-trivial accuracy (i.e., better than 1/2 + o(1) for a two-class classification task), even if the model is extremely overparameterized. This result highlights the importance of randomness for the convergence of gradient methods. Specifically, it provides an indication of why SGD converges in certain situations and GD does not. We hope this result opens the door to the design of randomness in other versions of GD.
Outline of our techniques We consider batch SGD, where the dataset is shuffled once at the beginning of each epoch and then divided into batches. We do not deal with the generalization abilities of the model. Thus, the dataset is always the training set. In each epoch, the algorithm goes over the batches one by one, and performs gradient descent to update the model. This is the "vanilla" version of SGD, without any acceleration or regularization (for a formal definition, see Section 2). For the sake of analysis, we add a termination condition after every GD step: if the accuracy on the entire dataset is 100% we terminate. Thus, in our case, termination implies 100% accuracy.
To achieve our results, we make use of entropy compression, first considered by Moser and Tardos (2010) to prove a constructive version of the Lovász local lemma. Roughly speaking, the entropy compression argument allows one to bound the running time of a randomized algorithm2 by leveraging the fact that a random string of bits (the randomness used by the algorithm) is computationally incompressible (has high Kolmogorov complexity) w.h.p. If one can show that throughout the execution of the algorithm, it (implicitly) compresses the randomness it uses, then one can bound the number of iterations the algorithm may execute without terminating. To show that the algorithm has such a property, one would usually consider the algorithm after executing t iterations, and would try to show that just by looking at an "execution log" of the algorithm and some set of "hints", whose size together is considerably smaller than the number of random bits used by the algorithm, it is possible to reconstruct all of the random bits used by the algorithm.
We apply this approach to SGD with an added termination condition when the accuracy over the entire dataset is 100%. Thus, termination in our case guarantees perfect accuracy. The randomness we compress is the bits required to represent the random permutation of the data at every epoch. So indeed the longer SGD executes, the more random bits are generated. We show that under our assumptions it is possible to reconstruct these bits efficiently starting from the dataset X and the model after executing t epochs. The first step in allowing us to reconstruct the random bits of the permutation in each epoch is to show that under the L-smoothness assumption and a sufficiently small step size, SGD is reversible. That is, if we are given a model Wi+1 and a batch Bi such that Wi+1 results from taking a gradient step with model Wi where the loss is calculated with respect to Bi, then we can uniquely retrieve Wi using only Bi and Wi+1. This means that if we can efficiently encode the batches used in every epoch (i.e., using less bits than encoding the entire permutation of the data), we can also retrieve all intermediate models in that epoch (at no additional cost). We prove this claim in Section 2.
2We require that the number of the random bits used is proportional to the execution time of the algorithm. That is, the algorithm flips coins for every iteration of a loop, rather than just a constant number at the beginning of the execution.
The crux of this paper is to show that when the accuracy discrepancy is high for a certain epoch, the batches can indeed be compressed. To exemplify our techniques let us consider the scenario where, in every epoch, just after a single GD step on a batch we consistently achieve perfect accuracy on the batch. Let us consider some epoch of our execution, assume we have access to X, and let Wf be the model at the end of the epoch. If the algorithm did not terminate, then Wf has accuracy at most 1−ε on the entire dataset (assume for simplicity that ε is a constant). Our goal is to retrieve the last batch of the epoch, Bf ⊂ X (without knowing the permutation of the data for the epoch). A naive approach would be to simply encode the indices in X of the elements in the batch. However, we can use Wf to achieve a more efficient encoding. Specifically, we know that Wf achieves 1.0 accuracy on Bf but only 1−ε accuracy on X. Thus it is sufficient to encode the elements of Bf using a smaller subset of X (the elements classified correctly by Wf, which has size at most (1−ε)|X|). This allows us to significantly compress Bf. Next, we can use Bf and Wf together with the reversibility of SGD to retrieve Wf−1. We can now repeat the above argument to compress Bf−1 and so on, until we are able to reconstruct all of the random bits used to generate the permutation of X in the epoch. This will result in a linear reduction in the number of bits required for the encoding.
In our analysis, we show a generalized version of the scenario above. We show that high accuracy discrepancy implies that entropy compression occurs. For our second result, we consider a modified SGD algorithm that instead of performing a single GD step per batch, first perturbs the batch with a limited amount of randomness and then performs GD until a desired accuracy on the batch is reached. We assume towards contradiction that GD can always reach the desired accuracy on the batch in subexponential time. This forces the accuracy discrepancy to be high, which guarantees that we always find a model with good accuracy. Applying this reasoning to models of sublinear size and data with random labels we arrive at a contradiction, as such models cannot achieve good accuracy on the data. This implies that when we limit the amount of randomness GD can use for perturbations, there must exist instances where GD requires exponential running time to achieve good accuracy.
Related work There has been a long line of research proving convergence bounds for SGD under various simplifying assumptions such as: linear networks (Arora et al., 2019; 2018), shallow networks (Safran and Shamir, 2018; Du and Lee, 2018; Oymak and Soltanolkotabi, 2019), etc. However, the most general results are the ones dealing with deep, overparameterized networks (Du et al., 2019; Allen-Zhu et al., 2019; Zou et al., 2018; Zou and Gu, 2019). All of these works make use of NTK (Neural Tangent Kernel)(Jacot et al., 2018) and show global convergence guarantees for SGD when the hidden layers have width at least poly(n,L) where n is the size of the dataset and L is the depth of the network. We note that the exponents of the polynomials are quite large.
A recent line of work by Zhang et al. (2022) notes that in many real world scenarios models do not converge to stationary points. They instead take a different approach which, similar to us, studies the dynamics of neural networks. They show that under certain assumptions (e.g., considering a fully connected architecture with sub-differentiable and coordinate-wise Lipschitz activations and weights laying on a compact set) the change in training loss gradually converges to 0, even if the full gradient norms do not vanish.
In (Du et al., 2017) it was shown that GD can take exponential time to escape saddle points, even under random initialization. They provide a highly engineered instance, while our results hold for many model classes of interest. Jin et al. (2017) show that adding perturbations during the executions of GD guarantees that it escapes saddle points. This is done by occasionally perturbing the parameters within a ball of radius r, where r depends on the properties of the function to be optimized. Therefore, a single perturbation must require an amount of randomness linear in the number of parameters.
2 PRELIMINARIES
We consider the following optimization problem. We are given an input (dataset) of size $n$. Let us denote $X = \{x_i\}_{i=1}^{n}$ (our inputs contain both data and labels, we do not need to distinguish them for this work). We also associate every $x \in X$ with a unique id of $\lceil\log n\rceil$ bits. We often consider batches of the input $B \subset X$. The size of the batch is denoted by $b$ (all batches have the same size). We have some model whose parameters are denoted by $W \in \mathbb{R}^d$, where $d$ is the model dimension. We aim to optimize a goal function of the following type: $f(W) = \frac{1}{n}\sum_{x\in X} f_x(W)$, where the functions $f_x : \mathbb{R}^d \to \mathbb{R}$ are completely determined by $x \in X$. We also define for every set $A \subseteq X$: $f_A(W) = \frac{1}{|A|}\sum_{x\in A} f_x(W)$. Note that $f_X = f$.
We denote by acc(W,A) : Rd × 2X → [0, 1] the accuracy of model W on the set A ⊆ X (where we use W to classify elements from X). Note that for x ∈ X it holds that acc(W,x) is a binary value indicating whether x is classified correctly or not. We require that every fx is differentiable and L-smooth: ∀W1,W2 ∈ Rd, ‖∇fx(W1)−∇fx(W2)‖ ≤ L‖W1 −W2‖. This implies that every fA is also differentiable and L-smooth. To see this consider the following:
$$\|\nabla f_A(W_1) - \nabla f_A(W_2)\| = \Big\|\tfrac{1}{|A|}\sum_{x\in A} \nabla f_x(W_1) - \tfrac{1}{|A|}\sum_{x\in A} \nabla f_x(W_2)\Big\| = \tfrac{1}{|A|}\Big\|\sum_{x\in A} \big(\nabla f_x(W_1) - \nabla f_x(W_2)\big)\Big\| \leq \tfrac{1}{|A|}\sum_{x\in A} \|\nabla f_x(W_1) - \nabla f_x(W_2)\| \leq L\|W_1 - W_2\|$$
We state another useful property of fA:
Lemma 2.1. Let W1,W2 ∈ Rd and α < 1/L. For any A ⊆ X , if it holds that W1 − α∇fA(W1) = W2 − α∇fA(W2) then W1 = W2.
Proof. Rearranging the terms we get that $W_1 - W_2 = \alpha\nabla f_A(W_1) - \alpha\nabla f_A(W_2)$. Now let us consider the norm of both sides: $\|W_1 - W_2\| = \|\alpha\nabla f_A(W_1) - \alpha\nabla f_A(W_2)\| \leq \alpha L\|W_1 - W_2\| < \|W_1 - W_2\|$. Unless $W_1 = W_2$, the final inequality is strict, which leads to a contradiction.
The above means that for a sufficiently small gradient step, the gradient descent process is reversible. That is, we can always recover the previous model parameters given the current ones, assuming that the batch is fixed. We use the notion of reversibility throughout this paper. However, in practice we only have finite precision, thus instead of R we work with the finite set F ⊂ R. Furthermore, due to numerical stability issues, we do not have access to exact gradients, but only to approximate values ∇̂fA. For the rest of this paper, we assume these values are L-smooth on all elements in Fd. That is,
∀W1,W2 ∈ Fd, A ⊆ X, ‖∇̂fA(W1)− ∇̂fA(W2)‖ ≤ L‖W1 −W2‖
This immediately implies that Lemma 2.1 holds even when precision is limited. Let us state the following theorem:
Theorem 2.2. Let W1,W2, ...,Wk ∈ Fd ⊂ Rd, A1, A2, ..., Ak ⊆ X and α < 1/L. If it holds that Wi = Wi−1 − α∇̂fAi−1(Wi−1), then given A1, A2, ..., Ak−1 and Wk we can retrieve W1.
Proof. Given $W_k$ we iterate over all $W \in \mathbb{F}^d$ until we find $W$ such that $W_k = W - \alpha\hat{\nabla} f_{A_{k-1}}(W)$. Using Lemma 2.1, there is only a single element for which this equality holds, and thus $W = W_{k-1}$. We repeat this process until we retrieve $W_1$.
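As a toy numerical illustration of this reversal (ours, not the paper's), consider a one-parameter quadratic loss over a finite parameter grid; the loss, the grid, and the step size below are illustrative assumptions.

import numpy as np

grad = lambda w: 2.0 * w                 # gradient of f(w) = w**2, which is L-smooth with L = 2
alpha = 0.1                              # step size below 1/L = 0.5, so the GD map is injective
F = np.arange(-2.0, 2.0, 1e-3)           # stand-in for the finite parameter set F

w_prev = F[3234]                         # an "unknown" previous iterate
w_next = w_prev - alpha * grad(w_prev)   # one gradient descent step

# Recover w_prev from w_next by exhaustive search over F, as in Theorem 2.2:
recovered = F[np.argmin(np.abs(F - alpha * grad(F) - w_next))]
print(np.isclose(recovered, w_prev))     # True: the previous model is uniquely determined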
SGD We analyze the classic SGD algorithm presented in Algorithm 1. One difference to note in our algorithm, compared to the standard implementation, is the termination condition when the accuracy on the dataset is 100%. The termination condition is not used in practice; we include it only to prove that at some point in time the accuracy of the model is 100%.
Algorithm 1: SGD
1: i ← 1  // epoch counter
2: W_{1,1} is an initial model
3: while True do
4:     Take a random permutation of X, divided into batches {B_{i,j}}_{j=1}^{n/b}
5:     for j from 1 to n/b do
6:         if acc(W_{i,j}, X) = 1 then return W_{i,j}
7:         W_{i,j+1} ← W_{i,j} − α∇f_{B_{i,j}}(W_{i,j})
8:     i ← i + 1, W_{i,1} ← W_{i−1,n/b+1}
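A schematic Python rendering of Algorithm 1 might look as follows; grad_on_batch and accuracy are placeholder callables standing in for the model-specific gradient and accuracy computations, not functions defined in the paper.

import numpy as np

def sgd(W, X, y, b, alpha, grad_on_batch, accuracy):
    n = len(X)
    while True:                                   # each outer iteration is one epoch
        perm = np.random.permutation(n)           # fresh random permutation of the data
        for j in range(n // b):
            idx = perm[j * b:(j + 1) * b]         # batch B_{i,j}
            if accuracy(W, X, y) == 1.0:          # termination condition added for the analysis
                return W
            W = W - alpha * grad_on_batch(W, X[idx], y[idx])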
Kolmogorov complexity The Kolmogorov complexity of a string x ∈ {0, 1}∗, denoted by K(x), is defined as the size of the smallest prefix Turing machine which outputs this string. We note that this definition depends on which encoding of Turing machines we use. However, one can show that this will only change the Kolmogorov complexity by a constant factor (Li and Vitányi, 2019).
We also use the notion of conditional Kolmogorov complexity, denoted by K(x | y). This is the length of the shortest prefix Turing machine which gets y as an auxiliary input and prints x. Note that the length of y does not count towards the size of the machine which outputs x. So it can be the case that |x| |y| but it holds that K(x | y) < K(x). We can also consider the Kolmogorov complexity of functions. Let g : {0, 1}∗ → {0, 1}∗ then K(g) is the size of the smallest Turing machine which computes the function g.
The following properties of Kolmogorov complexity will be of use. Let x, y, z be three strings:
• Extra information: K(x | y, z) ≤ K(x | z) +O(1) ≤ K(x, y | z) +O(1) • Subadditivity: K(xy | z) ≤ K(x | z, y)+K(y | z)+O(1) ≤ K(x | z)+K(y | z)+O(1)
Random strings have the following useful property (Li and Vitányi, 2019): Theorem 2.3. For an n bit string x chosen uniformly at random, and some string y independent of x (i.e., y is fixed before x is chosen) and any c ∈ N it holds that Pr[K(x | y) ≥ n− c] ≥ 1− 1/2c.
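Kolmogorov complexity itself is uncomputable, but the flavor of Theorem 2.3 can be illustrated empirically with an off-the-shelf compressor; this analogy is ours and is only meant as intuition, not as a statement from the paper.

import os, zlib

random_bytes = os.urandom(10_000)        # (close to) maximal Kolmogorov complexity w.h.p.
structured_bytes = b"01" * 5_000         # highly structured string of the same length

print(len(zlib.compress(random_bytes, 9)))      # roughly 10,000 bytes: essentially incompressible
print(len(zlib.compress(structured_bytes, 9)))  # far smaller: structure can be compressed away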
Entropy and KL-divergence Our proofs make extensive use of binary entropy and KL-divergence. In what follows we define these concepts and provide some useful properties.
Entropy: For p ∈ [0, 1] we denote by h(p) = −p log p− (1− p) log(1− p) the entropy of p. Note that h(0) = h(1) = 0.
KL-divergence: For $p, q \in (0, 1)$ let $D_{KL}(p \,\|\, q) = p\log\frac{p}{q} + (1-p)\log\frac{1-p}{1-q}$ be the Kullback-Leibler divergence (KL-divergence) between two Bernoulli distributions with parameters $p, q$. We also extend the above to the case where $p, q \in \{0, 1\}$ as follows: $D_{KL}(1 \,\|\, q) = D_{KL}(0 \,\|\, q) = 0$, $D_{KL}(p \,\|\, 1) = \log(1/p)$, $D_{KL}(p \,\|\, 0) = \log(1/(1-p))$. This is just notation that agrees with Lemma 2.4. We also state the following consequence of Pinsker's inequality applied to Bernoulli random variables: $D_{KL}(p \,\|\, q) \geq 2(p - q)^2$.
Representing sets Let us state a useful bound on the Kolmogorov complexity of sets. A more detailed explanation regarding the Kolmogorov complexity of sets and permutations, together with the proof of the lemma below, appears in Appendix A.
Lemma 2.4. Let $A \subseteq B$, $|B| = m$, $|A| = \gamma m$, and let $g : B \to \{0, 1\}$. For any set $Y \subseteq B$ let $Y_1 = \{x \mid x \in Y, g(x) = 1\}$, $Y_0 = Y \setminus Y_1$ and $\kappa_Y = \frac{|Y_1|}{|Y|}$. It holds that
$$K(A \mid B, g) \leq m\gamma\big(\log(e/\gamma) - D_{KL}(\kappa_B \,\|\, \kappa_A)\big) + O(\log m)$$
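For intuition, the quantities above (binary entropy, Bernoulli KL-divergence with the stated boundary conventions, and the bound of Lemma 2.4, in bits) can be computed with a few lines of Python; this helper code is our own sketch, not part of the paper.

from math import log2, e

def h(p):                                # binary entropy, with h(0) = h(1) = 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def d_kl(p, q):                          # Bernoulli KL-divergence with the conventions above
    if p in (0.0, 1.0):
        return 0.0
    if q == 1.0:
        return log2(1 / p)
    if q == 0.0:
        return log2(1 / (1 - p))
    return p * log2(p / q) + (1 - p) * log2((1 - p) / (1 - q))

def lemma_2_4_bound(m, gamma, kappa_A, kappa_B):
    # K(A | B, g) <= m*gamma*(log(e/gamma) - D_KL(kappa_B || kappa_A)) + O(log m)
    return m * gamma * (log2(e / gamma) - d_kl(kappa_B, kappa_A))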
3 ACCURACY DISCREPANCY
First, let us define some useful notation (Wi,j , Bi,j are formally defined in Algorithm 1):
• λi,j = acc(Wi,j , X). This is the accuracy of the model in epoch i on the entire dataset X , before performing the GD step on batch j.
• ϕi,j = acc(Wi,j , Bi,j−1). This is the accuracy of the model on the (j − 1)-th batch in the i-th epoch after performing the GD step on the batch.
• $X_{i,j} = \bigcup_{k=1}^{j} B_{i,k}$ (note that $\forall i,\ X_{i,0} = \emptyset,\ X_{i,n/b} = X$). This is the set of elements in the first $j$ batches of epoch $i$. Let us also denote $n_j = |X_{i,j}| = jb$ (note that $\forall j, i_1, i_2,\ |X_{i_1,j}| = |X_{i_2,j}|$, thus $i$ need not appear in the subscript).
• $\lambda'_{i,j} = acc(W_{i,j}, X_{i,j-1})$, $\lambda''_{i,j} = acc(W_{i,j}, X \setminus X_{i,j-1})$, where $\lambda'_{i,j}$ is the accuracy of the model on the set of all previously seen batch elements, after performing the GD step on the $(j-1)$-th batch, and $\lambda''_{i,j}$ is the accuracy of the same model on all remaining elements ($j$-th batch onward). To avoid computing the accuracy on empty sets, $\lambda'_{i,j}$ is defined for $j \in [2, n/b+1]$ and $\lambda''_{i,j}$ is defined for $j \in [1, n/b]$.
• $\rho_{i,j} = D_{KL}(\lambda'_{i,j} \,\|\, \phi_{i,j})$ is the accuracy discrepancy for the $j$-th batch in iteration $i$, and $\rho_i = \sum_{j=2}^{n/b+1} \rho_{i,j}$ is the accuracy discrepancy at iteration $i$ (a small sketch that tracks these quantities during an epoch appears below).
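A sketch of how these quantities could be tracked during one epoch is given below; accuracy and d_kl are assumed helper functions (see the sketch at the end of Section 2), not code from the paper.

def epoch_accuracy_discrepancy(models, batches, accuracy, d_kl):
    # models[j] is the model after j GD steps in the epoch (W_{i,j+1} in the paper's notation);
    # batches[j] is the j-th batch of labeled examples; accuracy(W, examples) returns a fraction.
    rho_i, seen = 0.0, []
    for j in range(1, len(batches) + 1):
        seen = seen + batches[j - 1]               # X_{i,j}: all elements of the first j batches
        phi = accuracy(models[j], batches[j - 1])  # phi: accuracy on the batch just trained on
        lam_prime = accuracy(models[j], seen)      # lambda': accuracy on everything seen so far
        rho_i += d_kl(lam_prime, phi)              # one accuracy-discrepancy term
    return rho_i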
In our analysis, we consider t epochs of the SGD algorithm. Our goal for this section is to derive a connection between ∑t i=1 ρi and t. Bounding t: Our goal is to use the entropy compression argument to show that if ∑t i=1 ρi is sufficiently large we can bound t. Let us start by formally defining the random bits which the algorithm uses. Let ri be the string of random bits representing the random permutation of X at epoch i. As we consider t epochs, let r = r1r2 . . . rt.
Note that the number of bits required to represent an arbitrary permutation of [n] is given by: dlog(n!)e = n log n− n log e+O(log n) = n log(n/e) +O(log n),
where in the above we used Stirling’s approximation. Thus, it holds that |r| = t(n log(n/e) + O(log n)) and according to Theorem 2.3, with probability at least 1 − 1/n2 it holds that K(r) ≥ tn log(n/e)−O(log n). In the following lemma we show how to use the model at every iteration to efficiently reconstruct the batch at that iteration, where the efficiency of reconstruction is expressed via ρi. Lemma 3.1. It holds w.h.p that ∀i ∈ [t] that: K(ri |Wi+1,1, X) ≤ n log ne − bρi + n b ·O(log n)
Proof. Recall that $B_{i,j}$ is the $j$-th batch in the $i$-th epoch, and let $P_{i,j}$ be a permutation of $B_{i,j}$ such that the order of the elements in $B_{i,j}$ under $P_{i,j}$ is the same as under $r_i$. Note that given $X$, if we know the partition into batches and all permutations, we can reconstruct $r_i$. According to Theorem 2.2, given $W_{i,j}$ and $B_{i,j-1}$ we can compute $W_{i,j-1}$. Let us denote by $Y$ the encoding of this procedure. To implement $Y$ we need to iterate over all possible vectors in $\mathbb{F}^d$ and over batch elements to compute the gradients. To express this program we require auxiliary variables of size at most $O(\log\min\{d, b\}) = O(\log n)$. Thus it holds that $K(Y) = O(\log n)$. Let us abbreviate $B_{i,1}, B_{i,2}, \ldots, B_{i,j}$ as $(B_{i,k})_{k=1}^{j}$. We write the following.
$$\begin{aligned}
K(r_i \mid X, W_{i+1,1}) &\leq K(r_i, Y \mid X, W_{i+1,1}) + O(1) \leq K(r_i \mid X, W_{i+1,1}, Y) + K(Y \mid X, W_{i+1,1}) + O(1) \\
&\leq O(\log n) + K((B_{i,k}, P_{i,k})_{k=1}^{n/b} \mid X, W_{i+1,1}, Y) \\
&\leq O(\log n) + K((B_{i,k})_{k=1}^{n/b} \mid X, W_{i+1,1}, Y) + K((P_{i,k})_{k=1}^{n/b} \mid X, W_{i+1,1}, Y) \\
&\leq O(\log n) + K((B_{i,k})_{k=1}^{n/b} \mid X, W_{i+1,1}, Y) + \sum_{j=1}^{n/b} K(P_{i,j})
\end{aligned}$$
Let us bound $K((B_{i,k})_{k=1}^{n/b} \mid X, W_{i+1,1}, Y)$ by repeatedly using the subadditivity and extra information properties of Kolmogorov complexity.
$$\begin{aligned}
K((B_{i,k})_{k=1}^{n/b} \mid X, Y, W_{i+1,1}) &\leq K(B_{i,n/b} \mid X, W_{i+1,1}) + K((B_{i,k})_{k=1}^{n/b-1} \mid X, Y, W_{i+1,1}, B_{i,n/b}) + O(1) \\
&\leq K(B_{i,n/b} \mid X, W_{i+1,1}) + K((B_{i,k})_{k=1}^{n/b-1} \mid X, Y, W_{i,n/b}, B_{i,n/b}) + O(1) \\
&\leq K(B_{i,n/b} \mid X, W_{i+1,1}) + K(B_{i,n/b-1} \mid X, W_{i,n/b}, B_{i,n/b}) \\
&\quad + K((B_{i,k})_{k=1}^{n/b-2} \mid X, Y, W_{i,n/b-1}, B_{i,n/b}, B_{i,n/b-1}) + O(1) \\
&\leq \cdots \leq O\big(\tfrac{n}{b}\big) + \sum_{j=1}^{n/b} K(B_{i,j} \mid X, W_{i,j+1}, (B_{i,k})_{k=j+1}^{n/b}) \leq O\big(\tfrac{n}{b}\big) + \sum_{j=1}^{n/b} K(B_{i,j} \mid X_{i,j}, W_{i,j+1}),
\end{aligned}$$
where in the transitions we used the fact that given $W_{i,j}$, $B_{i,j-1}$ and $Y$ we can retrieve $W_{i,j-1}$. That is, we can always bound $K(\ldots \mid Y, W_{i,j}, B_{i,j-1}, \ldots)$ by $K(\ldots \mid Y, W_{i,j-1}, B_{i,j-1}, \ldots) + O(1)$. To encode the order $P_{i,j}$ inside each batch, $b\log(b/e) + O(\log b)$ bits are sufficient. Finally we get that:
$$K(r_i \mid X, W_{i+1,1}) \leq O\big(\tfrac{n}{b}\big) + \sum_{j=1}^{n/b} \big[K(B_{i,j} \mid X_{i,j}, W_{i,j+1}) + b\log(b/e) + O(\log b)\big].$$
Let us now bound $K(B_{i,j-1} \mid X_{i,j-1}, W_{i,j})$. Knowing $X_{i,j-1}$ we know that $B_{i,j-1} \subseteq X_{i,j-1}$, thus we need to use $W_{i,j}$ to compress $B_{i,j-1}$. Applying Lemma 2.4 with parameters $A = B_{i,j-1}$, $B = X_{i,j-1}$, $\gamma = b/n_{j-1}$, $\kappa_A = \phi_{i,j}$, $\kappa_B = \lambda'_{i,j}$ and $g(x) = acc(W_{i,j}, x)$, we get the following:
$$K(B_{i,j-1} \mid X_{i,j-1}, W_{i,j}) \leq b\Big(\log\big(\tfrac{e \cdot n_{j-1}}{b}\big) - \rho_{i,j}\Big) + O(\log n_{j-1})$$
Adding $b\log(b/e) + O(\log b)$ to the above, we get the following bound on every element in the sum:
$$b\Big(\log\big(\tfrac{e \cdot n_{j-1}}{b}\big) - \rho_{i,j}\Big) + b\log(b/e) + O(\log b) + O(\log n_{j-1}) \leq b\log n_{j-1} - b\rho_{i,j} + O(\log n_{j-1})$$
Note that the most important term in the sum is $-b\rho_{i,j}$. That is, the more the accuracy of $W_{i,j}$ on the batch, $B_{i,j-1}$, differs from the accuracy of $W_{i,j}$ on the set of elements containing the batch, $X_{i,j-1}$, the more efficiently we can represent the batch. Let us now bound the sum $\sum_{j=2}^{n/b+1}[b\log n_{j-1} - b\rho_{i,j} + O(\log n_{j-1})]$. Let us first bound the sum over $b\log n_{j-1}$:
$$\sum_{j=2}^{n/b+1} b\log n_{j-1} = \sum_{j=1}^{n/b} b\log jb = \sum_{j=1}^{n/b} b(\log b + \log j) = n\log b + b\log(n/b)! = n\log b + n\log\tfrac{n}{b\cdot e} + O(\log n) = n\log\tfrac{n}{e} + O(\log n)$$
Finally, we can write that:
$$K(r_i \mid X, W_{i+1,1}) \leq O\big(\tfrac{n}{b}\big) + \sum_{j=2}^{n/b+1}\big[b\log n_{j-1} - b\rho_{i,j} + O(\log n)\big] \leq n\log\tfrac{n}{e} - b\rho_i + \tfrac{n}{b}\cdot O(\log n)$$
Using the above we know that when the value $\rho_i$ is sufficiently high, the random permutation of the epoch can be compressed. We use the fact that random strings are incompressible to bound $\frac{1}{t}\sum_{i=1}^{t}\rho_i$.
Theorem 3.2. If the algorithm does not terminate by the $t$-th iteration, then it holds w.h.p. that $\forall t,\ \frac{1}{t}\sum_{i=1}^{t}\rho_i \leq O\big(\frac{n\log n}{b^2}\big)$.
Proof. Using arguments similar to Lemma 3.1, we can show that $K(r, W_{1,1} \mid X) \leq K(W_{t+1,1}) + O(t) + \sum_{k=1}^{t} K(r_k \mid X, W_{k+1,1})$ (formally proved in Lemma A.3). Combining this with Lemma 3.1, we get that $K(r, W_{1,1} \mid X) \leq K(W_{t+1,1}) + O(t) + \sum_{i=1}^{t}\big[n\log(n/e) + \frac{n}{b}\cdot O(\log n) + O(\log n) - b\rho_i\big]$. Our proof implies that we can reconstruct not only $r$, but also $W_{1,1}$, using $X, W_{t+1,1}$. Due to the incompressibility of random strings, we get that w.h.p. $K(r, W_{1,1} \mid X) \geq d + tn\log(n/e) - O(\log n)$. Combining the lower and upper bounds for $K(r, W_{1,1} \mid X)$ we get the following inequality:
$$d + tn\log(n/e) - O(\log n) \leq d + t\Big[n\log(n/e) + \frac{n}{b}\cdot O(\log n) + O(\log n)\Big] - \sum_{i=1}^{t} b\rho_i \qquad (1)$$
$$\implies \frac{1}{t}\sum_{i=1}^{t}\rho_i \leq \underbrace{\frac{n\cdot O(\log n)}{b^2} + \frac{O(\log n)}{b}}_{\beta(n,b)} + \frac{O(\log n)}{bt} = O\Big(\frac{n\log n}{b^2}\Big)$$
Let $\beta(n, b)$ be the exact value of the asymptotic expression in Inequality 1. Theorem 3.2 says that as long as SGD does not terminate, the average accuracy discrepancy cannot be too high. Using the contrapositive we get the following useful corollary (the proof is deferred to Appendix A.3).
Corollary 3.3. If $\forall k,\ \frac{1}{k}\sum_{i=1}^{k}\rho_i > \beta(n, b) + \gamma$ for $\gamma = \Omega(b^{-1}\log n)$, then w.h.p. SGD terminates within $O(1)$ epochs.
The case for weak models Using the above we can also derive some interesting negative results when the model is not expressive enough to get perfect accuracy on the data. It must be the case that the average accuracy discrepancy tends below β(n, b) over time. We verify this experimentally on the MNIST dataset (Appendix B), showing that the average accuracy discrepancy indeed drops over time when the model is weak compared to the dataset. We also confirm that the dependence of the threshold on b is indeed inversely quadratic.
4 THE ROLE OF RANDOMNESS IN GD INITIALIZATION
Our goal for this section is to show that when the amount of randomness in the perturbation is too small, for any model architecture which is differentiable and L-smooth there are inputs for which Algorithm 2 requires exponential time to terminate, even for extremely overparameterized models.
Perturbation families Let us consider a family of 2` functions indexed by length ` real valued vectors Ψ` = {ψz}z∈R` . Recall that throughout this paper we assume finite precision, thus every z can be represented using O(`) bits. We say that Ψ` is a reversible perturbation family if it holds that ∀z ∈ R`, ψz is one-to-one. We often use the notation Ψ`(W ), which means pick z ∈ R` uniformly at random, and apply ψz(W ). We often refer to Ψ` as simply a perturbation.
We note that the above captures a wide range of natural perturbations. For example ψz(W ) = W+Wz where Wz[i] = z[i mod `]. Clearly ψz(W ) is reversible.
Gradient descent The GD algorithm we analyze is formally given in Algorithm 2.
Algorithm 2: GD(W, Y, δ)
Input: initial model W, dataset Y, desired accuracy δ
1: i ← 1, T = o(2^m) + poly(d)
2: W ← Ψ_ℓ(W)
3: while acc(W, Y) < δ and i < T do
4:     W ← W − α∇f_Y(W)
5:     i ← i + 1
6: return W
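A schematic Python version of the perturbation family and Algorithm 2 is sketched below; the concrete perturbation ψ_z(W) = W + W_z is the example given above, while grad_fn and accuracy are placeholder assumptions rather than code from the paper.

import numpy as np

def perturb(W, l):                                 # psi_z(W) = W + W_z with W_z[i] = z[i mod l]
    z = np.random.uniform(-1.0, 1.0, size=l)       # l random real values, i.e. O(l) random bits
    return W + z[np.arange(len(W)) % l]            # one-to-one for any fixed z

def gd(W, Y, delta, alpha, l, T, grad_fn, accuracy):
    W = perturb(W, l)                              # all randomness is confined to the perturbation
    for _ in range(T):
        if accuracy(W, Y) >= delta:
            break
        W = W - alpha * grad_fn(W, Y)
    return W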
Let us denote by $m$ the number of elements in $Y$. We make the following two assumptions for the rest of this section: (1) $\ell = o(m)$. (2) There exists $T = o(2^m) + \mathrm{poly}(d)$ and a perturbation family $\Psi_\ell$ such that for every input $W, Y$, within $T$ iterations GD terminates and returns a solution that has at least $\delta$ accuracy on $Y$ with constant probability. We show that the above two assumptions cannot hold together. That is, if the amount of randomness is sublinear in $m$, there must be instances with exponential running time, even when $d \gg m$. To show the above, we define a variant of SGD, which uses GD as a subprocedure (Algorithm 3). Assume that our dataset is a binary classification task (it is easy to generalize our results to any number of classes), and that elements in $X$ are assigned random labels. Furthermore, let us assume that $d = o(n)$, e.g., $d = n^{0.99}$. It holds that w.h.p. we cannot train a model with $d$ parameters that achieves any accuracy better than $1/2 + o(1)$ on $X$ (Lemma A.4). Let us take $\epsilon$ to be a small constant. We show that if assumptions 1 and 2 hold, then Algorithm 3 must terminate and return a model with $1/2 + \Theta(1)$ accuracy on $X$, leading to a contradiction. Our analysis follows the same line as the previous section, and uses the same notation.
Reversibility First, we must show that Algorithm 3 is still reversible. Note that we can take the same approach as before, where the only difference is that in order to get $W_{i,j}$ from $W_{i,j+1}$ we must now get all the intermediate values from the call to GD. As the GD steps are applied to the same batch, this amounts to applying Lemma 2.1 several times instead of once per iteration. More specifically, we must encode for every batch a number $T_{i,j} = o(2^b) + \mathrm{poly}(d) = o(2^b) + \mathrm{poly}(n)$ (recall that $d = o(n)$) and apply Lemma 2.1 $T_{i,j}$ times.
This results in ψz(Wi,j). If we know z,Ψ` then we can retrieve ψz and efficiently retrieve Wi,j using only O(log d) = O(log n) additional bits (by iterating over all values in Fd). Therefore, in every
Algorithm 3: SGD'
1: i ← 1  // epoch counter
2: W_{1,1} is an initial model
3: while True do
4:     Take a random permutation of X, divided into batches {B_{i,j}}_{j=1}^{n/b}
5:     for j from 1 to n/b do
6:         if acc(W_{i,j}, X) ≥ 1/(2(1−ε)) then return W_{i,j}
7:         W_{i,j+1} ← GD(W_{i,j}, B_{i,j}, 1/(2(1−2ε)))
8:     i ← i + 1, W_{i,1} ← W_{i−1,n/b+1}
iteration we have the following additional terms: $\log T + O(\log n) + \ell = o(b) + O(\log n)$. Summing over $n/b$ iterations we get $o(n)$ per epoch. We state the following lemma, analogous to Lemma 3.1.
Lemma 4.1. For Algorithm 3 it holds w.h.p. that $\forall i \in [t]$: $K(r_i \mid W_{i+1,1}, X, \Psi_\ell) \leq n\log\frac{n}{e} - b\rho_i + \beta(n, b) + o(n)$.
We show that under our assumptions, Algorithm 3 must terminate, leading to a contradiction. Lemma 4.2. Algorithm 3 with b = Ω(log n) terminates within O(T ) iterations w.h.p.
Proof. Our goal is to lower bound $\rho_i = \sum_{j=2}^{n/b+1} D_{KL}(\lambda'_{i,j} \,\|\, \phi_{i,j})$. Let us first upper bound $\lambda'_{i,j}$. Using the fact that $\lambda'_{i,j} \leq \frac{n\lambda_{i,j}}{(j-1)b}$ (Lemma A.5) combined with the fact that $\lambda_{i,j} \leq \frac{1}{2(1-\epsilon)}$ as long as the algorithm does not terminate, we get that $\forall j \in [2, n/b+1]$ it holds that $\lambda'_{i,j} \leq \frac{n}{2(1-\epsilon)(j-1)b}$. Using the above we conclude that as long as we do not terminate it must hold that $\lambda'_{i,j} \leq \frac{1}{2(1-\epsilon)^2}$ whenever $j \in I = [(1-\epsilon)n/b + 1, n/b+1]$. That is, $\lambda'_{i,j}$ must be close to $\lambda_{i,j}$ towards the end of the epoch, and therefore must be sufficiently small. Note that $|I| \geq \epsilon n/b$. We know that as long as the algorithm does not terminate it holds that $\phi_{i,j} > \frac{1}{2(1-2\epsilon)}$ with some constant probability. Furthermore, this probability is taken over the randomness used in the call to GD (the randomness of the perturbation). This fact allows us to use Hoeffding-type bounds for the $\phi_{i,j}$ variables. If $\phi_{i,j} > \frac{1}{2(1-2\epsilon)}$ we say that it is good. Therefore in expectation a constant fraction of $\phi_{i,j}, j \in I$ are good. Applying a Hoeffding-type bound we get that w.h.p. a constant fraction of $\phi_{i,j}, j \in I$ are good. Denote these good indices by $I_g \subseteq I$. We are now ready to bound $\rho_i$.
$$\rho_i = \sum_{j=2}^{n/b+1} D_{KL}(\lambda'_{i,j} \,\|\, \phi_{i,j}) \geq \sum_{j\in I_g} D_{KL}(\lambda'_{i,j} \,\|\, \phi_{i,j}) \geq \sum_{j\in I_g} D_{KL}\Big(\frac{1}{2(1-\epsilon)^2} \,\Big\|\, \frac{1}{2(1-2\epsilon)}\Big) \geq \Theta\Big(\frac{\epsilon n}{b}\Big)\cdot\Big(\frac{1}{2(1-2\epsilon)} - \frac{1}{2(1-\epsilon)^2}\Big)^2 = \Theta\Big(\frac{n}{b}\Big)\cdot\epsilon^5 = \Theta\Big(\frac{n}{b}\Big),$$
where in the transitions we used the fact that KL-divergence is non-negative, and Pinsker's inequality. Finally, requiring that $b = \Omega(\log n)$ we get that $b\rho_i - \beta(n, b) - o(n) = \Theta(n) - \Theta\big(\frac{n\log n}{\log^2 n}\big) - o(n) = \Theta(n)$. Following the same calculation as in Corollary 3.3, this guarantees termination within $O\big(\frac{\log n}{n}\big)$ epochs, or $O\big(T \cdot \frac{n}{b} \cdot \frac{\log n}{n}\big) = O(T)$ iterations (gradient descent steps).
The above leads to a contradiction. It is critical to note that the above does not hold if $T = 2^m = 2^b$ or if $\ell = \Theta(n)$, as both would imply that the $o(n)$ term becomes $\Theta(n)$. We state our main theorem:
Theorem 4.3. For any differentiable and L-smooth model class with $d$ parameters and a perturbation class $\Psi_\ell$ such that $\ell = o(m)$, there exists an input dataset $Y$ of size $m$ such that GD requires $\Omega(2^m)$ iterations to achieve $\delta$ accuracy on $Y$, even if $\delta = 1/2 + \Theta(1)$ and $d \gg m$.
A OMITTED PROOFS AND EXPLANATIONS
A.1 REPRESENTING SETS AND PERMUTATIONS
Throughout this paper, we often consider the value $K(A)$ where $A$ is a set. Here the program computing $A$ need only output the elements of $A$ (in any order). When considering $K(A \mid B)$ such that $A \subseteq B$, it holds that $K(A \mid B) \leq \lceil\log\binom{|B|}{|A|}\rceil + O(\log|B|)$. To see why, consider Algorithm 4. In the algorithm, $i_A$ is the index of $A$ when considering some ordering of all subsets of $B$ of size $|A|$. Thus $\lceil\log\binom{|B|}{|A|}\rceil$ bits are sufficient to represent $i_A$. The remaining variables $i, m_A, m_B$ and any
additional variables required to construct the set C are all of size at most O(log |B|), and there is at most a constant number of them.
Algorithm 4: Compute A given B as input
1: m_A ← |A|, m_B ← |B|, i ← 0, i_A is a target index
2: for every subset C ⊆ B s.t. |C| = m_A (in a predetermined order) do
3:     if i = i_A then print C
4:     i ← i + 1
During our analysis, we often bound the Kolmogorov complexity of tuples of objects. For example, $K(A, P \mid B)$ where $A \subseteq B$ is a set and $P : A \to [|A|]$ is a permutation of $A$ (note that $A, P$ together form an ordered tuple of the elements of $A$). Instead of explicitly presenting a program such as Algorithm 4, we say that if $K(A \mid B) \leq c_1$ and $c_2$ bits are sufficient to represent $P$, then $K(A, P \mid B) \leq c_1 + c_2 + O(1)$. This just means that we directly have a variable encoding $P$ in the program that computes $A$ given $B$ and use it in the code. For example, we can add a permutation to Algorithm 4 and output an ordered tuple of elements rather than a set. Note that when representing a permutation of $A$, $|A| = k$, instead of using functions we can simply talk about values in $[k!]$, which require $\lceil\log k!\rceil$ bits. That is, we can decide on some predetermined ordering of all permutations of $k$ elements, and represent a permutation as its number in this ordering.
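To make the encoding concrete, here is a small Python sketch (ours, not the paper's) of the index-based subset encoding behind Algorithm 4, using itertools.combinations to fix the predetermined enumeration order.

from itertools import combinations
from math import comb, ceil, log2

def encode_subset(A, B):
    # Return the index i_A of subset A among all |A|-subsets of B, in a predetermined order.
    target = tuple(x for x in B if x in A)              # A written in B's order
    for i, C in enumerate(combinations(B, len(A))):
        if C == target:
            return i                                    # fits in ceil(log2(C(|B|, |A|))) bits
    raise ValueError("A is not a subset of B")

def decode_subset(i_A, B, k):
    # Algorithm 4: scan the k-subsets of B in the same order until index i_A is reached.
    for i, C in enumerate(combinations(B, k)):
        if i == i_A:
            return set(C)

B = list(range(8))
i_A = encode_subset({1, 4, 6}, B)
print(ceil(log2(comb(len(B), 3))), decode_subset(i_A, B, 3))   # 6 bits suffice; recovers {1, 4, 6}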
A.2 OMITTED PROOFS FOR SECTION 2
Lemma A.1. For p ∈ [0, 1] it holds that h(p) ≤ p log(e/p).
Proof. Let us write our lemma as:
h(p) = −p log p− (1− p) log(1− p) ≤ p log(e/p)
Rearranging we get:
$$-(1-p)\log(1-p) \leq p\log p + p\log(1/p) + p\log e \implies -(1-p)\log(1-p) \leq p\log e \implies -\ln(1-p) \leq \frac{p}{1-p}$$
Note that $-\ln(1-p) = \int_0^p \frac{1}{1-x}\,dx \leq p\cdot\frac{1}{1-p}$, where in the final transition we use the fact that $\frac{1}{1-x}$ is monotonically increasing on $[0, 1]$. This completes the proof.
Lemma A.2. For $p, \gamma, q \in [0, 1]$ where $p\gamma \leq q$ and $(1-p)\gamma \leq 1-q$, it holds that
$$q\,h\Big(\frac{p\gamma}{q}\Big) + (1-q)\,h\Big(\frac{(1-p)\gamma}{1-q}\Big) \leq h(\gamma) - \gamma D_{KL}(p \,\|\, q)$$
Proof. Let us expand the left-hand side using the definition of entropy:
$$\begin{aligned}
&q\,h\Big(\frac{p\gamma}{q}\Big) + (1-q)\,h\Big(\frac{(1-p)\gamma}{1-q}\Big) \\
&= -q\Big(\frac{p\gamma}{q}\log\frac{p\gamma}{q} + \Big(1-\frac{p\gamma}{q}\Big)\log\Big(1-\frac{p\gamma}{q}\Big)\Big) - (1-q)\Big(\frac{(1-p)\gamma}{1-q}\log\frac{(1-p)\gamma}{1-q} + \Big(1-\frac{(1-p)\gamma}{1-q}\Big)\log\Big(1-\frac{(1-p)\gamma}{1-q}\Big)\Big) \\
&= -\Big(p\gamma\log\frac{p\gamma}{q} + (q-p\gamma)\log\frac{q-p\gamma}{q}\Big) - \Big((1-p)\gamma\log\frac{(1-p)\gamma}{1-q} + \big((1-q)-(1-p)\gamma\big)\log\frac{(1-q)-(1-p)\gamma}{1-q}\Big) \\
&= -\gamma\log\gamma - \gamma D_{KL}(p \,\|\, q) - (q-p\gamma)\log\frac{q-p\gamma}{q} - \big((1-q)-(1-p)\gamma\big)\log\frac{(1-q)-(1-p)\gamma}{1-q},
\end{aligned}$$
where in the last equality we simply sum the first terms on both lines. To complete the proof we use the log-sum inequality for the last expression. The log-sum inequality states that if $\{a_k\}_{k=1}^{m}, \{b_k\}_{k=1}^{m}$ are non-negative numbers and $a = \sum_{k=1}^{m} a_k$, $b = \sum_{k=1}^{m} b_k$, then $\sum_{k=1}^{m} a_k\log\frac{a_k}{b_k} \geq a\log\frac{a}{b}$. We apply the log-sum inequality with $m = 2$, $a_1 = q - p\gamma$, $a_2 = (1-q) - (1-p)\gamma$, $a = 1-\gamma$ and $b_1 = q$, $b_2 = 1-q$, $b = 1$, getting that:
$$(q-p\gamma)\log\frac{q-p\gamma}{q} + \big((1-q)-(1-p)\gamma\big)\log\frac{(1-q)-(1-p)\gamma}{1-q} \geq (1-\gamma)\log(1-\gamma)$$
Putting everything together we get that
$$-\gamma\log\gamma - \gamma D_{KL}(p \,\|\, q) - (q-p\gamma)\log\frac{q-p\gamma}{q} - \big((1-q)-(1-p)\gamma\big)\log\frac{(1-q)-(1-p)\gamma}{1-q} \leq -\gamma\log\gamma - (1-\gamma)\log(1-\gamma) - \gamma D_{KL}(p \,\|\, q) = h(\gamma) - \gamma D_{KL}(p \,\|\, q)$$
Lemma 2.4. Let $A \subseteq B$, $|B| = m$, $|A| = \gamma m$, and let $g : B \to \{0, 1\}$. For any set $Y \subseteq B$ let $Y_1 = \{x \mid x \in Y, g(x) = 1\}$, $Y_0 = Y \setminus Y_1$ and $\kappa_Y = \frac{|Y_1|}{|Y|}$. It holds that
$$K(A \mid B, g) \leq m\gamma\big(\log(e/\gamma) - D_{KL}(\kappa_B \,\|\, \kappa_A)\big) + O(\log m)$$
Proof. The algorithm is very similar to Algorithm 4; the main difference is that we must first compute $B_1, B_0$ from $B$ using $g$, and select $A_1, A_0$ from $B_1, B_0$, respectively, using two indices $i_{A_1}, i_{A_0}$. Finally we print $A = A_1 \cup A_0$. We can now bound the number of bits required to represent $i_{A_1}, i_{A_0}$. Note that $|B_1| = \kappa_B m$, $|B_0| = (1-\kappa_B)m$, and that for $A_1$ we pick $\gamma\kappa_A m$ elements from $\kappa_B m$ elements, while for $A_0$ we pick $\gamma(1-\kappa_A)m$ elements from $(1-\kappa_B)m$ elements. The number of bits required to represent this selection is:
$$\Big\lceil \log \binom{\kappa_B m}{\gamma\kappa_A m} \Big\rceil + \Big\lceil \log \binom{(1-\kappa_B)m}{\gamma(1-\kappa_A)m} \Big\rceil \leq \kappa_B m\, h\Big(\frac{\gamma\kappa_A}{\kappa_B}\Big) + (1-\kappa_B) m\, h\Big(\frac{\gamma(1-\kappa_A)}{1-\kappa_B}\Big) \leq m\big(h(\gamma) - \gamma D_{KL}(\kappa_B \,\|\, \kappa_A)\big) \leq m\gamma\big(\log(e/\gamma) - D_{KL}(\kappa_B \,\|\, \kappa_A)\big),$$
where in the first inequality we used the fact that $\forall\, 0 \leq k \leq n,\ \log\binom{n}{k} \leq n\,h(k/n)$, Lemma A.2 in the second transition, and Lemma A.1 in the third transition. Note that when $\kappa_A \in \{0, 1\}$ only one term of the initial sum remains. For example, for $\kappa_A = 1$ we get:
$$\Big\lceil \log \binom{\kappa_B m}{\gamma\kappa_A m} \Big\rceil = \Big\lceil \log \binom{\kappa_B m}{\gamma m} \Big\rceil \leq \kappa_B m\, h\Big(\frac{\gamma}{\kappa_B}\Big) \leq m\gamma \log(e\kappa_B/\gamma) = m\gamma\big(\log(e/\gamma) - \log(1/\kappa_B)\big).$$
A similar computation yields $m\gamma(\log(e/\gamma) - \log(1/(1-\kappa_B)))$ for $\kappa_A = 0$. Finally, the additional $O(\log m)$ term is due to various counters and variables, similarly to Algorithm 4.
A.3 OMITTED PROOFS FOR SECTION 3
Lemma A.3. It holds that K(r,W1,1 | X) ≤ K(Wt+1,1) +O(t) + ∑t k=1K(rk | X,Wk+1,1).
Proof. Similarly to the definition of Y in Lemma 3.1, let Y ′ be the program which receives X, ri,Wi+1,1 as input and repeatedly applies Theorem 2.2 to retrieve Wi,1. As Y ′ just needs to reconstruct all batches from X, ri and call Y for n/b times, it holds that K(Y ′) = O(log n). Using the subadditivity and extra information properties of K(), together with the fact that W1,1 can be reconstructed given X,Wt+1,1, Y ′, we write the following:
$$K(r \mid X) \leq K(r, W_{1,1}, Y', W_{t+1,1} \mid X) + O(1) \leq K(W_{1,1}, W_{t+1,1}, Y' \mid X) + K(r \mid X, Y', W_{t+1,1}) + O(1) \leq K(W_{t+1,1} \mid X) + K(r \mid X, Y', W_{t+1,1}) + O(\log n)$$
First, we note that for all $i \in [t-1]$, $K(r_i \mid X, Y', W_{i+2,1}, r_{i+1}) \leq K(r_i \mid X, Y', W_{i+1,1}) + O(1)$, since we can simply execute $Y'$ on $X, W_{i+2,1}, r_{i+1}$ to obtain $W_{i+1,1}$. Let us write:
$$\begin{aligned}
K(r_1 r_2 \ldots r_t \mid X, Y', W_{t+1,1}) &\leq K(r_t \mid X, Y', W_{t+1,1}) + K(r_1 r_2 \ldots r_{t-1} \mid X, Y', W_{t+1,1}, r_t) + O(1) \\
&\leq K(r_t \mid X, W_{t+1,1}) + K(r_1 r_2 \ldots r_{t-1} \mid X, Y', W_{t,1}) + O(1) \\
&\leq K(r_t \mid X, W_{t+1,1}) + K(r_{t-1} \mid X, W_{t,1}) + K(r_1 r_2 \ldots r_{t-2} \mid X, Y', W_{t-1,1}) + O(1) \\
&\leq \cdots \leq O(t) + \sum_{k=1}^{t} K(r_k \mid X, W_{k+1,1})
\end{aligned}$$
Combining everything together we get that:
$$K(r \mid X) \leq K(W_{t+1,1}) + O(t) + \sum_{k=1}^{t} K(r_k \mid X, W_{k+1,1})$$
Corollary 3.3. If $\forall k,\ \frac{1}{k}\sum_{i=1}^{k} \rho_i > \beta(n, b) + \gamma$ for $\gamma = \Omega(b^{-1} \log n)$, then w.h.p. SGD terminates within $O(1)$ epochs.
Proof. Let us simplify Inequality 1.
$$d + tn\log(n/e) - O(\log n) \leq d + t\Big[n\log(n/e) + \frac{n}{b}\cdot O(\log n) + O(\log n)\Big] - \sum_{i=1}^{t} b\rho_i$$
$$\implies -O(\log n) \leq t\Big[\frac{n}{b}\cdot O(\log n) + O(\log n)\Big] - \sum_{i=1}^{t} b\rho_i \implies \Big(\sum_{i=1}^{t} \rho_i\Big) - t\beta(n, b) \leq O(\log n)/b$$
Our condition implies that $\sum_{i=1}^{t} \rho_i > t(\beta(n, b) + \gamma)$. This allows us to rewrite the above inequality as:
$$t\gamma \leq O(\log n)/b \implies t = O(1)$$
A.4 OMITTED PROOFS FOR SECTION 4
Lemma A.4. Let X be some set of size n and let f : X → {0, 1} be a random binary function. It holds w.h.p that there exists no function g : X → {0, 1} such that K(g | X) = o(n) and g agrees with f on n(1/2 + Θ(1)) elements in X .
Proof. Let us assume that $g$ agrees with $f$ on all except $\epsilon n$ elements in $X$ and bound $\epsilon$. Using Theorem 2.3, it holds w.h.p. that $K(f \mid X) > n - O(\log n)$. We show that if $\epsilon$ is sufficiently far from $1/2$, we can use $g$ to compress $f$ below its Kolmogorov complexity, arriving at a contradiction.
We can construct $f$ using $g$ and the set of values on which they do not agree, which we denote by $D$. This set is of size $\epsilon n$ and therefore can be encoded using $\log\binom{n}{\epsilon n} \leq n h(\epsilon)$ bits given $X$ (recall that $\forall\, 0 \leq k \leq n,\ \log\binom{n}{k} \leq n h(k/n)$), i.e., $K(D \mid X) \leq n h(\epsilon)$. To compute $f(x)$ using $D, g$ we simply check whether $x \in D$ and output $g(x)$ or $1 - g(x)$ accordingly. The total number of bits required for the above is $K(g, D \mid X) \leq o(n) + n h(\epsilon)$ (where auxiliary variables are subsumed in the $o(n)$ term). We conclude that $K(f \mid X) \leq o(n) + n h(\epsilon)$. Combining the upper and lower bounds on $K(f \mid X)$, it must hold that $o(n) + n h(\epsilon) \geq n - O(\log n) \implies h(\epsilon) \geq 1 - o(1)$. This inequality only holds when $\epsilon = 1/2 + o(1)$.
Lemma A.5. It holds that $1 - \frac{n(1-\lambda_{i,j})}{(j-1)b} \leq \lambda'_{i,j} \leq \frac{n\lambda_{i,j}}{(j-1)b}$.
Proof. We can write the following for $j \in [2, n/b+1]$:
$$n\lambda_{i,j} = \sum_{x \in X} acc(W_{i,j}, x) = \sum_{x \in X_{i,j-1}} acc(W_{i,j}, x) + \sum_{x \in X \setminus X_{i,j-1}} acc(W_{i,j}, x) = (j-1)b\,\lambda'_{i,j} + (n - (j-1)b)\,\lambda''_{i,j}$$
$$\implies \lambda'_{i,j} = \frac{n\lambda_{i,j} - (n - (j-1)b)\lambda''_{i,j}}{(j-1)b}$$
Setting $\lambda''_{i,j} = 0$ we get
$$\lambda'_{i,j} = \frac{n\lambda_{i,j} - (n - (j-1)b)\lambda''_{i,j}}{(j-1)b} \leq \frac{n\lambda_{i,j}}{(j-1)b}$$
and setting $\lambda''_{i,j} = 1$ we get
$$\lambda'_{i,j} = \frac{n\lambda_{i,j} - (n - (j-1)b)\lambda''_{i,j}}{(j-1)b} \geq 1 - \frac{n(1-\lambda_{i,j})}{(j-1)b}$$
B EXPERIMENTS
Experimental setup We perform experiments on the MNIST dataset and on the same dataset with random labels (MNIST-RAND). We use SGD with learning rate 0.01, without momentum or regularization. We use a simple fully connected architecture with a single hidden layer, GELU activation units (a differentiable alternative to ReLU), and cross-entropy loss. We run experiments with a hidden layer of size 2, 5, or 10, and batches of size 50, 100, or 200. For each of the datasets we run experiments for all configurations of architecture size and batch size for 300 epochs.
Results Figure 2 and Figure 3 show the accuracy discrepancy and accuracy over epochs for all configurations for MNIST and MNIST-RAND, respectively. Figure 4 and Figure 5 show, for every batch size, the accuracy discrepancy of all three model sizes on the same plot. All of the values displayed are averaged over epochs, i.e., the value for epoch $t$ is $\frac{1}{t}\sum_{i=1}^{t} x_i$.
First, we indeed observe that the scale of the accuracy discrepancy is inversely quadratic in the batch size, as our analysis suggests. Second, for MNIST-RAND we can clearly see that the average accuracy discrepancy tends below a certain threshold over time, where the threshold appears to be independent of the number of model parameters. We see similar results for MNIST when the model is small, but not when it is large. This is because the model does not reach its capacity within the timeframe of our experiment. | 1. What is the focus and contribution of the paper on studying SGD convergence using tools from Kolmogorov Complexity?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its setting and motivation?
3. Do you have any concerns regarding the assumption of small step size in the argument, and how does it relate to modern neural networks' training with large step sizes?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any parts of the paper that could be better written or require further explanation? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper studies the convergence property of SGD using tools from Kolmogorov Complexity. A key quantity considered here is the accuracy discrepancy---the gap between the model accuracy on a training batch vs the rest of the data set.
The paper provides two results.
Section 3 shows that during training, if the accuracy discrepancy is consistently beyond a global threshold β(n, b), depending only on batch size b and data size n, then training will converge to full accuracy in constant epochs (namely, after constant cycles over the dataset).
Section 4 considers randomly perturbing the inputs to allow SGD to terminate with high accuracy. It's shown that SGD may take exponential time unless the amount of randomness used in the perturbations is sufficiently high.
Strengths And Weaknesses
The paper provides some interesting ideas for studying SGD convergence. To the best of my knowledge, the argument in Section 3 is novel. I think the result is intuitive. That is, if any model training can consistently improve the accuracy on the current batch during SGD, relative to the full dataset, then it is capable of fitting all data. I appreciate the technical part where this idea is formalized.
While the theoretical claim of section 3 is interesting, I have concerns about the setting and motivation. First, the argument hinges upon the assumption that the SGD/GD is run with a sufficiently small step size below 1/L. I am not convinced that this holds in practice. In particular, modern neural networks are typically trained with large step sizes at an initial stage, higher than what's suggested by optimization theory, then later annealed as training loss improves. See the literature on "learning rate decay" and "learning rate schedule". This may break the reversibility lemma in section 2 and affect the main claim of section 3.
For section 4, as the author(s) suggested, one motivation is to study whether SGD can escape local minima. It's unclear this is an interesting question (either in theory or practice). In practice, large networks are highly parametrized and capable of fitting the entire train set. And it's observed that standard optimization techniques (SGD and its variants, such as Adam) can converge. The claim of section 4 seems to be that we need to perturb the input points to avoid getting stuck at a highly sub-optimal local min (with 1/2+ϵ accuracy). I encourage the author(s) to expand on the conceptual message here, since as far as I know, this is not required in practical settings.
The main result in sec 3 applies to reshuffling SGD where each epoch uses a fresh random permutation. It's also common in practice that there's a single random permutation (sampled upfront) used by all epochs. (This is called single shuffle SGD; see e.g. Can Single-Shuffle SGD be Better than Reshuffling SGD and GD? by Chulhee Yun, Suvrit Sra, Ali Jadbabaie.) Could the author(s) comment on this setting?
Some parts of the paper can be better written. In the introduction (especially in the “outline of our techniques” section), it should be clarified what are the random bits that the reversal procedure tries to reconstruct. From early discussion (top of page 2), there are random bits in the initialization of the network, random bits for drawing SGD samples per step, and random perturbations added to the input points. The author(s) should specify what the main target here is.
Finally, the experiments appear somewhat limited, conducted only on shallow neural networks trained with MNIST dataset.
Minor comments
Page 1: “For every batch, the accuracy of the model after the Gradient Descent (GD) step on the batch is 100%.” When I read it first, it wasn’t clear if this is a claim or a thought experiment. Maybe add: “Suppose hypothetically.”
Page 1: “, however” -> “. However”
Page 2: “perturbe” -> “perturb”
Page 2: The precise termination condition seems confusing here. In one paragraph, it is written that “thus, in our case, termination implies 100% accuracy”. In a later paragraph: “We apply this approach to SGD with an added termination condition when the accuracy over the entire dataset is beyond some threshold. Thus, termination in our case guarantees good accuracy”. Which one is actually applied here?
Page 2: “ a sufficiently small GD step” -> “sufficiently small step size”
Page 3: “we can sort X by IDs”--- I am not sure this is necessary to say?
Page 3: “pertrubes” -> “perturbs”
Page 3: “Applying this reasoning to data with random labels we arrive at a contradiction” — I don’t understand this point: what’s the contradiction
Section 3: W_ij should be defined explicitly. (I understand it’s clear from the picture.)
Section 4: I think it’s better to state the goal of this section at its beginning
Clarity, Quality, Novelty And Reproducibility
At a technical level, the main theoretical claims of the paper appear sound. As I discussed, though, they either are under restricted assumptions or appear insufficiently motivated in ML practice.
I have a few suggestions regarding writing and presentation (see above).
ICLR | Title
SGD Through the Lens of Kolmogorov Complexity
Abstract
We initiate a thorough study of the dynamics of stochastic gradient descent (SGD) under minimal assumptions using the tools of entropy compression. Specifically, we characterize a quantity of interest which we refer to as the accuracy discrepancy. Roughly speaking, this measures the average discrepancy between the model accuracy on batches and large subsets of the entire dataset. We show that if this quantity is sufficiently large, then SGD finds a model which achieves perfect accuracy on the data in O(1) epochs. On the contrary, if the model cannot perfectly fit the data, this quantity must remain below a global threshold, which only depends on the size of the dataset and batch. We use the above framework to lower bound the amount of randomness required to allow (non-stochastic) gradient descent to escape from local minima using perturbations. We show that even if the model is extremely overparameterized, at least a linear (in the size of the dataset) number of random bits are required to guarantee that GD escapes local minima in subexponential time.
1 INTRODUCTION
Stochastic gradient descent (SGD) is at the heart of modern machine learning. However, we are still lacking a theoretical framework that explains its performance for general, non-convex functions. Current results make significant assumptions regarding the model. Global convergence guarantees only hold under specific architectures, activation units, and when models are extremely overparameterized (Du et al., 2019; Allen-Zhu et al., 2019; Zou et al., 2018; Zou and Gu, 2019). In this paper, we take a step back and explore what can be said about SGD under the most minimal assumptions. We only assume that the loss function is differentiable and L-smooth, the learning rate is sufficiently small and that models are initialized randomly. Clearly, we cannot prove general convergence to a global minimum under these assumptions. However, we can try and understand the dynamics of SGD - what types of execution patterns can and cannot happen.
Motivating example: Suppose hypothetically, that for every batch, the accuracy of the model after the Gradient Descent (GD) step on the batch is 100%. However, its accuracy on the set of previously seen batches (including the current batch) remains at 80%. Can this process go on forever? At first glance, this might seem like a possible scenario. However, we show that this cannot be the case. That is, if the above scenario repeats sufficiently often the model must eventually achieve 100% accuracy on the entire dataset.
To show the above, we identify a quantity of interest which we call the accuracy discrepancy (formally defined in Section 3). Roughly speaking, this is how much the model accuracy on a batch differs from the model accuracy on all previous batches in the epoch. We show that when this quantity (averaged over epochs) is higher than a certain threshold, we can guarantee that SGD converges to 100% accuracy on the dataset within O(1) epochs w.h.p1. We note that this threshold is global, that is, it only depends on the size of the dataset and the size of the batch. In doing so, we provide a sufficient condition for SGD convergence.
The above result is especially interesting when applied to weak models that cannot achieve perfect accuracy on the data. Imagine a dataset of size n with random labels, a model with n0.99 parameters, and a batch of size log n. The above implies that the accuracy discrepancy must eventually go below
1With high probability means a probability of at least 1− 1/n, where n is the size of the dataset.
the global threshold. In other words, the model cannot consistently make significant progress on batches. This is surprising because even though the model is underparameterized with respect to the entire dataset, it is extremely overparameterized with respect to the batch. We verify this observation experimentally (Appendix B). This holds for a single GD step, but what if we were to allow many GD steps per batch? Would this mean that we still cannot make significant progress on the batch? This leads us to consider the role of randomness in (non-stochastic) gradient descent.
It is well known that overparameterized models trained using SGD can perfectly fit datasets with random labels (Zhang et al., 2017). It is also known that when models are sufficiently overparameterized (and wide) GD with random initialization converges to a near global minimum (Du et al., 2019). This leads to an interesting question: how much randomness does GD require to escape local minima efficiently (in polynomial time)? It is obvious that without randomness we could initialize GD next to a local minimum, and it will never escape it. However, consider the case where we are provided an adversarial input and we can perturb that input (for example, by adding a random vector to it). How many bits of randomness are required to guarantee that after the perturbation GD achieves good accuracy on the input in polynomial time?
In Section 4 we show that if the amount of randomness is sublinear in the size of the dataset, then for any differentiable and L-smooth model class (e.g., a neural network architecture), there are datasets that require an exponential running time to achieve any non-trivial accuracy (i.e., better than 1/2 + o(1) for a two-class classification task), even if the model is extremely overparameterized. This result highlights the importance of randomness for the convergence of gradient methods. Specifically, it provides an indication of why SGD converges in certain situations and GD does not. We hope this result opens the door to the design of randomness in other versions of GD.
Outline of our techniques We consider batch SGD, where the dataset is shuffled once at the beginning of each epoch and then divided into batches. We do not deal with the generalization abilities of the model. Thus, the dataset is always the training set. In each epoch, the algorithm goes over the batches one by one, and performs gradient descent to update the model. This is the "vanilla" version of SGD, without any acceleration or regularization (for a formal definition, see Section 2). For the sake of analysis, we add a termination condition after every GD step: if the accuracy on the entire dataset is 100% we terminate. Thus, in our case, termination implies 100% accuracy.
To achieve our results, we make use of entropy compression, first considered by Moser and Tardos (2010) to prove a constructive version of the Lovász local lemma. Roughly speaking, the entropy compression argument allows one to bound the running time of a randomized algorithm2 by leveraging the fact that a random string of bits (the randomness used by the algorithm) is computationally incompressible (has high Kolmogorov complexity) w.h.p. If one can show that throughout the execution of the algorithm, it (implicitly) compresses the randomness it uses, then one can bound the number of iterations the algorithm may execute without terminating. To show that the algorithm has such a property, one would usually consider the algorithm after executing t iterations, and would try to show that just by looking at an "execution log" of the algorithm and some set of "hints", whose size together is considerably smaller than the number of random bits used by the algorithm, it is possible to reconstruct all of the random bits used by the algorithm.
We apply this approach to SGD with an added termination condition when the accuracy over the entire dataset is 100%. Thus, termination in our case guarantees perfect accuracy. The randomness we compress is the bits required to represent the random permutation of the data at every epoch. So indeed the longer SGD executes, the more random bits are generated. We show that under our assumptions it is possible to reconstruct these bits efficiently starting from the dataset X and the model after executing t epochs. The first step in allowing us to reconstruct the random bits of the permutation in each epoch is to show that under the L-smoothness assumption and a sufficiently small step size, SGD is reversible. That is, if we are given a model Wi+1 and a batch Bi such that Wi+1 results from taking a gradient step with model Wi where the loss is calculated with respect to Bi, then we can uniquely retrieve Wi using only Bi and Wi+1. This means that if we can efficiently encode the batches used in every epoch (i.e., using less bits than encoding the entire permutation of the data), we can also retrieve all intermediate models in that epoch (at no additional cost). We prove this claim in Section 2.
2We require that the number of the random bits used is proportional to the execution time of the algorithm. That is, the algorithm flips coins for every iteration of a loop, rather than just a constant number at the beginning of the execution.
The crux of this paper is to show that when the accuracy discrepancy is high for a certain epoch, the batches can indeed be compressed. To exemplify our techniques let us consider the scenario where, in every epoch, just after a single GD step on a batch we consistently achieve perfect accuracy on the batch. Let us consider some epoch of our execution, assume we have access to X, and let Wf be the model at the end of the epoch. If the algorithm did not terminate, then Wf has accuracy at most 1 − ε on the entire dataset (assume for simplicity that ε is a constant). Our goal is to retrieve the last batch of the epoch, Bf ⊂ X (without knowing the permutation of the data for the epoch). A naive approach would be to simply encode the indices in X of the elements in the batch. However, we can use Wf to achieve a more efficient encoding. Specifically, we know that Wf achieves 1.0 accuracy on Bf but only 1 − ε accuracy on X. Thus it is sufficient to encode the elements of Bf using a smaller subset of X (the elements classified correctly by Wf, which has size at most (1 − ε)|X|). This allows us to significantly compress Bf. Next, we can use Bf and Wf together with the reversibility of SGD to retrieve Wf−1. We can now repeat the above argument to compress Bf−1 and so on, until we are able to reconstruct all of the random bits used to generate the permutation of X in the epoch. This will result in a linear reduction in the number of bits required for the encoding.
In our analysis, we show a generalized version of the scenario above. We show that high accuracy discrepancy implies that entropy compression occurs. For our second result, we consider a modified SGD algorithm that instead of performing a single GD step per batch, first perturbs the batch with a limited amount of randomness and then performs GD until a desired accuracy on the batch is reached. We assume towards contradiction that GD can always reach the desired accuracy on the batch in subexponential time. This forces the accuracy discrepancy to be high, which guarantees that we always find a model with good accuracy. Applying this reasoning to models of sublinear size and data with random labels we arrive at a contradiction, as such models cannot achieve good accuracy on the data. This implies that when we limit the amount of randomness GD can use for perturbations, there must exist instances where GD requires exponential running time to achieve good accuracy.
Related work There has been a long line of research proving convergence bounds for SGD under various simplifying assumptions such as: linear networks (Arora et al., 2019; 2018), shallow networks (Safran and Shamir, 2018; Du and Lee, 2018; Oymak and Soltanolkotabi, 2019), etc. However, the most general results are the ones dealing with deep, overparameterized networks (Du et al., 2019; Allen-Zhu et al., 2019; Zou et al., 2018; Zou and Gu, 2019). All of these works make use of NTK (Neural Tangent Kernel)(Jacot et al., 2018) and show global convergence guarantees for SGD when the hidden layers have width at least poly(n,L) where n is the size of the dataset and L is the depth of the network. We note that the exponents of the polynomials are quite large.
A recent line of work by Zhang et al. (2022) notes that in many real world scenarios models do not converge to stationary points. They instead take a different approach which, similar to us, studies the dynamics of neural networks. They show that under certain assumptions (e.g., considering a fully connected architecture with sub-differentiable and coordinate-wise Lipschitz activations and weights lying on a compact set) the change in training loss gradually converges to 0, even if the full gradient norms do not vanish.
In (Du et al., 2017) it was shown that GD can take exponential time to escape saddle points, even under random initialization. They provide a highly engineered instance, while our results hold for many model classes of interest. Jin et al. (2017) show that adding perturbations during the executions of GD guarantees that it escapes saddle points. This is done by occasionally perturbing the parameters within a ball of radius r, where r depends on the properties of the function to be optimized. Therefore, a single perturbation must require an amount of randomness linear in the number of parameters.
2 PRELIMINARIES
We consider the following optimization problem. We are given an input (dataset) of size n. Let us denote X = {x_i}_{i=1}^n (our inputs contain both data and labels; we do not need to distinguish them for this work). We also associate every x ∈ X with a unique id of ⌈log n⌉ bits. We often consider batches of the input B ⊂ X. The size of the batch is denoted by b (all batches have the same size). We have some model whose parameters are denoted by W ∈ R^d, where d is the model dimension. We aim to optimize a goal function of the following type: f(W) = (1/n) ∑_{x∈X} f_x(W), where the functions f_x : R^d → R are completely determined by x ∈ X. We also define for every set A ⊆ X: f_A(W) = (1/|A|) ∑_{x∈A} f_x(W). Note that f_X = f.
We denote by acc(W, A) : R^d × 2^X → [0, 1] the accuracy of model W on the set A ⊆ X (where we use W to classify elements from X). Note that for x ∈ X, acc(W, x) is a binary value indicating whether x is classified correctly or not. We require that every f_x is differentiable and L-smooth: ∀W_1, W_2 ∈ R^d, ‖∇f_x(W_1) − ∇f_x(W_2)‖ ≤ L‖W_1 − W_2‖. This implies that every f_A is also differentiable and L-smooth. To see this consider the following:
‖∇f_A(W_1) − ∇f_A(W_2)‖ = ‖(1/|A|) ∑_{x∈A} ∇f_x(W_1) − (1/|A|) ∑_{x∈A} ∇f_x(W_2)‖
 = (1/|A|) ‖∑_{x∈A} (∇f_x(W_1) − ∇f_x(W_2))‖ ≤ (1/|A|) ∑_{x∈A} ‖∇f_x(W_1) − ∇f_x(W_2)‖ ≤ L‖W_1 − W_2‖
We state another useful property of fA:
Lemma 2.1. Let W1,W2 ∈ Rd and α < 1/L. For any A ⊆ X , if it holds that W1 − α∇fA(W1) = W2 − α∇fA(W2) then W1 = W2.
Proof. Rearranging the terms we get that W_1 − W_2 = α∇f_A(W_1) − α∇f_A(W_2). Now let us consider the norm of both sides: ‖W_1 − W_2‖ = ‖α∇f_A(W_1) − α∇f_A(W_2)‖ ≤ α·L‖W_1 − W_2‖ < ‖W_1 − W_2‖. Unless W_1 = W_2, the final strict inequality holds, which leads to a contradiction.
The above means that for a sufficiently small gradient step, the gradient descent process is reversible. That is, we can always recover the previous model parameters given the current ones, assuming that the batch is fixed. We use the notion of reversibility throughout this paper. However, in practice we only have finite precision, thus instead of R we work with the finite set F ⊂ R. Furthermore, due to numerical stability issues, we do not have access to exact gradients, but only to approximate values ∇̂fA. For the rest of this paper, we assume these values are L-smooth on all elements in Fd. That is,
∀W1,W2 ∈ Fd, A ⊆ X, ‖∇̂fA(W1)− ∇̂fA(W2)‖ ≤ L‖W1 −W2‖
This immediately implies that Lemma 2.1 holds even when precision is limited. Let us state the following theorem:
Theorem 2.2. Let W1,W2, ...,Wk ∈ Fd ⊂ Rd, A1, A2, ..., Ak ⊆ X and α < 1/L. If it holds that Wi = Wi−1 − α∇̂fAi−1(Wi−1), then given A1, A2, ..., Ak−1 and Wk we can retrieve W1.
Proof. Given W_k we iterate over all W ∈ F^d until we find W such that W_k = W − α∇̂f_{A_{k−1}}(W). Using Lemma 2.1, there is only a single element for which this equality holds, and thus W = W_{k−1}. We repeat this process until we retrieve W_1.
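As a small illustration of this reversibility (this sketch is not from the paper, and the quadratic loss is a hypothetical stand-in for f_A), the map g(W) = W_next + α∇f(W) is a contraction with constant αL < 1 when α < 1/L, so a simple fixed-point iteration recovers the unique pre-image of a gradient step, rather than the exhaustive search over F^d used in the proof.

```python
import numpy as np

# Minimal sketch: invert one GD step W_next = W - alpha * grad_f(W) when alpha < 1/L.
# The loss f(w) = ||w||^2 below is a hypothetical stand-in; its gradient is 2w, so L = 2.
grad_f = lambda w: 2.0 * w
alpha = 0.1                                   # step size, alpha < 1/L = 0.5

rng = np.random.default_rng(0)
W_prev = rng.normal(size=5)
W_next = W_prev - alpha * grad_f(W_prev)      # one forward GD step

# Inversion: iterate the contraction W <- W_next + alpha * grad_f(W).
W = np.zeros_like(W_next)
for _ in range(200):
    W = W_next + alpha * grad_f(W)

print(np.allclose(W, W_prev))                 # True: the GD step is reversible
```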
SGD We analyze the classic SGD algorithm presented in Algorithm 1. One difference to note in our algorithm, compared to the standard implementation, is the termination condition when the accuracy on the dataset is 100%. In practice the termination condition is not used; we add it only so that termination certifies that at some point in time the accuracy of the model is 100%.
Algorithm 1: SGD
1 i ← 1 // epoch counter
2 W_{1,1} is an initial model
3 while True do
4   Take a random permutation of X, divided into batches {B_{i,j}}_{j=1}^{n/b}
5   for j from 1 to n/b do
6     if acc(W_{i,j}, X) = 1 then Return W_{i,j}
7     W_{i,j+1} ← W_{i,j} − α∇f_{B_{i,j}}(W_{i,j})
8   i ← i + 1, W_{i,1} ← W_{i−1,n/b+1}
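For concreteness, the following is a minimal runnable sketch of Algorithm 1 (not the authors' code). It assumes the data live in numpy arrays, and `grad_batch` and `accuracy` are hypothetical callables standing in for ∇f_B and acc(W, ·).

```python
import numpy as np

def sgd(X, y, W, grad_batch, accuracy, alpha=0.01, b=32, max_epochs=1000):
    n = len(X)
    rng = np.random.default_rng(0)
    for _ in range(max_epochs):                            # line 3 (capped here)
        perm = rng.permutation(n)                          # line 4: shuffle once per epoch
        for j in range(0, n, b):
            if accuracy(W, X, y) == 1.0:                   # line 6: termination condition
                return W
            idx = perm[j:j + b]                            # batch B_{i,j}
            W = W - alpha * grad_batch(W, X[idx], y[idx])  # line 7: GD step on the batch
    return W
```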
Kolmogorov complexity The Kolmogorov complexity of a string x ∈ {0, 1}∗, denoted by K(x), is defined as the size of the smallest prefix Turing machine which outputs this string. We note that this definition depends on which encoding of Turing machines we use. However, one can show that this will only change the Kolmogorov complexity by a constant factor (Li and Vitányi, 2019).
We also use the notion of conditional Kolmogorov complexity, denoted by K(x | y). This is the length of the shortest prefix Turing machine which gets y as an auxiliary input and prints x. Note that the length of y does not count towards the size of the machine which outputs x. So it can be the case that |x| |y| but it holds that K(x | y) < K(x). We can also consider the Kolmogorov complexity of functions. Let g : {0, 1}∗ → {0, 1}∗ then K(g) is the size of the smallest Turing machine which computes the function g.
The following properties of Kolmogorov complexity will be of use. Let x, y, z be three strings:
• Extra information: K(x | y, z) ≤ K(x | z) +O(1) ≤ K(x, y | z) +O(1) • Subadditivity: K(xy | z) ≤ K(x | z, y)+K(y | z)+O(1) ≤ K(x | z)+K(y | z)+O(1)
Random strings have the following useful property (Li and Vitányi, 2019): Theorem 2.3. For an n bit string x chosen uniformly at random, and some string y independent of x (i.e., y is fixed before x is chosen) and any c ∈ N it holds that Pr[K(x | y) ≥ n− c] ≥ 1− 1/2c.
Entropy and KL-divergence Our proofs make extensive use of binary entropy and KL-divergence. In what follows we define these concepts and provide some useful properties.
Entropy: For p ∈ [0, 1] we denote by h(p) = −p log p− (1− p) log(1− p) the entropy of p. Note that h(0) = h(1) = 0.
KL-divergence: For p, q ∈ (0, 1) let D_KL(p ‖ q) = p log(p/q) + (1 − p) log((1 − p)/(1 − q)) be the Kullback–Leibler divergence (KL-divergence) between two Bernoulli distributions with parameters p, q. We also extend the above to the case where q, p ∈ {0, 1} as follows: D_KL(1 ‖ q) = D_KL(0 ‖ q) = 0, D_KL(p ‖ 1) = log(1/p), D_KL(p ‖ 0) = log(1/(1 − p)). This is just notation that agrees with Lemma 2.4. We also state the following result of Pinsker's inequality applied to Bernoulli random variables: D_KL(p ‖ q) ≥ 2(p − q)². Representing sets Let us state some useful bounds on the Kolmogorov complexity of sets. A more detailed explanation regarding the Kolmogorov complexity of sets and permutations, together with the proof of the lemma below, appears in Appendix A. Lemma 2.4. Let A ⊆ B, |B| = m, |A| = γm, and let g : B → {0, 1}. For any set Y ⊆ B let Y_1 = {x | x ∈ Y, g(x) = 1}, Y_0 = Y \ Y_1 and κ_Y = |Y_1|/|Y|. It holds that
K(A | B, g) ≤ mγ(log(e/γ) − D_KL(κ_B ‖ κ_A)) + O(log m)
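A small sketch (not from the paper) of the binary entropy and Bernoulli KL-divergence with the boundary conventions above, together with a numerical sanity check of Pinsker's inequality on interior points:

```python
import math

def h(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def dkl(p, q):
    if p in (0.0, 1.0):                 # D_KL(1||q) = D_KL(0||q) = 0 (convention above)
        return 0.0
    if q == 1.0:
        return math.log2(1 / p)         # D_KL(p||1) = log(1/p)
    if q == 0.0:
        return math.log2(1 / (1 - p))   # D_KL(p||0) = log(1/(1-p))
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

# Pinsker: D_KL(p||q) >= 2(p - q)^2 for p, q in (0, 1)
for p in (0.1, 0.5, 0.9):
    for q in (0.2, 0.5, 0.8):
        assert dkl(p, q) >= 2 * (p - q) ** 2 - 1e-12
```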
3 ACCURACY DISCREPANCY
First, let us define some useful notation (Wi,j , Bi,j are formally defined in Algorithm 1):
• λi,j = acc(Wi,j , X). This is the accuracy of the model in epoch i on the entire dataset X , before performing the GD step on batch j.
• ϕi,j = acc(Wi,j , Bi,j−1). This is the accuracy of the model on the (j − 1)-th batch in the i-th epoch after performing the GD step on the batch.
• Xi,j = ⋃j k=1Bi,k (note that ∀i,Xi,0 = ∅, Xi,n/b = X). This is the set of elements in the first j
batches of epoch i. Let us also denote nj = |Xi,j | = jb (Note that ∀j, i1, i2, |Xi1,j | = |Xi2,j |, thus i need not appear in the subscript).
• λ′_{i,j} = acc(W_{i,j}, X_{i,j−1}), λ″_{i,j} = acc(W_{i,j}, X \ X_{i,j−1}), where λ′_{i,j} is the accuracy of the model on the set of all previously seen batch elements, after performing the GD step on the (j − 1)-th batch, and λ″_{i,j} is the accuracy of the same model on all remaining elements (j-th batch onward). To avoid computing the accuracy on empty sets, λ′_{i,j} is defined for j ∈ [2, n/b + 1] and λ″_{i,j} is defined for j ∈ [1, n/b].
• ρ_{i,j} = D_KL(λ′_{i,j} ‖ ϕ_{i,j}) is the accuracy discrepancy for the j-th batch in iteration i, and ρ_i = ∑_{j=2}^{n/b+1} ρ_{i,j} is the accuracy discrepancy at iteration i.
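As a concrete illustration of these definitions (not from the paper), the following sketch instruments one epoch of SGD and accumulates ρ_i; `grad_batch`, `accuracy`, and `dkl` are the hypothetical helpers from the earlier sketches.

```python
import numpy as np

def epoch_accuracy_discrepancy(X, y, W, grad_batch, accuracy, dkl, alpha=0.01, b=32):
    n = len(X)
    perm = np.random.default_rng(0).permutation(n)
    rho_i, seen = 0.0, []
    for j in range(0, n, b):
        idx = perm[j:j + b]
        W = W - alpha * grad_batch(W, X[idx], y[idx])   # GD step on batch B_{i,j}
        phi = accuracy(W, X[idx], y[idx])               # phi: accuracy on the batch just used
        seen.extend(idx)                                # X_{i,j}: all batch elements seen so far
        lam_prime = accuracy(W, X[seen], y[seen])       # lambda': accuracy on all seen elements
        rho_i += dkl(lam_prime, phi)                    # rho_{i,j} = D_KL(lambda' || phi)
    return W, rho_i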
In our analysis, we consider t epochs of the SGD algorithm. Our goal for this section is to derive a connection between ∑t i=1 ρi and t. Bounding t: Our goal is to use the entropy compression argument to show that if ∑t i=1 ρi is sufficiently large we can bound t. Let us start by formally defining the random bits which the algorithm uses. Let ri be the string of random bits representing the random permutation of X at epoch i. As we consider t epochs, let r = r1r2 . . . rt.
Note that the number of bits required to represent an arbitrary permutation of [n] is given by: ⌈log(n!)⌉ = n log n − n log e + O(log n) = n log(n/e) + O(log n),
where in the above we used Stirling's approximation. Thus, it holds that |r| = t(n log(n/e) + O(log n)) and according to Theorem 2.3, with probability at least 1 − 1/n² it holds that K(r) ≥ tn log(n/e) − O(log n). In the following lemma we show how to use the model at every iteration to efficiently reconstruct the batch at that iteration, where the efficiency of reconstruction is expressed via ρ_i. Lemma 3.1. It holds w.h.p that ∀i ∈ [t]: K(r_i | W_{i+1,1}, X) ≤ n log(n/e) − bρ_i + (n/b)·O(log n)
Proof. Recall that B_{i,j} is the j-th batch in the i-th epoch, and let P_{i,j} be a permutation of B_{i,j} such that the order of the elements in B_{i,j} under P_{i,j} is the same as under r_i. Note that given X, if we know the partition into batches and all permutations, we can reconstruct r_i. According to Theorem 2.2, given W_{i,j} and B_{i,j−1} we can compute W_{i,j−1}. Let us denote by Y the encoding of this procedure. To implement Y we need to iterate over all possible vectors in F^d and over batch elements to compute the gradients. To express this program we require auxiliary variables of size at most O(log min{d, b}) = O(log n). Thus it holds that K(Y) = O(log n). Let us abbreviate B_{i,1}, B_{i,2}, ..., B_{i,j} as (B_{i,k})_{k=1}^j. We write the following.
K(r_i | X, W_{i+1,1}) ≤ K(r_i, Y | X, W_{i+1,1}) + O(1) ≤ K(r_i | X, W_{i+1,1}, Y) + K(Y | X, W_{i+1,1}) + O(1)
 ≤ O(log n) + K((B_{i,k}, P_{i,k})_{k=1}^{n/b} | X, W_{i+1,1}, Y)
 ≤ O(log n) + K((B_{i,k})_{k=1}^{n/b} | X, W_{i+1,1}, Y) + K((P_{i,k})_{k=1}^{n/b} | X, W_{i+1,1}, Y)
 ≤ O(log n) + K((B_{i,k})_{k=1}^{n/b} | X, W_{i+1,1}, Y) + ∑_{j=1}^{n/b} K(P_{i,j})
Let us bound K((B_{i,k})_{k=1}^{n/b} | X, W_{i+1,1}, Y) by repeatedly using the subadditivity and extra information properties of Kolmogorov complexity.
K((B_{i,k})_{k=1}^{n/b} | X, Y, W_{i+1,1})
 ≤ K(B_{i,n/b} | X, W_{i+1,1}) + K((B_{i,k})_{k=1}^{n/b−1} | X, Y, W_{i+1,1}, B_{i,n/b}) + O(1)
 ≤ K(B_{i,n/b} | X, W_{i+1,1}) + K((B_{i,k})_{k=1}^{n/b−1} | X, Y, W_{i,n/b}, B_{i,n/b}) + O(1)
 ≤ K(B_{i,n/b} | X, W_{i+1,1}) + K(B_{i,n/b−1} | X, W_{i,n/b}, B_{i,n/b}) + K((B_{i,k})_{k=1}^{n/b−2} | X, Y, W_{i,n/b−1}, B_{i,n/b}, B_{i,n/b−1}) + O(1)
 ≤ ... ≤ O(n/b) + ∑_{j=1}^{n/b} K(B_{i,j} | X, W_{i,j+1}, (B_{i,k})_{k=j+1}^{n/b}) ≤ O(n/b) + ∑_{j=1}^{n/b} K(B_{i,j} | X_{i,j}, W_{i,j+1})
where in the transitions we used the fact that given W_{i,j}, B_{i,j−1} and Y we can retrieve W_{i,j−1}. That is, we can always bound K(... | Y, W_{i,j}, B_{i,j−1}, ...) by K(... | Y, W_{i,j−1}, B_{i,j−1}, ...) + O(1). To encode the order P_{i,j} inside each batch, b log(b/e) + O(log b) bits are sufficient. Finally we get that: K(r_i | X, W_{i+1,1}) ≤ O(n/b) + ∑_{j=1}^{n/b} [K(B_{i,j} | X_{i,j}, W_{i,j+1}) + b log(b/e) + O(log b)].
Let us now bound K(B_{i,j−1} | X_{i,j−1}, W_{i,j}). Knowing X_{i,j−1}, we know that B_{i,j−1} ⊆ X_{i,j−1}. Thus we need to use W_{i,j} to compress B_{i,j−1}. We apply Lemma 2.4 with parameters A = B_{i,j−1}, B = X_{i,j−1}, γ = b/n_{j−1}, κ_A = ϕ_{i,j}, κ_B = λ′_{i,j} and g(x) = acc(W_{i,j}, x), and get the following:
K(B_{i,j−1} | X_{i,j−1}, W_{i,j}) ≤ b(log(e·n_{j−1}/b) − ρ_{i,j}) + O(log n_{j−1})
Adding b log(b/e) + O(log b) to the above, we get the following bound on every element in the sum:
b(log(e·n_{j−1}/b) − ρ_{i,j}) + b log(b/e) + O(log b) + O(log n_{j−1}) ≤ b log n_{j−1} − bρ_{i,j} + O(log n_{j−1})
Note that the most important term in the sum is −bρ_{i,j}. That is, the more the accuracy of W_{i,j} on the batch, B_{i,j−1}, differs from the accuracy of W_{i,j} on the set of elements containing the batch, X_{i,j−1}, the more efficiently we can represent the batch. Let us now bound the sum ∑_{j=2}^{n/b+1} [b log n_{j−1} − bρ_{i,j} + O(log n_{j−1})]. Let us first bound the sum over b log n_{j−1}:
∑_{j=2}^{n/b+1} b log n_{j−1} = ∑_{j=1}^{n/b} b log(jb) = ∑_{j=1}^{n/b} b(log b + log j) = n log b + b log((n/b)!) = n log b + n log(n/(b·e)) + O(log n) = n log(n/e) + O(log n)
Finally, we can write that:
K(r_i | X, W_{i+1,1}) ≤ O(n/b) + ∑_{j=2}^{n/b+1} [b log n_{j−1} − bρ_{i,j} + O(log n)] ≤ n log(n/e) − bρ_i + (n/b)·O(log n)
Using the above we know that when the value ρ_i is sufficiently high, the random permutation of the epoch can be compressed. We use the fact that random strings are incompressible to bound (1/t)∑_{i=1}^t ρ_i. Theorem 3.2. If the algorithm does not terminate by the t-th iteration, then it holds w.h.p that ∀t, (1/t)∑_{i=1}^t ρ_i ≤ O(n log n / b²).
Proof. Using arguments similar to Lemma 3.1, we can show that K(r, W_{1,1} | X) ≤ K(W_{t+1,1}) + O(t) + ∑_{k=1}^t K(r_k | X, W_{k+1,1}) (formally proved in Lemma A.3). Combining this with Lemma 3.1, we get that K(r, W_{1,1} | X) ≤ K(W_{t+1,1}) + t[n log(n/e) + (n/b)·O(log n) + O(log n)] − ∑_{i=1}^t bρ_i. Our proof implies that we can reconstruct not only r, but also W_{1,1} using X, W_{t+1,1}. Due to the incompressibility of random strings, we get that w.h.p K(r, W_{1,1} | X) ≥ d + tn log(n/e) − O(log n). Combining the lower and upper bound for K(r, W_{1,1} | X) we can get the following inequality:
d + tn log(n/e) − O(log n) ≤ d + t[n log(n/e) + (n/b)·O(log n) + O(log n)] − ∑_{i=1}^t bρ_i   (1)
⟹ (1/t)∑_{i=1}^t ρ_i ≤ n·O(log n)/b² + O(log n)/b + O(log n)/(bt) = O(n log n / b²)
where the first two terms on the right-hand side are denoted β(n, b).
Let β(n, b) be the exact value of the asymptotic expression in Inequality 1. Theorem 3.2 says that as long as SGD does not terminate the average accuracy discrepancy cannot be too high. Using the contra-positive we get the following useful corollary (proof is deferred to Appendix A.3).
Corollary 3.3. If ∀k, (1/k)∑_{i=1}^k ρ_i > β(n, b) + γ, for γ = Ω(b⁻¹ log n), then w.h.p SGD terminates within O(1) epochs.
The case for weak models Using the above we can also derive some interesting negative results when the model is not expressive enough to achieve perfect accuracy on the data. It must be the case that the average accuracy discrepancy tends below β(n, b) over time. We verify this experimentally on the MNIST dataset (Appendix B), showing that the average accuracy discrepancy indeed drops over time when the model is weak relative to the dataset. We also confirm that the dependence of the threshold on b is indeed inversely quadratic.
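To get a feel for the scale of the threshold, here is a quick numerical look (not from the paper) at β(n, b) = Θ(n log n / b²) for the batch sizes used in the experiments; the hidden constant is taken to be 1 purely for illustration.

```python
from math import log2

n = 60_000                        # size of MNIST's training set
for b in (50, 100, 200):
    print(b, n * log2(n) / b**2)  # doubling b shrinks the threshold by roughly 4x
```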
4 THE ROLE OF RANDOMNESS IN GD INITIALIZATION
Our goal for this section is to show that when the amount of randomness in the perturbation is too small, for any model architecture which is differentiable and L-smooth there are inputs for which Algorithm 2 requires exponential time to terminate, even for extremely overparameterized models.
Perturbation families Let us consider a family of 2^ℓ functions indexed by length-ℓ real-valued vectors Ψ_ℓ = {ψ_z}_{z∈R^ℓ}. Recall that throughout this paper we assume finite precision, thus every z can be represented using O(ℓ) bits. We say that Ψ_ℓ is a reversible perturbation family if it holds that ∀z ∈ R^ℓ, ψ_z is one-to-one. We often use the notation Ψ_ℓ(W), which means pick z ∈ R^ℓ uniformly at random, and apply ψ_z(W). We often refer to Ψ_ℓ as simply a perturbation.
We note that the above captures a wide range of natural perturbations. For example ψ_z(W) = W + W_z where W_z[i] = z[i mod ℓ]. Clearly ψ_z(W) is reversible.
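A minimal sketch (not from the paper) of this particular reversible perturbation family: knowing z, the perturbation is inverted by subtracting the tiled vector, so each ψ_z is one-to-one.

```python
import numpy as np

def psi(W, z):                         # psi_z(W) = W + W_z, with W_z[i] = z[i mod l]
    W_z = np.array([z[i % len(z)] for i in range(len(W))])
    return W + W_z

def psi_inverse(W_perturbed, z):       # invert by subtracting the same tiled vector
    W_z = np.array([z[i % len(z)] for i in range(len(W_perturbed))])
    return W_perturbed - W_z

rng = np.random.default_rng(0)
W = rng.normal(size=10)
z = rng.normal(size=3)                 # a length-l random vector, i.e. O(l) random bits
assert np.allclose(psi_inverse(psi(W, z), z), W)
```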
Gradient descent The GD algorithm we analyze is formally given in Algorithm 2.
Algorithm 2: GD(W, Y, δ)
Input: initial model W, dataset Y, desired accuracy δ
1 i = 1, T = o(2^m) + poly(d)
2 W = Ψ_ℓ(W)
3 while acc(W, Y) < δ and i < T do
4   W ← W − α∇f_Y(W)
5   i ← i + 1
6 Return W
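A minimal runnable sketch of Algorithm 2 (not the authors' code), reusing the `psi` sketch above; `grad_full` and `accuracy` are hypothetical stand-ins for ∇f_Y and acc(W, ·).

```python
import numpy as np

def gd(W, X_Y, y_Y, delta, grad_full, accuracy, psi, alpha=0.01, l=8, T=10_000):
    z = np.random.default_rng().normal(size=l)        # the only randomness: a length-l vector
    W = psi(W, z)                                      # line 2: perturb the model
    i = 1
    while accuracy(W, X_Y, y_Y) < delta and i < T:     # line 3
        W = W - alpha * grad_full(W, X_Y, y_Y)         # line 4: GD step on all of Y
        i += 1
    return W
```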
Let us denote by m the number of elements in Y. We make the following 2 assumptions for the rest of this section: (1) ℓ = o(m). (2) There exists T = o(2^m) + poly(d) and a perturbation family Ψ_ℓ such that for every input W, Y, within T iterations GD terminates and returns a solution that has at least δ accuracy on Y with constant probability. We show that the above two assumptions cannot hold together. That is, if the amount of randomness is sublinear in m, there must be instances with exponential running time, even when d ≫ m. To show the above, we define a variant of SGD, which uses GD as a subprocedure (Algorithm 3). Assume that our dataset is a binary classification task (it is easy to generalize our results to any number of classes), and that elements in X are assigned random labels. Furthermore, let us assume that d = o(n), e.g., d = n^{0.99}. It holds that w.h.p we cannot train a model with d parameters that achieves any accuracy better than 1/2 + o(1) on X (Lemma A.4). Let us take ε to be a small constant. We show that if assumptions 1 and 2 hold, then Algorithm 3 must terminate and return a model with 1/2 + Θ(1) accuracy on X, leading to a contradiction. Our analysis follows the same line as the previous section, and uses the same notation.
Reversibility First, we must show that Algorithm 3 is still reversible. Note that we can take the same approach as before, where the only difference is that in order to get W_{i,j} from W_{i,j+1} we must now get all the intermediate values from the call to GD. As the GD steps are applied to the same batch, this amounts to applying Lemma 2.1 several times instead of once per iteration. More specifically, we must encode for every batch a number T_{i,j} = o(2^b) + poly(d) = o(2^b) + poly(n) (recall that d = o(n)) and apply Lemma 2.1 T_{i,j} times.
This results in ψ_z(W_{i,j}). If we know z, Ψ_ℓ then we can retrieve ψ_z and efficiently retrieve W_{i,j} using only O(log d) = O(log n) additional bits (by iterating over all values in F^d). Therefore, in every iteration we have the following additional terms: log T + O(log n) + ℓ = o(b) + O(log n). Summing over n/b iterations we get o(n) per epoch.
Algorithm 3: SGD'
1 i ← 1 // epoch counter
2 W_{1,1} is an initial model
3 while True do
4   Take a random permutation of X, divided into batches {B_{i,j}}_{j=1}^{n/b}
5   for j from 1 to n/b do
6     if acc(W_{i,j}, X) ≥ 1/(2(1 − ε)) then Return W_{i,j}
7     W_{i,j+1} ← GD(W_{i,j}, B_{i,j}, 1/(2(1 − 2ε)))
8   i ← i + 1, W_{i,1} ← W_{i−1,n/b+1}
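A short sketch of Algorithm 3 (not the authors' code), reusing the `gd` and `psi` sketches above; eps stands for the constant ε from the text, and the helpers remain hypothetical.

```python
import numpy as np

def sgd_prime(X, y, W, grad_full, accuracy, psi, eps=0.1, b=64, max_epochs=100):
    n = len(X)
    rng = np.random.default_rng(0)
    for _ in range(max_epochs):
        perm = rng.permutation(n)
        for j in range(0, n, b):
            if accuracy(W, X, y) >= 1 / (2 * (1 - eps)):            # line 6: termination check
                return W
            idx = perm[j:j + b]
            W = gd(W, X[idx], y[idx], 1 / (2 * (1 - 2 * eps)),      # line 7: perturb + GD on batch
                   grad_full, accuracy, psi)
    return W
```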
We state the following lemma, analogous to Lemma 3.1. Lemma 4.1. For Algorithm 3 it holds w.h.p that ∀i ∈ [t]: K(r_i | W_{i+1,1}, X, Ψ_ℓ) ≤ n log(n/e) − bρ_i + β(n, b) + o(n).
We show that under our assumptions, Algorithm 3 must terminate, leading to a contradiction. Lemma 4.2. Algorithm 3 with b = Ω(log n) terminates within O(T ) iterations w.h.p.
Proof. Our goal is to lower bound ρ_i = ∑_{j=2}^{n/b+1} D_KL(λ′_{i,j} ‖ ϕ_{i,j}). Let us first upper bound λ′_{i,j}. Using the fact that λ′_{i,j} ≤ nλ_{i,j}/((j−1)b) (Lemma A.5) combined with the fact that λ_{i,j} ≤ 1/(2(1 − ε)) as long as the algorithm does not terminate, we get that ∀j ∈ [2, n/b + 1] it holds that λ′_{i,j} ≤ n/(2(1 − ε)(j − 1)b). Using the above we conclude that as long as we do not terminate it must hold that λ′_{i,j} ≤ 1/(2(1 − ε)²) whenever j ∈ I = [(1 − ε)n/b + 1, n/b + 1]. That is, λ′_{i,j} must be close to λ_{i,j} towards the end of the epoch, and therefore must be sufficiently small. Note that |I| ≥ εn/b. We know that as long as the algorithm does not terminate it holds that ϕ_{i,j} > 1/(2(1 − 2ε)) with some constant probability. Furthermore, this probability is taken over the randomness used in the call to GD (the randomness of the perturbation). This fact allows us to use Hoeffding-type bounds for the ϕ_{i,j} variables. If ϕ_{i,j} > 1/(2(1 − 2ε)) we say that it is good. Therefore in expectation a constant fraction of ϕ_{i,j}, j ∈ I are good. Applying a Hoeffding-type bound we get that w.h.p a constant fraction of ϕ_{i,j}, j ∈ I are good. Denote these good indices by I_g ⊆ I. We are now ready to bound ρ_i.
ρ_i = ∑_{j=2}^{n/b+1} D_KL(λ′_{i,j} ‖ ϕ_{i,j}) ≥ ∑_{j∈I_g} D_KL(λ′_{i,j} ‖ ϕ_{i,j}) ≥ ∑_{j∈I_g} D_KL(1/(2(1 − ε)²) ‖ 1/(2(1 − 2ε)))
 ≥ Θ(εn/b) · (1/(2(1 − 2ε)) − 1/(2(1 − ε)²))² = Θ(n/b) · ε⁵ = Θ(n/b)
where in the transitions we used the fact that KL-divergence is non-negative, and Pinsker's inequality. Finally, requiring that b = Ω(log n) we get that bρ_i − β(n, b) − o(n) = Θ(n) − Θ(n log n / log² n) − o(n) = Θ(n). Following the same calculation as in Corollary 3.3, this guarantees termination within O(log n / n) epochs, or O(T · (n/b) · (log n)/n) = O(T) iterations (gradient descent steps).
The above leads to a contradiction. It is critical to note that the above does not hold if T = 2^m = 2^b or if ℓ = Θ(n), as both would imply that the o(n) term becomes Θ(n). We state our main theorem: Theorem 4.3. For any differentiable and L-smooth model class with d parameters and a perturbation class Ψ_ℓ such that ℓ = o(m) there exists an input dataset Y of size m such that GD requires Ω(2^m) iterations to achieve δ accuracy on Y, even if δ = 1/2 + Θ(1) and d ≫ m.
A OMITTED PROOFS AND EXPLANATIONS
A.1 REPRESENTING SETS AND PERMUTATIONS
Throughout this paper, we often consider the value K(A) where A is a set. Here the program computing A need only output the elements of A (in any order). When considering K(A | B) such that A ⊆ B, it holds that K(A | B) ≤ ⌈log C(|B|, |A|)⌉ + O(log |B|). To see why, consider Algorithm 4. In the algorithm, i_A is the index of A when considering some ordering of all subsets of B of size |A|. Thus ⌈log C(|B|, |A|)⌉ bits are sufficient to represent i_A. The remaining variables i, m_A, m_B and any additional variables required to construct the set C are all of size at most O(log |B|), and there is at most a constant number of them.
Algorithm 4: Compute A given B as input
1 m_A ← |A|, m_B ← |B|, i ← 0, i_A is a target index
2 for every subset C ⊆ B s.t. |C| = m_A (in a predetermined order) do
3   if i = i_A then Print C
4   i ← i + 1
During our analysis, we often bound the Kolmogorov complexity of tuples of objects. For example, K(A, P | B) where A ⊆ B is a set and P : A → [|A|] is a permutation of A (note that A, P together form an ordered tuple of the elements of A). Instead of explicitly presenting a program such as Algorithm 4, we say that if K(A | B) ≤ c_1 and c_2 bits are sufficient to represent P, then K(A, P | B) ≤ c_1 + c_2 + O(1). This just means that we directly encode P as a variable in the program that computes A given B and use it in the code. For example, we can add a permutation to Algorithm 4 and output an ordered tuple of elements rather than a set. Note that when representing a permutation of A, |A| = k, instead of using functions we can decide on some predetermined ordering of all permutations of k elements and represent a permutation as its number in this ordering, which requires ⌈log k!⌉ bits.
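A small sketch (not from the paper) of the encoding behind Algorithm 4: a k-subset A of B is described by its index in a fixed enumeration of all k-subsets, costing ⌈log C(|B|, k)⌉ bits plus small overhead.

```python
from itertools import combinations
from math import comb, log2, ceil

B = list(range(12))
A = (2, 5, 7)
k = len(A)

subsets = list(combinations(B, k))        # the predetermined order of k-subsets
i_A = subsets.index(A)                    # encoding: the index of A in that order
print(ceil(log2(comb(len(B), k))))        # bits sufficient for i_A (here: 8)

# Decoding (Algorithm 4): walk the same order until the i_A-th subset is reached.
for i, C in enumerate(combinations(B, k)):
    if i == i_A:
        print(C)                          # recovers (2, 5, 7)
        break
```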
A.2 OMITTED PROOFS FOR SECTION 2
Lemma A.1. For p ∈ [0, 1] it holds that h(p) ≤ p log(e/p).
Proof. Let us write our lemma as:
h(p) = −p log p− (1− p) log(1− p) ≤ p log(e/p)
Rearranging we get:
−(1 − p) log(1 − p) ≤ p log p + p log(1/p) + p log e ⟹ −(1 − p) log(1 − p) ≤ p log e ⟹ −ln(1 − p) ≤ p/(1 − p)
Note that −ln(1 − p) = ∫_0^p 1/(1 − x) dx ≤ p · 1/(1 − p), where in the final transition we use the fact that 1/(1 − x) is monotonically increasing on [0, 1]. This completes the proof.
Lemma A.2. For p, γ, q ∈ [0, 1] where pγ ≤ q and (1 − p)γ ≤ (1 − q), it holds that
q·h(pγ/q) + (1 − q)·h((1 − p)γ/(1 − q)) ≤ h(γ) − γ·D_KL(p ‖ q)
Proof. Let us expand the left-hand side using the definition of entropy:
q·h(pγ/q) + (1 − q)·h((1 − p)γ/(1 − q))
 = −q( (pγ/q) log(pγ/q) + (1 − pγ/q) log(1 − pγ/q) )
  − (1 − q)( ((1 − p)γ/(1 − q)) log((1 − p)γ/(1 − q)) + (1 − (1 − p)γ/(1 − q)) log(1 − (1 − p)γ/(1 − q)) )
 = −( pγ log(pγ/q) + (q − pγ) log((q − pγ)/q) )
  − ( (1 − p)γ log((1 − p)γ/(1 − q)) + ((1 − q) − (1 − p)γ) log(((1 − q) − (1 − p)γ)/(1 − q)) )
 = −γ log γ − γ·D_KL(p ‖ q) − (q − pγ) log((q − pγ)/q) − ((1 − q) − (1 − p)γ) log(((1 − q) − (1 − p)γ)/(1 − q))
where in the last equality we simply sum the first terms on both lines. To complete the proof we use the log-sum inequality for the last expression. The log-sum inequality states that: let {a_k}_{k=1}^m, {b_k}_{k=1}^m be non-negative numbers and let a = ∑_{k=1}^m a_k, b = ∑_{k=1}^m b_k; then ∑_{k=1}^m a_k log(a_k/b_k) ≥ a log(a/b). We apply the log-sum inequality with m = 2, a_1 = q − pγ, a_2 = (1 − q) − (1 − p)γ, a = 1 − γ and b_1 = q, b_2 = 1 − q, b = 1, getting that:
(q − pγ) log((q − pγ)/q) + ((1 − q) − (1 − p)γ) log(((1 − q) − (1 − p)γ)/(1 − q)) ≥ (1 − γ) log(1 − γ)
Putting everything together we get that
q·h(pγ/q) + (1 − q)·h((1 − p)γ/(1 − q)) ≤ −γ log γ − (1 − γ) log(1 − γ) − γ·D_KL(p ‖ q) = h(γ) − γ·D_KL(p ‖ q)
Lemma 2.4. Let A ⊆ B, |B| = m, |A| = γm, and let g : B → {0, 1}. For any set Y ⊆ B let Y_1 = {x | x ∈ Y, g(x) = 1}, Y_0 = Y \ Y_1 and κ_Y = |Y_1|/|Y|. It holds that
K(A | B, g) ≤ mγ(log(e/γ) − D_KL(κ_B ‖ κ_A)) + O(log m)
Proof. The algorithm is very similar to Algorithm 4; the main difference is that we must first compute B_1, B_0 from B using g, and select A_1, A_0 from B_1, B_0, respectively, using two indices i_{A_1}, i_{A_0}. Finally we print A = A_1 ∪ A_0. We can now bound the number of bits required to represent i_{A_1}, i_{A_0}. Note that |B_1| = κ_B·m and |B_0| = (1 − κ_B)·m. Note that for A_1 we pick γκ_A·m elements from κ_B·m elements and for A_0 we pick γ(1 − κ_A)·m elements from (1 − κ_B)·m elements. The number of bits required to represent this selection is:
⌈log C(κ_B m, γκ_A m)⌉ + ⌈log C((1 − κ_B)m, γ(1 − κ_A)m)⌉ ≤ κ_B m·h(γκ_A/κ_B) + (1 − κ_B)m·h(γ(1 − κ_A)/(1 − κ_B))
 ≤ m(h(γ) − γ·D_KL(κ_B ‖ κ_A)) ≤ mγ(log(e/γ) − D_KL(κ_B ‖ κ_A))
where in the first inequality we used the fact that ∀0 ≤ k ≤ n, log C(n, k) ≤ n·h(k/n), Lemma A.2 in the second transition, and Lemma A.1 in the third transition. Note that when κ_A ∈ {0, 1} we only have one term of the initial sum. For example, for κ_A = 1 we get:
⌈log C(κ_B m, γκ_A m)⌉ = ⌈log C(κ_B m, γm)⌉ ≤ κ_B m·h(γ/κ_B) ≤ mγ log(eκ_B/γ) = mγ(log(e/γ) − log(1/κ_B))
and a similar computation yields mγ(log(e/γ) − log(1/(1 − κ_B))) for κ_A = 0. Finally, the additional O(log m) factor is due to various counters and variables, similarly to Algorithm 4.
A.3 OMITTED PROOFS FOR SECTION 3
Lemma A.3. It holds that K(r, W_{1,1} | X) ≤ K(W_{t+1,1}) + O(t) + ∑_{k=1}^t K(r_k | X, W_{k+1,1}).
Proof. Similarly to the definition of Y in Lemma 3.1, let Y ′ be the program which receives X, ri,Wi+1,1 as input and repeatedly applies Theorem 2.2 to retrieve Wi,1. As Y ′ just needs to reconstruct all batches from X, ri and call Y for n/b times, it holds that K(Y ′) = O(log n). Using the subadditivity and extra information properties of K(), together with the fact that W1,1 can be reconstructed given X,Wt+1,1, Y ′, we write the following:
K(r | X) ≤ K(r, W_{1,1}, Y′, W_{t+1,1} | X) + O(1) ≤ K(W_{1,1}, W_{t+1,1}, Y′ | X) + K(r | X, Y′, W_{t+1,1}) + O(1) ≤ K(W_{t+1,1} | X) + K(r | X, Y′, W_{t+1,1}) + O(log n)
First, we note that: ∀i ∈ [t − 1], K(r_i | X, Y′, W_{i+2,1}, r_{i+1}) ≤ K(r_i | X, Y′, W_{i+1,1}) + O(1), where in the last inequality we simply execute Y′ on X, W_{i+2,1}, r_{i+1} to get W_{i+1,1}. Let us write:
K(r_1 r_2 ... r_t | X, Y′, W_{t+1,1})
 ≤ K(r_t | X, Y′, W_{t+1,1}) + K(r_1 r_2 ... r_{t−1} | X, Y′, W_{t+1,1}, r_t) + O(1)
 ≤ K(r_t | X, W_{t+1,1}) + K(r_1 r_2 ... r_{t−1} | X, Y′, W_{t,1}) + O(1)
 ≤ K(r_t | X, W_{t+1,1}) + K(r_{t−1} | X, W_{t,1}) + K(r_1 r_2 ... r_{t−2} | X, Y′, W_{t−1,1}) + O(1)
 ≤ ... ≤ O(t) + ∑_{k=1}^t K(r_k | X, W_{k+1,1})
Combining everything together we get that:
K(r | X) ≤ K(W_{t+1,1}) + O(t) + ∑_{k=1}^t K(r_k | X, W_{k+1,1})
Corollary 3.3. If ∀k, (1/k)∑_{i=1}^k ρ_i > β(n, b) + γ, for γ = Ω(b⁻¹ log n), then w.h.p SGD terminates within O(1) epochs.
Proof. Let us simplify Inequality 1.
d + tn log(n/e) − O(log n) ≤ d + t[n log(n/e) + (n/b)·O(log n) + O(log n)] − ∑_{i=1}^t bρ_i
⟹ −O(log n) ≤ t[(n/b)·O(log n) + O(log n)] − ∑_{i=1}^t bρ_i
⟹ (∑_{i=1}^t ρ_i) − tβ(n, b) ≤ O(log n)/b
Our condition implies that ∑_{i=1}^t ρ_i > t(β(n, b) + γ). This allows us to rewrite the above inequality as:
tγ ≤ O(log n)/b ⟹ t = O(1)
A.4 OMITTED PROOFS FOR SECTION 4
Lemma A.4. Let X be some set of size n and let f : X → {0, 1} be a random binary function. It holds w.h.p that there exists no function g : X → {0, 1} such that K(g | X) = o(n) and g agrees with f on n(1/2 + Θ(1)) elements in X .
Proof. Let us assume that g agrees with f on all except εn elements in X and bound ε. Using Theorem 2.3, it holds w.h.p that K(f | X) > n − O(log n). We show that if ε is sufficiently far from 1/2, we can use g to compress f below its Kolmogorov complexity, arriving at a contradiction.
We can construct f using g and the set of values on which they do not agree, which we denote by D. This set is of size εn and therefore can be encoded using log C(n, εn) ≤ n·h(ε) bits (recall that ∀0 ≤ k ≤ n, log C(n, k) ≤ n·h(k/n)) given X (i.e., K(D | X) ≤ n·h(ε)). To compute f(x) using D, g we simply check if x ∈ D and output g(x) or 1 − g(x) accordingly. The total number of bits required for the above is K(g, D | X) ≤ o(n) + n·h(ε) (where auxiliary variables are subsumed in the o(n) term). We conclude that K(f | X) ≤ o(n) + n·h(ε). Combining the upper and lower bounds on K(f | X), it must hold that o(n) + n·h(ε) ≥ n − O(log n) ⟹ h(ε) ≥ 1 − o(1). This inequality only holds when ε = 1/2 + o(1).
Lemma A.5. It holds that 1 − n(1 − λ_{i,j})/((j − 1)b) ≤ λ′_{i,j} ≤ nλ_{i,j}/((j − 1)b).
Proof. We can write the following for j ∈ [2, n/b + 1]:
nλ_{i,j} = ∑_{x∈X} acc(W_{i,j}, x) = ∑_{x∈X_{i,j−1}} acc(W_{i,j}, x) + ∑_{x∈X\X_{i,j−1}} acc(W_{i,j}, x) = (j − 1)b·λ′_{i,j} + (n − (j − 1)b)·λ″_{i,j}
⟹ λ′_{i,j} = [nλ_{i,j} − (n − (j − 1)b)·λ″_{i,j}] / ((j − 1)b)
Setting λ″_{i,j} = 0 we get
λ′_{i,j} = [nλ_{i,j} − (n − (j − 1)b)·λ″_{i,j}] / ((j − 1)b) ≤ nλ_{i,j}/((j − 1)b)
and setting λ″_{i,j} = 1 we get
λ′_{i,j} = [nλ_{i,j} − (n − (j − 1)b)·λ″_{i,j}] / ((j − 1)b) ≥ 1 − n(1 − λ_{i,j})/((j − 1)b)
B EXPERIMENTS
Experimental setup We perform experiments on the MNIST dataset and on the same dataset with random labels (MNIST-RAND). We use SGD with learning rate 0.01, without momentum or regularization. We use a simple fully connected architecture with a single hidden layer, GELU activation units (a differentiable alternative to ReLU), and cross-entropy loss. We run experiments with a hidden layer of size 2, 5, and 10, and consider batches of size 50, 100, and 200. For each of the datasets we run all configurations of architecture size and batch size for 300 epochs.
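A minimal PyTorch sketch of this setup (not the authors' code); the hidden size and batch size below are two of the configurations listed above, and for MNIST-RAND one would additionally replace the labels with uniformly random ones.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

hidden, batch = 10, 100
train = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())
loader = DataLoader(train, batch_size=batch, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(784, hidden), nn.GELU(),
                      nn.Linear(hidden, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)   # no momentum, no regularization
loss_fn = nn.CrossEntropyLoss()

for epoch in range(300):
    for xb, yb in loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
```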
Results Figure 2 and Figure 3 show the accuracy discrepancy and accuracy over epochs for all configurations for MNIST and MNIST-RAND respectively. Figure 4 and Figure 5 show for every batch size the accuracy discrepancy of all three model sizes on the same plot. All of the values displayed are averaged over epochs, i.e., the value for epoch t is (1/t)∑_{i≤t} x_i.
First, we indeed observe that the scale of the accuracy discrepancy is inversely quadratic in the batch size, as our analysis suggests. Second, for MNIST-RAND we can clearly see that the average accuracy discrepancy tends below a certain threshold over time, where the threshold appears to be independent of the number of model parameters. We see similar results for MNIST when the model is small, but not when it is large. This is because the model does not reach its capacity within the timeframe of our experiment. | 1. What is the main contribution of the paper regarding batch SGD?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concrete learning tasks for which the results of the paper are easily illustrated?
5. How does the setting of the paper impact its applicability to real-world situations? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies batch SGD using the theory of Kolmogorov complexity to deduce novel observations about the behavior of SGD. For instance, the paper shows that certain patterns of classification accuracy obtained on the batches would allow the randomness of the batches (the permutation used to select the batches) to be compressed, and this implies a bound on the number of iterations SGD requires before achieving 100% test accuracy on the entire training data set. The paper also considers a lower bound on the amount of randomness needed to escape local minima.
Strengths And Weaknesses
I found the paper to be very interesting and refreshing, as it adopts a perspective on SGD that I had not considered before. I also appreciate the focus on minimal assumptions, as it seems important to understand the general behavior of SGD.
I believe that the main weakness of the paper is that it is quite abstract, and it is not clear to what extent the findings are applicable to real-world situations. To this extent, more concrete examples in the main text would be appreciated (e.g., are there concrete learning tasks for which the results of the paper are easily illustrated?). Also, it is worth noting that the setting may be too general, and the results of the paper may not be typical for situations encountered in practice.
Clarity, Quality, Novelty And Reproducibility
The paper is very novel and generally of good quality. However, the clarity of the paper could be improved by discussing the implications more explicitly. I also point out some typos:
Throughout the paper: “minimas” should be “minima”
Pg. 1 footnote, “mean” -> “means”
Pg. 2: “perturbe”
Pg. 2, “The randomness we compress are the bits” -> “The randomness we compress is the bits”
Pg. 3, “Showing that indeed…” is a sentence fragment
Pg. 3, “pertrubes”
Pg. 3, by R[0,1] do you mean [0, 1]?
Pg. 5, “private case of Pinsker’s inequality” what does this mean?
Pg. 6, “Where in the above…” the word “Where” should not be capitalized; similarly in the paragraph below
Pg. 8, “Even for extremely…” is a sentence fragment
Pg. 8, “there must be instances with exponential running time” This is not clear from the paragraph; just because something is not polynomial does not necessarily mean it is exponential.
Thm. 4.3, “Even if…” is a sentence fragment |
ICLR | Title
SGD Through the Lens of Kolmogorov Complexity
Abstract
We initiate a thorough study of the dynamics of stochastic gradient descent (SGD) under minimal assumptions using the tools of entropy compression. Specifically, we characterize a quantity of interest which we refer to as the accuracy discrepancy. Roughly speaking, this measures the average discrepancy between the model accuracy on batches and large subsets of the entire dataset. We show that if this quantity is sufficiently large, then SGD finds a model which achieves perfect accuracy on the data in O(1) epochs. On the contrary, if the model cannot perfectly fit the data, this quantity must remain below a global threshold, which only depends on the size of the dataset and batch. We use the above framework to lower bound the amount of randomness required to allow (non-stochastic) gradient descent to escape from local minima using perturbations. We show that even if the model is extremely overparameterized, at least a linear (in the size of the dataset) number of random bits are required to guarantee that GD escapes local minima in subexponential time.
1 INTRODUCTION
Stochastic gradient descent (SGD) is at the heart of modern machine learning. However, we are still lacking a theoretical framework that explains its performance for general, non-convex functions. Current results make significant assumptions regarding the model. Global convergence guarantees only hold under specific architectures, activation units, and when models are extremely overparameterized (Du et al., 2019; Allen-Zhu et al., 2019; Zou et al., 2018; Zou and Gu, 2019). In this paper, we take a step back and explore what can be said about SGD under the most minimal assumptions. We only assume that the loss function is differentiable and L-smooth, the learning rate is sufficiently small and that models are initialized randomly. Clearly, we cannot prove general convergence to a global minimum under these assumptions. However, we can try and understand the dynamics of SGD - what types of execution patterns can and cannot happen.
Motivating example: Suppose hypothetically, that for every batch, the accuracy of the model after the Gradient Descent (GD) step on the batch is 100%. However, its accuracy on the set of previously seen batches (including the current batch) remains at 80%. Can this process go on forever? At first glance, this might seem like a possible scenario. However, we show that this cannot be the case. That is, if the above scenario repeats sufficiently often the model must eventually achieve 100% accuracy on the entire dataset.
To show the above, we identify a quantity of interest which we call the accuracy discrepancy (formally defined in Section 3). Roughly speaking, this is how much the model accuracy on a batch differs from the model accuracy on all previous batches in the epoch. We show that when this quantity (averaged over epochs) is higher than a certain threshold, we can guarantee that SGD convergence to 100% accuracy on the dataset within O(1) epochs w.h.p1. We note that this threshold is global, that is, it only depends on the size of the dataset and the size of the batch. In doing so, we provide a sufficient condition for SGD convergence.
The above result is especially interesting when applied to weak models that cannot achieve perfect accuracy on the data. Imagine a dataset of size n with random labels, a model with n0.99 parameters, and a batch of size log n. The above implies that the accuracy discrepancy must eventually go below
1With high probability means a probability of at least 1− 1/n, where n is the size of the dataset.
the global threshold. In other words, the model cannot consistently make significant progress on batches. This is surprising because even though the model is underparameterized with respect to the entire dataset, it is extremely overparameterized with respect to the batch. We verify this observation experimentally (Appendix B). This holds for a single GD step, but what if we were to allow many GD steps per batch, would this mean that we still cannot make significant progress on the batch? This leads us to consider the role of randomness in (non-stochastic) gradient descent.
It is well known that overparameterized models trained using SGD can perfectly fit datasets with random labels (Zhang et al., 2017). It is also known that when models are sufficiently overparameterized (and wide) GD with random initialization convergences to a near global minimum (Du et al., 2019). This leads to an interesting question: how much randomness does GD require to escape local minima efficiently (in polynomial time)? It is obvious that without randomness we could initialize GD next to a local minimum, and it will never escape it. However, what about the case where we are provided an adversarial input and we can perturb that input (for example, by adding a random vector to it), how many bits of randomness are required to guarantee that after the perturbation GD achieves good accuracy on the input in polynomial time?
In Section 4 we show that if the amount of randomness is sublinear in the size of the dataset, then for any differentiable and L-smooth model class (e.g., a neural network architecture), there are datasets that require an exponential running time to achieve any non-trivial accuracy (i.e., better than 1/2 + o(1) for a two-class classification task), even if the model is extremely overparameterized. This result highlights the importance of randomness for the convergence of gradient methods. Specifically, it provides an indication of why SGD converges in certain situations and GD does not. We hope this result opens the door to the design of randomness in other versions of GD.
Outline of our techniques We consider batch SGD, where the dataset is shuffled once at the beginning of each epoch and then divided into batches. We do not deal with the generalization abilities of the model. Thus, the dataset is always the training set. In each epoch, the algorithm goes over the batches one by one, and performs gradient descent to update the model. This is the "vanilla" version of SGD, without any acceleration or regularization (for a formal definition, see Section 2). For the sake of analysis, we add a termination condition after every GD step: if the accuracy on the entire dataset is 100% we terminate. Thus, in our case, termination implies 100% accuracy.
To achieve our results, we make use of entropy compression, first considered by Moser and Tardos (2010) to prove a constructive version of the Lovász local lemma. Roughly speaking, the entropy compression argument allows one to bound the running time of a randomized algorithm2 by leveraging the fact that a random string of bits (the randomness used by the algorithm) is computationally incompressible (has high Kolmogorov complexity) w.h.p. If one can show that throughout the execution of the algorithm, it (implicitly) compresses the randomness it uses, then one can bound the number of iterations the algorithm may execute without terminating. To show that the algorithm has such a property, one would usually consider the algorithm after executing t iterations, and would try to show that just by looking at an "execution log" of the algorithm and some set of "hints", whose size together is considerably smaller than the number of random bits used by the algorithm, it is possible to reconstruct all of the random bits used by the algorithm.
We apply this approach to SGD with an added termination condition when the accuracy over the entire dataset is 100%. Thus, termination in our case guarantees perfect accuracy. The randomness we compress is the bits required to represent the random permutation of the data at every epoch. So indeed the longer SGD executes, the more random bits are generated. We show that under our assumptions it is possible to reconstruct these bits efficiently starting from the dataset X and the model after executing t epochs. The first step in allowing us to reconstruct the random bits of the permutation in each epoch is to show that under the L-smoothness assumption and a sufficiently small step size, SGD is reversible. That is, if we are given a model Wi+1 and a batch Bi such that Wi+1 results from taking a gradient step with model Wi where the loss is calculated with respect to Bi, then we can uniquely retrieve Wi using only Bi and Wi+1. This means that if we can efficiently encode the batches used in every epoch (i.e., using less bits than encoding the entire permutation of the data), we can also retrieve all intermediate models in that epoch (at no additional cost). We prove this claim in Section 2.
2We require that the number of the random bits used is proportional to the execution time of the algorithm. That is, the algorithm flips coins for every iteration of a loop, rather than just a constant number at the beginning of the execution.
The crux of this paper is to show that when the accuracy discrepancy is high for a certain epoch, the batches can indeed be compressed. To exemplify our techniques let us consider the scenario where, in every epoch, just after a single GD step on a batch we consistently achieve perfect accuracy on the batch. Let us consider some epoch of our execution, assume we have access to X , and let Wf be the model at the end of the epoch. If the algorithm did not terminate, then Wf has accuracy at most 1− on the entire dataset (assume for simplicity that is a constant). Our goal is to retrieve the last batch of the epoch, Bf ⊂ X (without knowing the permutation of the data for the epoch). A naive approach would be to simply encode the indices in X of the elements in the batch. However, we can use Wf to achieve a more efficient encoding. Specifically, we know that Wf achieves 1.0 accuracy on Bf but only 1− accuracy on X . Thus it is sufficient to encode the elements of Bf using a smaller subset of X (the elements classified correctly by Wf , which has size at most (1− ) |X|). This allows us to significantly compress Bf . Next, we can use Bf and Wf together with the reversibility of SGD to retrieve Wf−1. We can now repeat the above argument to compress Bf−1 and so on, until we are able to reconstruct all of the random bits used to generate the permutation of X in the epoch. This will result in a linear reduction in the number of bits required for the encoding.
In our analysis, we show a generalized version of the scenario above. We show that high accuracy discrepancy implies that entropy compression occurs. For our second result, we consider a modified SGD algorithm that instead of performing a single GD step per batch, first perturbs the batch with a limited amount of randomness and then performs GD until a desired accuracy on the batch is reached. We assume towards contradiction that GD can always reach the desired accuracy on the batch in subexponential time. This forces the accuracy discrepancy to be high, which guarantees that we always find a model with good accuracy. Applying this reasoning to models of sublinear size and data with random labels we arrive at a contradiction, as such models cannot achieve good accuracy on the data. This implies that when we limit the amount of randomness GD can use for perturbations, there must exist instances where GD requires exponential running time to achieve good accuracy.
Related work There has been a long line of research proving convergence bounds for SGD under various simplifying assumptions such as: linear networks (Arora et al., 2019; 2018), shallow networks (Safran and Shamir, 2018; Du and Lee, 2018; Oymak and Soltanolkotabi, 2019), etc. However, the most general results are the ones dealing with deep, overparameterized networks (Du et al., 2019; Allen-Zhu et al., 2019; Zou et al., 2018; Zou and Gu, 2019). All of these works make use of NTK (Neural Tangent Kernel)(Jacot et al., 2018) and show global convergence guarantees for SGD when the hidden layers have width at least poly(n,L) where n is the size of the dataset and L is the depth of the network. We note that the exponents of the polynomials are quite large.
A recent line of work by Zhang et al. (2022) notes that in many real world scenarios models do not converge to stationary points. They instead take a different approach which, similar to us, studies the dynamics of neural networks. They show that under certain assumptions (e.g., considering a fully connected architecture with sub-differentiable and coordinate-wise Lipschitz activations and weights laying on a compact set) the change in training loss gradually converges to 0, even if the full gradient norms do not vanish.
In (Du et al., 2017) it was shown that GD can take exponential time to escape saddle points, even under random initialization. They provide a highly engineered instance, while our results hold for many model classes of interest. Jin et al. (2017) show that adding perturbations during the executions of GD guarantees that it escapes saddle points. This is done by occasionally perturbing the parameters within a ball of radius r, where r depends on the properties of the function to be optimized. Therefore, a single perturbation must require an amount of randomness linear in the number of parameters.
2 PRELIMINARIES
We consider the following optimization problem. We are given an input (dataset) of size n. Let us denote X = {x_i}_{i=1}^n (our inputs contain both data and labels; we do not need to distinguish them for this work). We also associate every x ∈ X with a unique id of ⌈log n⌉ bits. We often consider batches of the input B ⊂ X. The size of the batch is denoted by b (all batches have the same size). We have some model whose parameters are denoted by W ∈ R^d, where d is the model dimension. We aim to optimize a goal function of the following type: f(W) = (1/n) ∑_{x∈X} f_x(W), where the functions f_x : R^d → R are completely determined by x ∈ X. We also define for every set A ⊆ X: f_A(W) = (1/|A|) ∑_{x∈A} f_x(W). Note that f_X = f.
We denote by acc(W, A) : R^d × 2^X → [0, 1] the accuracy of model W on the set A ⊆ X (where we use W to classify elements from X). Note that for x ∈ X it holds that acc(W, x) is a binary value indicating whether x is classified correctly or not. We require that every f_x is differentiable and L-smooth: ∀W_1, W_2 ∈ R^d, ‖∇f_x(W_1) − ∇f_x(W_2)‖ ≤ L‖W_1 − W_2‖. This implies that every f_A is also differentiable and L-smooth. To see this consider the following:

‖∇f_A(W_1) − ∇f_A(W_2)‖ = ‖(1/|A|) ∑_{x∈A} ∇f_x(W_1) − (1/|A|) ∑_{x∈A} ∇f_x(W_2)‖
= (1/|A|) ‖∑_{x∈A} (∇f_x(W_1) − ∇f_x(W_2))‖ ≤ (1/|A|) ∑_{x∈A} ‖∇f_x(W_1) − ∇f_x(W_2)‖ ≤ L‖W_1 − W_2‖
We state another useful property of fA:
Lemma 2.1. Let W_1, W_2 ∈ R^d and α < 1/L. For any A ⊆ X, if it holds that W_1 − α∇f_A(W_1) = W_2 − α∇f_A(W_2) then W_1 = W_2.

Proof. Rearranging the terms we get that W_1 − W_2 = α∇f_A(W_1) − α∇f_A(W_2). Now let us consider the norm of both sides: ‖W_1 − W_2‖ = ‖α∇f_A(W_1) − α∇f_A(W_2)‖ ≤ α·L‖W_1 − W_2‖. Since αL < 1, unless W_1 = W_2 this yields the strict inequality ‖W_1 − W_2‖ < ‖W_1 − W_2‖, a contradiction.
The above means that for a sufficiently small gradient step, the gradient descent process is reversible. That is, we can always recover the previous model parameters given the current ones, assuming that the batch is fixed. We use the notion of reversibility throughout this paper. However, in practice we only have finite precision, thus instead of R we work with the finite set F ⊂ R. Furthermore, due to numerical stability issues, we do not have access to exact gradients, but only to approximate values ∇̂f_A. For the rest of this paper, we assume these values are L-smooth on all elements in F^d. That is,

∀W_1, W_2 ∈ F^d, A ⊆ X, ‖∇̂f_A(W_1) − ∇̂f_A(W_2)‖ ≤ L‖W_1 − W_2‖
This immediately implies that Lemma 2.1 holds even when precision is limited. Let us state the following theorem:
Theorem 2.2. Let W_1, W_2, ..., W_k ∈ F^d ⊂ R^d, A_1, A_2, ..., A_k ⊆ X and α < 1/L. If it holds that W_i = W_{i−1} − α∇̂f_{A_{i−1}}(W_{i−1}), then given A_1, A_2, ..., A_{k−1} and W_k we can retrieve W_1.

Proof. Given W_k we iterate over all W ∈ F^d until we find W such that W_k = W − α∇̂f_{A_{k−1}}(W). Using Lemma 2.1, there is only a single element for which this equality holds, and thus W = W_{k−1}. We repeat this process until we retrieve W_1.
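To make the search in the proof concrete, the following is a minimal numerical sketch (not taken from the paper): a one-dimensional quadratic loss, which is 2-smooth, a gradient step with α < 1/L, and recovery of the previous parameter by scanning a finite grid that plays the role of F^d. The loss, grid, and tolerance are illustrative assumptions.

```python
import numpy as np

# Sketch of Theorem 2.2: for an L-smooth loss and step size alpha < 1/L, the
# map w -> w - alpha * grad(w) is injective, so one GD step can be inverted
# by exhaustive search over a finite parameter set (a stand-in for F^d).
def grad(w):
    return 2.0 * (w - 3.0)            # gradient of f(w) = (w - 3)^2, so L = 2

alpha = 0.25                          # alpha < 1/L = 0.5
F = np.arange(-10.0, 10.0, 1e-3)      # finite set of representable parameters

w_prev = F[11250]                     # some parameter value taken from F
w_next = w_prev - alpha * grad(w_prev)            # forward GD step

# Invert the step: find every w in F that maps to w_next under one GD step.
recovered = F[np.isclose(F - alpha * grad(F), w_next, atol=1e-9)]
print(w_prev, recovered)              # 'recovered' contains only w_prev
```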
SGD We analyze the classic SGD algorithm presented in Algorithm 1. One difference to note in our algorithm, compared to the standard implementation, is the termination condition when the accuracy on the dataset is 100%. In practice the termination condition is not used; we only use it to prove that at some point in time the accuracy of the model is 100%.
Algorithm 1: SGD
1 i ← 1 // epoch counter
2 W_{1,1} is an initial model
3 while True do
4   Take a random permutation of X, divided into batches {B_{i,j}}_{j=1}^{n/b}
5   for j from 1 to n/b do
6     if acc(W_{i,j}, X) = 1 then Return W_{i,j}
7     W_{i,j+1} ← W_{i,j} − α∇f_{B_{i,j}}(W_{i,j})
8   i ← i + 1, W_{i,1} ← W_{i−1,n/b+1}
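As a reading aid, a direct Python transcription of Algorithm 1 might look as follows; `grad_loss` and `accuracy` are hypothetical callables standing in for the batch gradient ∇f_B and acc(W, ·), and are not specified by the paper.

```python
import numpy as np

def sgd(W, X, grad_loss, accuracy, alpha, b, rng):
    """Algorithm 1: vanilla SGD with a termination check in every inner iteration.

    W         -- initial parameter vector (numpy array)
    X         -- the full dataset, a list of (example, label) pairs
    grad_loss -- grad_loss(W, batch): average gradient of the loss on a batch
    accuracy  -- accuracy(W, subset): value in [0, 1]
    """
    n = len(X)
    while True:                                   # epoch loop
        order = rng.permutation(n)                # fresh random permutation of X
        for j in range(n // b):                   # iterate over the batches
            if accuracy(W, X) == 1.0:             # termination condition
                return W
            batch = [X[k] for k in order[j * b:(j + 1) * b]]
            W = W - alpha * grad_loss(W, batch)   # one GD step on the batch

# rng would be, e.g., np.random.default_rng(0)
```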
Kolmogorov complexity The Kolmogorov complexity of a string x ∈ {0, 1}∗, denoted by K(x), is defined as the size of the smallest prefix Turing machine which outputs this string. We note that this definition depends on which encoding of Turing machines we use. However, one can show that this will only change the Kolmogorov complexity by a constant factor (Li and Vitányi, 2019).
We also use the notion of conditional Kolmogorov complexity, denoted by K(x | y). This is the length of the shortest prefix Turing machine which gets y as an auxiliary input and prints x. Note that the length of y does not count towards the size of the machine which outputs x. So it can be the case that |x| ≪ |y| but it holds that K(x | y) < K(x). We can also consider the Kolmogorov complexity of functions. Let g : {0, 1}^* → {0, 1}^*; then K(g) is the size of the smallest Turing machine which computes the function g.
The following properties of Kolmogorov complexity will be of use. Let x, y, z be three strings:
• Extra information: K(x | y, z) ≤ K(x | z) +O(1) ≤ K(x, y | z) +O(1) • Subadditivity: K(xy | z) ≤ K(x | z, y)+K(y | z)+O(1) ≤ K(x | z)+K(y | z)+O(1)
Random strings have the following useful property (Li and Vitányi, 2019):

Theorem 2.3. For an n-bit string x chosen uniformly at random, some string y independent of x (i.e., y is fixed before x is chosen) and any c ∈ N, it holds that Pr[K(x | y) ≥ n − c] ≥ 1 − 1/2^c.
Entropy and KL-divergence Our proofs make extensive use of binary entropy and KL-divergence. In what follows we define these concepts and provide some useful properties.
Entropy: For p ∈ [0, 1] we denote by h(p) = −p log p− (1− p) log(1− p) the entropy of p. Note that h(0) = h(1) = 0.
KL-divergence: For p, q ∈ (0, 1) let DKL(p ‖ q) = p log(p/q) + (1 − p) log((1 − p)/(1 − q)) be the Kullback-Leibler divergence (KL-divergence) between two Bernoulli distributions with parameters p, q. We also extend the above for the case where q, p ∈ {0, 1} as follows: DKL(1 ‖ q) = DKL(0 ‖ q) = 0, DKL(p ‖ 1) = log(1/p), DKL(p ‖ 0) = log(1/(1 − p)). This is just notation that agrees with Lemma 2.4. We also state the following result of Pinsker's inequality applied to Bernoulli random variables: DKL(p ‖ q) ≥ 2(p − q)².

Representing sets Let us state some useful bounds on the Kolmogorov complexity of sets. A more detailed explanation regarding the Kolmogorov complexity of sets and permutations, together with the proof of the lemma below, appears in Appendix A.

Lemma 2.4. Let A ⊆ B, |B| = m, |A| = γm, and let g : B → {0, 1}. For any set Y ⊆ B let Y_1 = {x | x ∈ Y, g(x) = 1}, Y_0 = Y \ Y_1 and κ_Y = |Y_1|/|Y|. It holds that
K(A | B, g) ≤ mγ(log(e/γ)−DKL(κB ‖ κA)) +O(logm)
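As a quick numerical sanity check on these definitions (illustrative only, not part of the argument), one can evaluate the binary entropy, the Bernoulli KL-divergence, and Pinsker's inequality directly:

```python
import numpy as np

def h(p):
    """Binary entropy in bits; h(0) = h(1) = 0."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def dkl(p, q):
    """D_KL between Bernoulli(p) and Bernoulli(q) in bits, for p, q in (0, 1)."""
    return p * np.log2(p / q) + (1 - p) * np.log2((1 - p) / (1 - q))

# Pinsker's inequality for Bernoulli variables: D_KL(p || q) >= 2 (p - q)^2.
for p, q in [(0.9, 0.8), (0.5, 0.1), (0.99, 0.5)]:
    assert dkl(p, q) >= 2 * (p - q) ** 2
    print(f"h({p}) = {h(p):.4f}, D_KL({p} || {q}) = {dkl(p, q):.4f} >= {2 * (p - q) ** 2:.4f}")
```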
3 ACCURACY DISCREPANCY
First, let us define some useful notation (W_{i,j}, B_{i,j} are formally defined in Algorithm 1):

• λ_{i,j} = acc(W_{i,j}, X). This is the accuracy of the model in epoch i on the entire dataset X, before performing the GD step on batch j.
• ϕ_{i,j} = acc(W_{i,j}, B_{i,j−1}). This is the accuracy of the model on the (j − 1)-th batch in the i-th epoch, after performing the GD step on that batch.
• X_{i,j} = ∪_{k=1}^{j} B_{i,k} (note that ∀i, X_{i,0} = ∅ and X_{i,n/b} = X). This is the set of elements in the first j batches of epoch i. Let us also denote n_j = |X_{i,j}| = jb (note that ∀j, i_1, i_2, |X_{i_1,j}| = |X_{i_2,j}|, thus i need not appear in the subscript).
• λ′_{i,j} = acc(W_{i,j}, X_{i,j−1}) and λ′′_{i,j} = acc(W_{i,j}, X \ X_{i,j−1}), where λ′_{i,j} is the accuracy of the model on the set of all previously seen batch elements, after performing the GD step on the (j − 1)-th batch, and λ′′_{i,j} is the accuracy of the same model on all remaining elements (j-th batch onward). To avoid computing the accuracy on empty sets, λ′_{i,j} is defined for j ∈ [2, n/b + 1] and λ′′_{i,j} is defined for j ∈ [1, n/b].
• ρ_{i,j} = DKL(λ′_{i,j} ‖ ϕ_{i,j}) is the accuracy discrepancy for the j-th batch in iteration i, and ρ_i = ∑_{j=2}^{n/b+1} ρ_{i,j} is the accuracy discrepancy at iteration i.
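The quantities above are straightforward to log during training. The sketch below, with a hypothetical `accuracy` callable and a hypothetical `gd_step` update (neither is code from the paper), tracks ϕ_{i,j} and λ′_{i,j} and accumulates ρ_i over one epoch:

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-12):
    """D_KL(p || q) for Bernoulli parameters, clipped away from 0 and 1."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log2(p / q) + (1 - p) * np.log2((1 - p) / (1 - q))

def epoch_discrepancy(W, batches, gd_step, accuracy):
    """Run one epoch and return (W, rho) with rho = sum_j D_KL(lambda'_{i,j} || phi_{i,j}).

    batches  -- the epoch's batches, in the order they are visited
    gd_step  -- gd_step(W, batch): parameters after one GD step on the batch
    accuracy -- accuracy(W, subset): value in [0, 1]
    """
    rho, seen = 0.0, []
    for batch in batches:
        W = gd_step(W, batch)          # GD step on the current batch
        seen.extend(batch)             # all batch elements seen so far
        phi = accuracy(W, batch)       # phi: accuracy on the batch just used
        lam_prime = accuracy(W, seen)  # lambda': accuracy on all seen elements
        rho += bernoulli_kl(lam_prime, phi)
    return W, rho
```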
In our analysis, we consider t epochs of the SGD algorithm. Our goal for this section is to derive a connection between ∑_{i=1}^{t} ρ_i and t.

Bounding t: Our goal is to use the entropy compression argument to show that if ∑_{i=1}^{t} ρ_i is sufficiently large we can bound t. Let us start by formally defining the random bits which the algorithm uses. Let r_i be the string of random bits representing the random permutation of X at epoch i. As we consider t epochs, let r = r_1 r_2 . . . r_t.
Note that the number of bits required to represent an arbitrary permutation of [n] is given by: ⌈log(n!)⌉ = n log n − n log e + O(log n) = n log(n/e) + O(log n), where in the above we used Stirling's approximation. Thus, it holds that |r| = t(n log(n/e) + O(log n)) and, according to Theorem 2.3, with probability at least 1 − 1/n² it holds that K(r) ≥ tn log(n/e) − O(log n). In the following lemma we show how to use the model at every iteration to efficiently reconstruct the batch at that iteration, where the efficiency of reconstruction is expressed via ρ_i.

Lemma 3.1. It holds w.h.p. that ∀i ∈ [t]: K(r_i | W_{i+1,1}, X) ≤ n log(n/e) − bρ_i + (n/b)·O(log n)
Proof. Recall that B_{i,j} is the j-th batch in the i-th epoch, and let P_{i,j} be a permutation of B_{i,j} such that the order of the elements in B_{i,j} under P_{i,j} is the same as under r_i. Note that given X, if we know the partition into batches and all permutations, we can reconstruct r_i. According to Theorem 2.2, given W_{i,j} and B_{i,j−1} we can compute W_{i,j−1}. Let us denote by Y the encoding of this procedure. To implement Y we need to iterate over all possible vectors in F^d and over batch elements to compute the gradients. To express this program we require auxiliary variables of size at most O(log min{d, b}) = O(log n). Thus it holds that K(Y) = O(log n). Let us abbreviate B_{i,1}, B_{i,2}, ..., B_{i,j} as (B_{i,k})_{k=1}^{j}. We write the following.

K(r_i | X, W_{i+1,1}) ≤ K(r_i, Y | X, W_{i+1,1}) + O(1) ≤ K(r_i | X, W_{i+1,1}, Y) + K(Y | X, W_{i+1,1}) + O(1)
≤ O(log n) + K((B_{i,k}, P_{i,k})_{k=1}^{n/b} | X, W_{i+1,1}, Y)
≤ O(log n) + K((B_{i,k})_{k=1}^{n/b} | X, W_{i+1,1}, Y) + K((P_{i,k})_{k=1}^{n/b} | X, W_{i+1,1}, Y)
≤ O(log n) + K((B_{i,k})_{k=1}^{n/b} | X, W_{i+1,1}, Y) + ∑_{j=1}^{n/b} K(P_{i,j})

Let us bound K((B_{i,k})_{k=1}^{n/b} | X, W_{i+1,1}, Y) by repeatedly using the subadditivity and extra information properties of Kolmogorov complexity.

K((B_{i,k})_{k=1}^{n/b} | X, Y, W_{i+1,1})
≤ K(B_{i,n/b} | X, W_{i+1,1}) + K((B_{i,k})_{k=1}^{n/b−1} | X, Y, W_{i+1,1}, B_{i,n/b}) + O(1)
≤ K(B_{i,n/b} | X, W_{i+1,1}) + K((B_{i,k})_{k=1}^{n/b−1} | X, Y, W_{i,n/b}, B_{i,n/b}) + O(1)
≤ K(B_{i,n/b} | X, W_{i+1,1}) + K(B_{i,n/b−1} | X, W_{i,n/b}, B_{i,n/b}) + K((B_{i,k})_{k=1}^{n/b−2} | X, Y, W_{i,n/b−1}, B_{i,n/b}, B_{i,n/b−1}) + O(1)
≤ ... ≤ O(n/b) + ∑_{j=1}^{n/b} K(B_{i,j} | X, W_{i,j+1}, (B_{i,k})_{k=j+1}^{n/b}) ≤ O(n/b) + ∑_{j=1}^{n/b} K(B_{i,j} | X_{i,j}, W_{i,j+1})

where in the transitions we used the fact that given W_{i,j}, B_{i,j−1} and Y we can retrieve W_{i,j−1}. That is, we can always bound K(... | Y, W_{i,j}, B_{i,j−1}, ...) by K(... | Y, W_{i,j−1}, B_{i,j−1}, ...) + O(1). To encode the order P_{i,j} inside each batch, b log(b/e) + O(log b) bits are sufficient. Finally we get that: K(r_i | X, W_{i+1,1}) ≤ O(n/b) + ∑_{j=1}^{n/b} [K(B_{i,j} | X_{i,j}, W_{i,j+1}) + b log(b/e) + O(log b)].
Let us now bound K(B_{i,j−1} | X_{i,j−1}, W_{i,j}). Knowing X_{i,j−1} we know that B_{i,j−1} ⊆ X_{i,j−1}. Thus we need to use W_{i,j} to compress B_{i,j−1}. We apply Lemma 2.4 with parameters A = B_{i,j−1}, B = X_{i,j−1}, γ = b/n_{j−1}, κ_A = ϕ_{i,j}, κ_B = λ′_{i,j} and g(x) = acc(W_{i,j}, x). We get the following:

K(B_{i,j−1} | X_{i,j−1}, W_{i,j}) ≤ b(log(e·n_{j−1}/b) − ρ_{i,j}) + O(log n_{j−1})

Adding b log(b/e) + O(log b) to the above, we get the following bound on every element in the sum:

b(log(e·n_{j−1}/b) − ρ_{i,j}) + b log(b/e) + O(log b) + O(log n_{j−1}) ≤ b log n_{j−1} − bρ_{i,j} + O(log n_{j−1})

Note that the most important term in the sum is −bρ_{i,j}. That is, the more the accuracy of W_{i,j} on the batch B_{i,j−1} differs from the accuracy of W_{i,j} on the set X_{i,j−1} containing the batch, the more efficiently we can represent the batch. Let us now bound the sum ∑_{j=2}^{n/b+1} [b log n_{j−1} − bρ_{i,j} + O(log n_{j−1})]. Let us first bound the sum over b log n_{j−1}:

∑_{j=2}^{n/b+1} b log n_{j−1} = ∑_{j=1}^{n/b} b log(jb) = ∑_{j=1}^{n/b} b(log b + log j) = n log b + b log((n/b)!) = n log b + n log(n/(b·e)) + O(log n) = n log(n/e) + O(log n)

Finally, we can write that:

K(r_i | X, W_{i+1,1}) ≤ O(n/b) + ∑_{j=2}^{n/b+1} [b log n_{j−1} − bρ_{i,j} + O(log n)] ≤ n log(n/e) − bρ_i + (n/b)·O(log n)
Using the above we know that when the value ρ_i is sufficiently high, the random permutation of the epoch can be compressed. We use the fact that random strings are incompressible to bound (1/t) ∑_{i=1}^{t} ρ_i.

Theorem 3.2. If the algorithm does not terminate by the t-th iteration, then it holds w.h.p. that ∀t, (1/t) ∑_{i=1}^{t} ρ_i ≤ O(n log n / b²).
Proof. Using arguments similar to Lemma 3.1, we can show that K(r, W_{1,1} | X) ≤ K(W_{t+1,1}) + O(t) + ∑_{k=1}^{t} K(r_k | X, W_{k+1,1}) (formally proved in Lemma A.3). Combining this with Lemma 3.1, we get that K(r, W_{1,1} | X) ≤ K(W_{t+1,1}) + ∑_{i=1}^{t} [n log(n/e) + (n/b)·O(log n) + O(log n) − bρ_i]. Our proof implies that we can reconstruct not only r, but also W_{1,1} using X, W_{t+1,1}. Due to the incompressibility of random strings, we get that w.h.p. K(r, W_{1,1} | X) ≥ d + tn log(n/e) − O(log n). Combining the lower and upper bound for K(r, W_{1,1} | X) we get the following inequality:

d + tn log(n/e) − O(log n) ≤ d + t[n log(n/e) + (n/b)·O(log n) + O(log n)] − ∑_{i=1}^{t} bρ_i   (1)

⟹ (1/t) ∑_{i=1}^{t} ρ_i ≤ [(n·O(log n))/b² + O(log n)/b] + O(log n)/(bt) = O(n log n / b²)
Let β(n, b) be the exact value of the bracketed expression above (the first two terms of the final bound in Inequality 1). Theorem 3.2 says that as long as SGD does not terminate, the average accuracy discrepancy cannot be too high. Using the contrapositive we get the following useful corollary (proof is deferred to Appendix A.3).
Corollary 3.3. If ∀k, (1/k) ∑_{i=1}^{k} ρ_i > β(n, b) + γ for γ = Ω(b^{−1} log n), then w.h.p. SGD terminates within O(1) epochs.
The case for weak models Using the above we can also derive some interesting negative results when the model is not expressive enough to get perfect accuracy on the data. Since in this case SGD can never terminate with perfect accuracy, it must be the case that the average accuracy discrepancy tends below β(n, b) over time. We verify this experimentally on the MNIST dataset (Appendix B), showing that the average accuracy discrepancy indeed drops over time when the model is weak compared to the dataset. We also confirm that the dependence of the threshold on b is indeed inversely quadratic.
4 THE ROLE OF RANDOMNESS IN GD INITIALIZATION
Our goal for this section is to show that when the amount of randomness in the perturbation is too small, for any model architecture which is differentiable and L-smooth there are inputs for which Algorithm 2 requires exponential time to terminate, even for extremely overparameterized models.
Perturbation families Let us consider a family of 2^ℓ functions indexed by length-ℓ real-valued vectors, Ψ_ℓ = {ψ_z}_{z∈R^ℓ}. Recall that throughout this paper we assume finite precision, thus every z can be represented using O(ℓ) bits. We say that Ψ_ℓ is a reversible perturbation family if it holds that ∀z ∈ R^ℓ, ψ_z is one-to-one. We often use the notation Ψ_ℓ(W), which means pick z ∈ R^ℓ uniformly at random, and apply ψ_z(W). We often refer to Ψ_ℓ as simply a perturbation.

We note that the above captures a wide range of natural perturbations. For example ψ_z(W) = W + W_z where W_z[i] = z[i mod ℓ]. Clearly ψ_z(W) is reversible.
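This tiling perturbation can be written out explicitly; the sketch below is illustrative (the parameter dimension, ℓ, and seed are arbitrary choices, not values from the paper):

```python
import numpy as np

def perturb(W, z):
    """psi_z(W) = W + W_z, where W_z[i] = z[i mod ell]."""
    Wz = np.array([z[i % len(z)] for i in range(len(W))])
    return W + Wz

def unperturb(W_pert, z):
    """Inverse of psi_z: recover W from psi_z(W) and z, showing reversibility."""
    Wz = np.array([z[i % len(z)] for i in range(len(W_pert))])
    return W_pert - Wz

rng = np.random.default_rng(0)
W = rng.standard_normal(10)      # a model with d = 10 parameters
z = rng.standard_normal(3)       # ell = 3 random values, i.e. O(ell) random bits
assert np.allclose(unperturb(perturb(W, z), z), W)
```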
Gradient descent The GD algorithm we analyze is formally given in Algorithm 2.
Algorithm 2: GD(W, Y, δ)
Input: initial model W, dataset Y, desired accuracy δ
1 i = 1, T = o(2^m) + poly(d)
2 W = Ψ_ℓ(W)
3 while acc(W, Y) < δ and i < T do
4   W ← W − α∇f_Y(W)
5   i ← i + 1
6 Return W
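For readability, a direct Python transcription of Algorithm 2 could read as follows, reusing the hypothetical `grad_loss` and `accuracy` callables from the earlier sketches and the `perturb` function above:

```python
def gd(W, Y, delta, grad_loss, accuracy, alpha, perturb, z, T):
    """Algorithm 2: perturb once with the limited randomness z, then run
    full-batch GD on Y until accuracy delta is reached or the cap T is hit."""
    W = perturb(W, z)                      # Psi_ell(W): all randomness comes from z
    i = 1
    while accuracy(W, Y) < delta and i < T:
        W = W - alpha * grad_loss(W, Y)    # full-batch GD step on Y
        i += 1
    return W
```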
Let us denote by m the number of elements in Y. We make the following 2 assumptions for the rest of this section: (1) ℓ = o(m). (2) There exists T = o(2^m) + poly(d) and a perturbation family Ψ_ℓ such that for every input W, Y, within T iterations GD terminates and returns a solution that has at least δ accuracy on Y with constant probability. We show that the above two assumptions cannot hold together. That is, if the amount of randomness is sublinear in m, there must be instances with exponential running time, even when d ≫ m. To show the above, we define a variant of SGD, which uses GD as a sub-procedure (Algorithm 3). Assume that our dataset is a binary classification task (it is easy to generalize our results to any number of classes), and that elements in X are assigned random labels. Furthermore, let us assume that d = o(n), e.g., d = n^{0.99}. It holds that w.h.p. we cannot train a model with d parameters that achieves any accuracy better than 1/2 + o(1) on X (Lemma A.4). Let us take ε to be a small constant. We show that if assumptions 1 and 2 hold, then Algorithm 3 must terminate and return a model with 1/2 + Θ(1) accuracy on X, leading to a contradiction. Our analysis follows the same line as the previous section, and uses the same notation.
Reversibility First, we must show that Algorithm 3 is still reversible. Note that we can take the same approach as before, where the only difference is that in order to get W_{i,j} from W_{i,j+1} we must now get all the intermediate values from the call to GD. As the GD steps are applied to the same batch, this amounts to applying Lemma 2.1 several times instead of once per iteration. More specifically, we must encode for every batch a number T_{i,j} = o(2^b) + poly(d) = o(2^b) + poly(n) (recall that d = o(n)) and apply Lemma 2.1 T_{i,j} times. This results in ψ_z(W_{i,j}). If we know z and Ψ_ℓ then we can retrieve ψ_z and efficiently retrieve W_{i,j} using only O(log d) = O(log n) additional bits (by iterating over all values in F^d).

Algorithm 3: SGD'
1 i ← 1 // epoch counter
2 W_{1,1} is an initial model
3 while True do
4   Take a random permutation of X, divided into batches {B_{i,j}}_{j=1}^{n/b}
5   for j from 1 to n/b do
6     if acc(W_{i,j}, X) ≥ 1/(2(1−ε)) then Return W_{i,j}
7     W_{i,j+1} ← GD(W_{i,j}, B_{i,j}, 1/(2(1−2ε)))
8   i ← i + 1, W_{i,1} ← W_{i−1,n/b+1}

Therefore, in every iteration we have the following additional terms: log T + O(log n) + ℓ = o(b) + O(log n). Summing over n/b iterations we get o(n) per epoch. We state the following lemma, analogous to Lemma 3.1.

Lemma 4.1. For Algorithm 3 it holds w.h.p. that ∀i ∈ [t]: K(r_i | W_{i+1,1}, X, Ψ_ℓ) ≤ n log(n/e) − bρ_i + β(n, b) + o(n).
We show that under our assumptions, Algorithm 3 must terminate, leading to a contradiction.

Lemma 4.2. Algorithm 3 with b = Ω(log n) terminates within O(T) iterations w.h.p.
Proof. Our goal is to lower bound ρ_i = ∑_{j=2}^{n/b+1} DKL(λ′_{i,j} ‖ ϕ_{i,j}). Let us first upper bound λ′_{i,j}. Using the fact that λ′_{i,j} ≤ nλ_{i,j}/((j−1)b) (Lemma A.5), combined with the fact that λ_{i,j} ≤ 1/(2(1−ε)) as long as the algorithm does not terminate, we get that ∀j ∈ [2, n/b+1] it holds that λ′_{i,j} ≤ n/(2(1−ε)(j−1)b). Using the above we conclude that as long as we do not terminate it must hold that λ′_{i,j} ≤ 1/(2(1−ε)²) whenever j ∈ I = [(1−ε)n/b + 1, n/b + 1]. That is, λ′_{i,j} must be close to λ_{i,j} towards the end of the epoch, and therefore must be sufficiently small. Note that |I| ≥ εn/b. We know that as long as the algorithm does not terminate it holds that ϕ_{i,j} > 1/(2(1−2ε)) with some constant probability. Furthermore, this probability is taken over the randomness used in the call to GD (the randomness of the perturbation). This fact allows us to use Hoeffding-type bounds for the ϕ_{i,j} variables. If ϕ_{i,j} > 1/(2(1−2ε)) we say that it is good. Therefore in expectation a constant fraction of ϕ_{i,j}, j ∈ I are good. Applying a Hoeffding-type bound we get that w.h.p. a constant fraction of ϕ_{i,j}, j ∈ I are good. Denote these good indices by I_g ⊆ I. We are now ready to bound ρ_i.

ρ_i = ∑_{j=2}^{n/b+1} DKL(λ′_{i,j} ‖ ϕ_{i,j}) ≥ ∑_{j∈I_g} DKL(λ′_{i,j} ‖ ϕ_{i,j}) ≥ ∑_{j∈I_g} DKL(1/(2(1−ε)²) ‖ 1/(2(1−2ε)))
≥ Θ(εn/b) · (1/(2(1−2ε)) − 1/(2(1−ε)²))² = Θ(n/b) · ε⁵ = Θ(n/b)

where in the transitions we used the fact that KL-divergence is non-negative, and Pinsker's inequality. Finally, requiring that b = Ω(log n) we get that bρ_i − β(n, b) − o(n) = Θ(n) − Θ(n log n / log² n) − o(n) = Θ(n). Following the same calculation as in Corollary 3.3, this guarantees termination within O(log n / n) epochs, or O(T · (n/b) · (log n / n)) = O(T) iterations (gradient descent steps).
The above leads to a contradiction. It is critical to note that the above does not hold if T = 2^m = 2^b or if ℓ = Θ(n), as both would imply that the o(n) term becomes Θ(n). We state our main theorem:

Theorem 4.3. For any differentiable and L-smooth model class with d parameters and a perturbation class Ψ_ℓ such that ℓ = o(m), there exists an input dataset Y of size m such that GD requires Ω(2^m) iterations to achieve δ accuracy on Y, even if δ = 1/2 + Θ(1) and d ≫ m.
A OMITTED PROOFS AND EXPLANATIONS
A.1 REPRESENTING SETS AND PERMUTATIONS
Throughout this paper, we often consider the value K(A) where A is a set. Here the program computing A need only output the elements of A (in any order). When considering K(A | B) such that A ⊆ B, it holds that K(A | B) ≤ ⌈log (|B| choose |A|)⌉ + O(log |B|). To see why, consider Algorithm 4. In the algorithm, i_A is the index of A when considering some ordering of all subsets of B of size |A|. Thus ⌈log (|B| choose |A|)⌉ bits are sufficient to represent i_A. The remaining variables i, m_A, m_B and any additional variables required to construct the set C are all of size at most O(log |B|), and there is at most a constant number of them.

Algorithm 4: Compute A given B as input
1 m_A ← |A|, m_B ← |B|, i ← 0, i_A is a target index
2 for every subset C ⊆ B s.t. |C| = m_A (in a predetermined order) do
3   if i = i_A then Print C
4   i ← i + 1
During our analysis, we often bound the Kolmogorov complexity of tuples of objects. For example, K(A, P | B) where A ⊆ B is a set and P : A → [|A|] is a permutation of A (note that A, P together form an ordered tuple of the elements of A). Instead of explicitly presenting a program such as Algorithm 4, we say that if K(A | B) ≤ c_1 and c_2 bits are sufficient to represent P, then K(A, P | B) ≤ c_1 + c_2 + O(1). This just means that we directly have a variable encoding P in the program that computes A given B, and use it in the code. For example, we can add a permutation to Algorithm 4 and output an ordered tuple of elements rather than a set. Note that when representing a permutation of A, |A| = k, instead of using functions, we can just talk about values of ⌈log k!⌉ bits. That is, we can decide on some predetermined ordering of all permutations of k elements, and represent a permutation as its number in this ordering.
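The index i_A from Algorithm 4 can be realized concretely with a standard ranking/unranking of k-subsets (the combinatorial number system); the functions below are one possible way to do this and are not taken from the paper.

```python
from itertools import combinations
from math import comb

def rank_subset(A, B):
    """Index of subset A among all |A|-subsets of B, in lexicographic order."""
    positions = sorted(B.index(x) for x in A)   # positions of A's elements in B
    k, r, prev = len(positions), 0, -1
    for i, a in enumerate(positions):
        for skipped in range(prev + 1, a):       # subsets whose i-th element is smaller
            r += comb(len(B) - skipped - 1, k - i - 1)
        prev = a
    return r

def unrank_subset(r, B, k):
    """Recover the r-th k-subset of B in lexicographic order (Algorithm 4's loop)."""
    for i, C in enumerate(combinations(B, k)):
        if i == r:
            return list(C)

B = list("abcdef")
A = ["b", "d", "e"]
r = rank_subset(A, B)
print(r, unrank_subset(r, B, len(A)))   # 13 ['b', 'd', 'e'], using about log2 C(6,3) bits
```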
A.2 OMITTED PROOFS FOR SECTION 2
Lemma A.1. For p ∈ [0, 1] it holds that h(p) ≤ p log(e/p).
Proof. Let us write our lemma as:

h(p) = −p log p − (1−p) log(1−p) ≤ p log(e/p)

Rearranging we get:

−(1−p) log(1−p) ≤ p log p + p log(1/p) + p log e ⟹ −(1−p) log(1−p) ≤ p log e ⟹ −ln(1−p) ≤ p/(1−p)

Note that −ln(1−p) = ∫_0^p 1/(1−x) dx ≤ p · 1/(1−p), where in the final transition we use the fact that 1/(1−x) is monotonically increasing on [0, 1]. This completes the proof.
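A quick numerical check of this bound (purely illustrative):

```python
import numpy as np

p = np.linspace(1e-6, 1.0, 10_000)
entropy = -p * np.log2(p) - (1 - p) * np.log2(np.clip(1 - p, 1e-300, None))
bound = p * np.log2(np.e / p)
assert np.all(entropy <= bound + 1e-9)   # h(p) <= p log(e/p) on (0, 1]
print(float(np.max(bound - entropy)), float(np.min(bound - entropy)))
```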
Lemma A.2. For p, γ, q ∈ [0, 1] where pγ ≤ q and (1−p)γ ≤ (1−q), it holds that

q·h(pγ/q) + (1−q)·h((1−p)γ/(1−q)) ≤ h(γ) − γ·DKL(p ‖ q)

Proof. Let us expand the left hand side using the definition of entropy:

q·h(pγ/q) + (1−q)·h((1−p)γ/(1−q))
= −q[(pγ/q) log(pγ/q) + (1 − pγ/q) log(1 − pγ/q)]
 − (1−q)[((1−p)γ/(1−q)) log((1−p)γ/(1−q)) + (1 − (1−p)γ/(1−q)) log(1 − (1−p)γ/(1−q))]
= −(pγ log(pγ/q) + (q − pγ) log((q − pγ)/q))
 − ((1−p)γ log((1−p)γ/(1−q)) + ((1−q) − (1−p)γ) log(((1−q) − (1−p)γ)/(1−q)))
= −γ log γ − γ·DKL(p ‖ q)
 − (q − pγ) log((q − pγ)/q) − ((1−q) − (1−p)γ) log(((1−q) − (1−p)γ)/(1−q))

where in the last equality we simply sum the first terms on both lines. To complete the proof we use the log-sum inequality for the last expression. The log-sum inequality states that: let {a_k}_{k=1}^{m}, {b_k}_{k=1}^{m} be non-negative numbers and let a = ∑_{k=1}^{m} a_k, b = ∑_{k=1}^{m} b_k; then ∑_{k=1}^{m} a_k log(a_k/b_k) ≥ a log(a/b). We apply the log-sum inequality with m = 2, a_1 = q − pγ, a_2 = (1−q) − (1−p)γ, a = 1 − γ and b_1 = q, b_2 = 1 − q, b = 1, getting that:

(q − pγ) log((q − pγ)/q) + ((1−q) − (1−p)γ) log(((1−q) − (1−p)γ)/(1−q)) ≥ (1−γ) log(1−γ)

Putting everything together we get that

−γ log γ − γ·DKL(p ‖ q) − (q − pγ) log((q − pγ)/q) − ((1−q) − (1−p)γ) log(((1−q) − (1−p)γ)/(1−q))
≤ −γ log γ − (1−γ) log(1−γ) − γ·DKL(p ‖ q) = h(γ) − γ·DKL(p ‖ q)
Lemma 2.4. Let A ⊆ B, |B| = m, |A| = γm, and let g : B → {0, 1}. For any set Y ⊆ B let Y_1 = {x | x ∈ Y, g(x) = 1}, Y_0 = Y \ Y_1 and κ_Y = |Y_1|/|Y|. It holds that

K(A | B, g) ≤ mγ(log(e/γ) − DKL(κ_B ‖ κ_A)) + O(log m)

Proof. The algorithm is very similar to Algorithm 4; the main difference is that we must first compute B_1, B_0 from B using g, and select A_1, A_0 from B_1, B_0, respectively, using two indices i_{A_1}, i_{A_0}. Finally we print A = A_1 ∪ A_0. We can now bound the number of bits required to represent i_{A_1}, i_{A_0}. Note that |B_1| = κ_B·m, |B_0| = (1 − κ_B)m. Note that for A_1 we pick γκ_A·m elements from κ_B·m elements and for A_0 we pick γ(1 − κ_A)m elements from (1 − κ_B)m elements. The number of bits required to represent this selection is:

⌈log (κ_B·m choose γκ_A·m)⌉ + ⌈log ((1 − κ_B)m choose γ(1 − κ_A)m)⌉ ≤ κ_B·m·h(γκ_A/κ_B) + (1 − κ_B)m·h(γ(1 − κ_A)/(1 − κ_B))
≤ m(h(γ) − γ·DKL(κ_B ‖ κ_A)) ≤ mγ(log(e/γ) − DKL(κ_B ‖ κ_A))

where in the first inequality we used the fact that ∀ 0 ≤ k ≤ n, log (n choose k) ≤ n·h(k/n), Lemma A.2 in the second transition, and Lemma A.1 in the third transition. Note that when κ_A ∈ {0, 1} we only have one term of the initial sum. For example, for κ_A = 1 we get:

⌈log (κ_B·m choose γκ_A·m)⌉ = ⌈log (κ_B·m choose γm)⌉ ≤ κ_B·m·h(γ/κ_B) ≤ mγ log(eκ_B/γ) = mγ(log(e/γ) − log(1/κ_B))

and a similar computation yields mγ(log(e/γ) − log(1/(1 − κ_B))) for κ_A = 0. Finally, the additional O(log m) factor is due to various counters and variables, similarly to Algorithm 4.
A.3 OMITTED PROOFS FOR SECTION 3
Lemma A.3. It holds that K(r, W_{1,1} | X) ≤ K(W_{t+1,1}) + O(t) + ∑_{k=1}^{t} K(r_k | X, W_{k+1,1}).

Proof. Similarly to the definition of Y in Lemma 3.1, let Y′ be the program which receives X, r_i, W_{i+1,1} as input and repeatedly applies Theorem 2.2 to retrieve W_{i,1}. As Y′ just needs to reconstruct all batches from X, r_i and call Y n/b times, it holds that K(Y′) = O(log n). Using the subadditivity and extra information properties of K(·), together with the fact that W_{1,1} can be reconstructed given X, W_{t+1,1}, Y′, we write the following:

K(r | X) ≤ K(r, W_{1,1}, Y′, W_{t+1,1} | X) + O(1) ≤ K(W_{1,1}, W_{t+1,1}, Y′ | X) + K(r | X, Y′, W_{t+1,1}) + O(1) ≤ K(W_{t+1,1} | X) + K(r | X, Y′, W_{t+1,1}) + O(log n)

First, we note that: ∀i ∈ [t−1], K(r_i | X, Y′, W_{i+2,1}, r_{i+1}) ≤ K(r_i | X, Y′, W_{i+1,1}) + O(1), where in the last inequality we simply execute Y′ on X, W_{i+2,1}, r_{i+1} to get W_{i+1,1}. Let us write:

K(r_1 r_2 . . . r_t | X, Y′, W_{t+1,1}) ≤ K(r_t | X, Y′, W_{t+1,1}) + K(r_1 r_2 . . . r_{t−1} | X, Y′, W_{t+1,1}, r_t) + O(1)
≤ K(r_t | X, W_{t+1,1}) + K(r_1 r_2 . . . r_{t−1} | X, Y′, W_{t,1}) + O(1)
≤ K(r_t | X, W_{t+1,1}) + K(r_{t−1} | X, W_{t,1}) + K(r_1 r_2 . . . r_{t−2} | X, Y′, W_{t−1,1}) + O(1)
≤ · · · ≤ O(t) + ∑_{k=1}^{t} K(r_k | X, W_{k+1,1})

Combining everything together we get that:

K(r | X) ≤ K(W_{t+1,1}) + O(t) + ∑_{k=1}^{t} K(r_k | X, W_{k+1,1})
Corollary 3.3. If ∀k, (1/k) ∑_{i=1}^{k} ρ_i > β(n, b) + γ for γ = Ω(b^{−1} log n), then w.h.p. SGD terminates within O(1) epochs.

Proof. Let us simplify Inequality 1.

d + tn log(n/e) − O(log n) ≤ d + t[n log(n/e) + (n/b)·O(log n) + O(log n)] − ∑_{i=1}^{t} bρ_i
⟹ −O(log n) ≤ t[(n/b)·O(log n) + O(log n)] − ∑_{i=1}^{t} bρ_i
⟹ (∑_{i=1}^{t} ρ_i) − tβ(n, b) ≤ O(log n)/b

Our condition implies that ∑_{i=1}^{t} ρ_i > t(β(n, b) + γ). This allows us to rewrite the above inequality as:

tγ ≤ O(log n)/b ⟹ t = O(1)
A.4 OMITTED PROOFS FOR SECTION 4
Lemma A.4. Let X be some set of size n and let f : X → {0, 1} be a random binary function. It holds w.h.p that there exists no function g : X → {0, 1} such that K(g | X) = o(n) and g agrees with f on n(1/2 + Θ(1)) elements in X .
Proof. Let us assume that g agrees with f on all except εn elements in X and bound ε. Using Theorem 2.3, it holds w.h.p. that K(f | X) > n − O(log n). We show that if ε is sufficiently far from 1/2, we can use g to compress f below its Kolmogorov complexity, arriving at a contradiction.

We can construct f using g and the set of values on which they do not agree, which we denote by D. This set is of size εn and therefore can be encoded using log (n choose εn) ≤ n·h(ε) bits given X (recall that ∀ 0 ≤ k ≤ n, log (n choose k) ≤ n·h(k/n)); that is, K(D | X) ≤ n·h(ε). To compute f(x) using D, g we simply check if x ∈ D and output g(x) or 1 − g(x) accordingly. The total number of bits required for the above is K(g, D | X) ≤ o(n) + n·h(ε) (where auxiliary variables are subsumed in the o(n) term). We conclude that K(f | X) ≤ o(n) + n·h(ε). Combining the upper and lower bounds on K(f | X), it must hold that o(n) + n·h(ε) ≥ n − O(log n) ⟹ h(ε) ≥ 1 − o(1). This inequality only holds when ε = 1/2 + o(1).
Lemma A.5. It holds that 1 − n(1 − λ_{i,j})/((j−1)b) ≤ λ′_{i,j} ≤ nλ_{i,j}/((j−1)b).

Proof. We can write the following for j ∈ [2, n/b + 1]:

nλ_{i,j} = ∑_{x∈X} acc(W_{i,j}, x) = ∑_{x∈X_{i,j−1}} acc(W_{i,j}, x) + ∑_{x∈X\X_{i,j−1}} acc(W_{i,j}, x) = (j−1)b·λ′_{i,j} + (n − (j−1)b)·λ′′_{i,j}
⟹ λ′_{i,j} = [nλ_{i,j} − (n − (j−1)b)·λ′′_{i,j}] / ((j−1)b)

Setting λ′′_{i,j} = 0 we get

λ′_{i,j} = [nλ_{i,j} − (n − (j−1)b)·λ′′_{i,j}] / ((j−1)b) ≤ nλ_{i,j}/((j−1)b)

and setting λ′′_{i,j} = 1 we get

λ′_{i,j} = [nλ_{i,j} − (n − (j−1)b)·λ′′_{i,j}] / ((j−1)b) ≥ 1 − n(1 − λ_{i,j})/((j−1)b)
B EXPERIMENTS
Experimental setup We perform experiments on the MNIST dataset and on the same dataset with random labels (MNIST-RAND). We use SGD with learning rate 0.01, without momentum or regularization. We use a simple fully connected architecture with a single hidden layer, GELU activation units (a differentiable alternative to ReLU) and cross-entropy loss. We run experiments with a hidden layer of size 2, 5, 10. We consider batches of size 50, 100, 200. For each of the datasets we run experiments for all configurations of architecture sizes and batch sizes for 300 epochs.
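A minimal PyTorch sketch of one such configuration is given below; the hidden size and batch size shown are just one point from the grid above, and the data-loading details are illustrative rather than the paper's exact code. For MNIST-RAND one would additionally replace the labels with uniformly random ones.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

hidden, batch_size = 10, 100                        # one configuration from the grid
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, hidden),
                      nn.GELU(), nn.Linear(hidden, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)  # no momentum, no regularization
loss_fn = nn.CrossEntropyLoss()

train = datasets.MNIST(".", train=True, download=True,
                       transform=transforms.ToTensor())
loader = DataLoader(train, batch_size=batch_size, shuffle=True)

for epoch in range(300):
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```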
Results Figure 2 and Figure 3 show the accuracy discrepancy and accuracy over epochs for all configurations for MNIST and MNIST-RAND respectively. Figure 4 and Figure 5 show, for every batch size, the accuracy discrepancy of all three model sizes on the same plot. All of the values displayed are averaged over epochs, i.e., the value for epoch t is (1/t) ∑_{i=1}^{t} x_i.
First, we indeed observe that the scale of the accuracy discrepancy is inversely quadratic in the batch size, as our analysis suggests. Second, for MNIST-RAND we can clearly see that the average accuracy discrepancy tends below a certain threshold over time, where the threshold appears to be independent of the number of model parameters. We see similar results for MNIST when the model is small, but not when it is large. This is because the model does not reach its capacity within the timeframe of our experiment.

1. What is the focus of the paper regarding stochastic gradient descent?
2. What are the strengths of the proposed approach, particularly in its application of a powerful idea from another field?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. Do you have any concerns or suggestions regarding the presentation and notation usage in the paper?
5. Are there any minor errors or typos that need to be addressed in the paper?

Summary Of The Paper
This paper is about analyzing the behavior of stochastic gradient descent (SGD) using Kolmogorov complexity.
The authors are using an “entropy compression argument” to prove the termination (and convergence) of SGD under weaker assumptions, as well as to provide a lower bound on the amount of randomness needed for SGD to escape local minima using perturbations. The main argument originates from a constructive proof of the Lovasz Local Lemma by Moser, and the idea is that: “Suppose that by looking at the execution of a randomized algorithm, one is able to produce a string shorter than the string of random bits that the algorithm used, while it is possible to use that shorter string in order to reconstruct the input random string; then the algorithm should terminate as soon as that happens, since it is not possible to compress a string of i.i.d. random bits.” For the first result, the data is split into batches and the algorithm terminates when the accuracy reaches some value < 1 (e.g., 90%), while for each batch the accuracy is 100%. This allows a more efficient “encoding” of the batch samples since only the subset of correctly classified data could be used. This enables the entropy compression argument to go through and prove termination of the SGD algorithm. For the second result, it is assumed that the algorithm used ℓ uniformly random bits to choose among 2^ℓ different perturbations. The authors use a similar argument of random string reconstruction and show that, using a number of random bits sublinear in the size of the dataset, it is not possible for SGD to terminate in polynomial time with non-trivial accuracy.
Strengths And Weaknesses
Strengths:
The paper proposes a novel use of a powerful idea which, even though it comes from a different area, is demonstrated to potentially have interesting applications in optimization and machine learning.
It is shown that this technique can be used both for positive and negative results.
Weaknesses
The presentation of the paper could be improved. For example, the results need to be more clearly stated in the introduction and I would also suggest that the main results are stated earlier in each of the sections.
Minor comments:
Page 5, second bullet: there is a comma missing in the lhs.
In section 4: I am a bit confused with the use of m and n. It seems that they should both mean the size of the dataset. So, the notation needs to be merged.
Clarity, Quality, Novelty And Reproducibility
The main ideas are quite novel and clearly explained. However, the presentation in the technical parts should be improved.
The crux of this paper is to show that when the accuracy discrepancy is high for a certain epoch, the batches can indeed be compressed. To exemplify our techniques let us consider the scenario where, in every epoch, just after a single GD step on a batch we consistently achieve perfect accuracy on the batch. Let us consider some epoch of our execution, assume we have access to X , and let Wf be the model at the end of the epoch. If the algorithm did not terminate, then Wf has accuracy at most 1− on the entire dataset (assume for simplicity that is a constant). Our goal is to retrieve the last batch of the epoch, Bf ⊂ X (without knowing the permutation of the data for the epoch). A naive approach would be to simply encode the indices in X of the elements in the batch. However, we can use Wf to achieve a more efficient encoding. Specifically, we know that Wf achieves 1.0 accuracy on Bf but only 1− accuracy on X . Thus it is sufficient to encode the elements of Bf using a smaller subset of X (the elements classified correctly by Wf , which has size at most (1− ) |X|). This allows us to significantly compress Bf . Next, we can use Bf and Wf together with the reversibility of SGD to retrieve Wf−1. We can now repeat the above argument to compress Bf−1 and so on, until we are able to reconstruct all of the random bits used to generate the permutation of X in the epoch. This will result in a linear reduction in the number of bits required for the encoding.
In our analysis, we show a generalized version of the scenario above. We show that high accuracy discrepancy implies that entropy compression occurs. For our second result, we consider a modified SGD algorithm that instead of performing a single GD step per batch, first perturbs the batch with a limited amount of randomness and then performs GD until a desired accuracy on the batch is reached. We assume towards contradiction that GD can always reach the desired accuracy on the batch in subexponential time. This forces the accuracy discrepancy to be high, which guarantees that we always find a model with good accuracy. Applying this reasoning to models of sublinear size and data with random labels we arrive at a contradiction, as such models cannot achieve good accuracy on the data. This implies that when we limit the amount of randomness GD can use for perturbations, there must exist instances where GD requires exponential running time to achieve good accuracy.
Related work There has been a long line of research proving convergence bounds for SGD under various simplifying assumptions such as: linear networks (Arora et al., 2019; 2018), shallow networks (Safran and Shamir, 2018; Du and Lee, 2018; Oymak and Soltanolkotabi, 2019), etc. However, the most general results are the ones dealing with deep, overparameterized networks (Du et al., 2019; Allen-Zhu et al., 2019; Zou et al., 2018; Zou and Gu, 2019). All of these works make use of NTK (Neural Tangent Kernel)(Jacot et al., 2018) and show global convergence guarantees for SGD when the hidden layers have width at least poly(n,L) where n is the size of the dataset and L is the depth of the network. We note that the exponents of the polynomials are quite large.
A recent line of work by Zhang et al. (2022) notes that in many real world scenarios models do not converge to stationary points. They instead take a different approach which, similar to us, studies the dynamics of neural networks. They show that under certain assumptions (e.g., considering a fully connected architecture with sub-differentiable and coordinate-wise Lipschitz activations and weights laying on a compact set) the change in training loss gradually converges to 0, even if the full gradient norms do not vanish.
In (Du et al., 2017) it was shown that GD can take exponential time to escape saddle points, even under random initialization. They provide a highly engineered instance, while our results hold for many model classes of interest. Jin et al. (2017) show that adding perturbations during the executions of GD guarantees that it escapes saddle points. This is done by occasionally perturbing the parameters within a ball of radius r, where r depends on the properties of the function to be optimized. Therefore, a single perturbation must require an amount of randomness linear in the number of parameters.
2 PRELIMINARIES
We consider the following optimization problem. We are given an input (dataset) of size n. Let us denote X = {xi}ni=1 (Our inputs contain both data and labels, we do not need to distinguish them for this work). We also associate every x ∈ X with a unique id of dlog ne bits. We often consider batches of the input B ⊂ X . The size of the batch is denoted by b (all batches have the same size). We have some model whose parameters are denoted by W ∈ Rd, where d is the model dimension. We aim to optimize a goal function of the following type: f(W ) = 1n ∑ x∈X fx(W ), where the functions fx : Rd → R are completely determined by x ∈ X . We also define for every set A ⊆ X: fA(W ) = 1 |A| ∑ x∈A fx(W ). Note that fX = f .
We denote by acc(W,A) : Rd × 2X → [0, 1] the accuracy of model W on the set A ⊆ X (where we use W to classify elements from X). Note that for x ∈ X it holds that acc(W,x) is a binary value indicating whether x is classified correctly or not. We require that every fx is differentiable and L-smooth: ∀W1,W2 ∈ Rd, ‖∇fx(W1)−∇fx(W2)‖ ≤ L‖W1 −W2‖. This implies that every fA is also differentiable and L-smooth. To see this consider the following:
‖∇fA(W1)−∇fA(W2)‖ = ‖ 1 |A| ∑ x∈A ∇fx(W1)− 1 |A| ∑ x∈A ∇fx(W2)‖
= 1 |A| ‖ ∑ x∈A ∇fx(W1)−∇fx(W2)‖ ≤ 1 |A| ∑ x∈A ‖∇fx(W1)−∇fx(W2)‖ ≤ L‖W1 −W2‖
We state another useful property of fA:
Lemma 2.1. Let W1,W2 ∈ Rd and α < 1/L. For any A ⊆ X , if it holds that W1 − α∇fA(W1) = W2 − α∇fA(W2) then W1 = W2.
Proof. Rearranging the terms we get thatW1−W2 = α∇fA(W1)−α∇fA(W2). Now let us consider the norm of both sides: ‖W1−W2‖ = ‖α∇fA(W1)−α∇fA(W2)‖ ≤ α·L‖W1−W2‖ < ‖W1−W2‖ Unless W1 = W2, the final strict inequality holds which leads to a contradiction.
The above means that for a sufficiently small gradient step, the gradient descent process is reversible. That is, we can always recover the previous model parameters given the current ones, assuming that the batch is fixed. We use the notion of reversibility throughout this paper. However, in practice we only have finite precision, thus instead of R we work with the finite set F ⊂ R. Furthermore, due to numerical stability issues, we do not have access to exact gradients, but only to approximate values ∇̂fA. For the rest of this paper, we assume these values are L-smooth on all elements in Fd. That is,
∀W1,W2 ∈ Fd, A ⊆ X, ‖∇̂fA(W1)− ∇̂fA(W2)‖ ≤ L‖W1 −W2‖
This immediately implies that Lemma 2.1 holds even when precision is limited. Let us state the following theorem:
Theorem 2.2. Let W1,W2, ...,Wk ∈ Fd ⊂ Rd, A1, A2, ..., Ak ⊆ X and α < 1/L. If it holds that Wi = Wi−1 − α∇̂fAi−1(Wi−1), then given A1, A2, ..., Ak−1 and Wk we can retrieve W1.
Proof. Given Wk we iterate over all W ∈ Fd until we find W such that Wk = W − α∇̂fAi−1(W ). Using Lemma 2.1, there is only a single element such that this equality holds, and thus W = Wk−1. We repeat this process until we retrieve W1.
SGD We analyze the classic SGD algorithm presented in Algorithm 1. One difference to note in our algorithm, compared to the standard implementation, is the termination condition when the accuracy on the dataset is 100%. In practice the termination condition is not used, however, we only use it to prove that at some point in time the accuracy of the model is 100%.
Algorithm 1: SGD 1 i← 1 // epoch counter 2 W1,1 is an initial model 3 while True do 4 Take a random permutation of X , divided into batches {Bi,j}n/bj=1 5 for j from 1 to n/b do 6 if acc(Wi,j , X) = 1 then Return Wi,j 7 Wi,j+1 ←Wi,j − α∇fBi,j (Wi,j) 8 i← i+ 1, Wi,1 ←Wi−1,n/b+1
Kolmogorov complexity The Kolmogorov complexity of a string x ∈ {0, 1}∗, denoted by K(x), is defined as the size of the smallest prefix Turing machine which outputs this string. We note that this definition depends on which encoding of Turing machines we use. However, one can show that this will only change the Kolmogorov complexity by a constant factor (Li and Vitányi, 2019).
We also use the notion of conditional Kolmogorov complexity, denoted by K(x | y). This is the length of the shortest prefix Turing machine which gets y as an auxiliary input and prints x. Note that the length of y does not count towards the size of the machine which outputs x. So it can be the case that |x| |y| but it holds that K(x | y) < K(x). We can also consider the Kolmogorov complexity of functions. Let g : {0, 1}∗ → {0, 1}∗ then K(g) is the size of the smallest Turing machine which computes the function g.
The following properties of Kolmogorov complexity will be of use. Let x, y, z be three strings:
• Extra information: K(x | y, z) ≤ K(x | z) +O(1) ≤ K(x, y | z) +O(1) • Subadditivity: K(xy | z) ≤ K(x | z, y)+K(y | z)+O(1) ≤ K(x | z)+K(y | z)+O(1)
Random strings have the following useful property (Li and Vitányi, 2019): Theorem 2.3. For an n bit string x chosen uniformly at random, and some string y independent of x (i.e., y is fixed before x is chosen) and any c ∈ N it holds that Pr[K(x | y) ≥ n− c] ≥ 1− 1/2c.
Entropy and KL-divergence Our proofs make extensive use of binary entropy and KL-divergence. In what follows we define these concepts and provide some useful properties.
Entropy: For p ∈ [0, 1] we denote by h(p) = −p log p− (1− p) log(1− p) the entropy of p. Note that h(0) = h(1) = 0.
KL-divergence: For p, q ∈ (0, 1) let DKL(p ‖ q) = p log pq + (1 − p) log 1−p 1−q be the Kullback Leibler divergence (KL-divergence) between two Bernoulli distributions with parameters p, q. We also extend the above for the case where q, p ∈ {0, 1} as follows: DKL(1 ‖ q) = DKL(0 ‖ q) = 0, DKL(p ‖ 1) = log(1/p), DKL(p ‖ 0) = log(1/(1 − p)). This is just notation that agrees with Lemma 2.4. We also state the following result of Pinsker’s inequality applied to Bernoulli random variables: DKL(p ‖ q) ≥ 2(p− q)2. Representing sets Let us state some useful bounds on the Kolmogorov complexity of sets. A more detailed explanation regarding the Kolmogorov complexity of sets and permutations together with the proof to the lemma below appears in Appendix A. Lemma 2.4. Let A ⊆ B, |B| = m, |A| = γm, and let g : B → {0, 1}. For any set Y ⊆ B let Y1 = {x | x ∈ Y, g(x) = 1} , Y0 = Y \ Y1 and κY = |Y1||Y | . It holds that
K(A | B, g) ≤ mγ(log(e/γ)−DKL(κB ‖ κA)) +O(logm)
3 ACCURACY DISCREPANCY
First, let us define some useful notation (Wi,j , Bi,j are formally defined in Algorithm 1):
• λi,j = acc(Wi,j , X). This is the accuracy of the model in epoch i on the entire dataset X , before performing the GD step on batch j.
• ϕi,j = acc(Wi,j , Bi,j−1). This is the accuracy of the model on the (j − 1)-th batch in the i-th epoch after performing the GD step on the batch.
• Xi,j = ⋃j k=1Bi,k (note that ∀i,Xi,0 = ∅, Xi,n/b = X). This is the set of elements in the first j
batches of epoch i. Let us also denote nj = |Xi,j | = jb (Note that ∀j, i1, i2, |Xi1,j | = |Xi2,j |, thus i need not appear in the subscript).
• λ′i,j = acc(Wi,j , Xi,j−1), λ ′′ i,j = acc(Wi,j , X \Xi,j−1), where λ′i,j is the accuracy of the model
on the set of all previously seen batch elements, after performing the GD step on the (j − 1)-th batch and λ′′i,j is the accuracy of the same model, on all remaining elements (j-th batch onward). To avoid computing the accuracy on empty sets, λ′i,j is defined for j ∈ [2, n/b + 1] and λ′′i,j is defined for j ∈ [1, n/b]. • ρi,j = DKL(λ′i,j ‖ ϕi,j) is the accuracy discrepancy for the j-th batch in iteration i and ρi =∑n/b+1 j=2 ρi,j is the accuracy discrepancy at iteration i.
In our analysis, we consider t epochs of the SGD algorithm. Our goal for this section is to derive a connection between ∑t i=1 ρi and t. Bounding t: Our goal is to use the entropy compression argument to show that if ∑t i=1 ρi is sufficiently large we can bound t. Let us start by formally defining the random bits which the algorithm uses. Let ri be the string of random bits representing the random permutation of X at epoch i. As we consider t epochs, let r = r1r2 . . . rt.
Note that the number of bits required to represent an arbitrary permutation of [n] is given by: dlog(n!)e = n log n− n log e+O(log n) = n log(n/e) +O(log n),
where in the above we used Stirling’s approximation. Thus, it holds that |r| = t(n log(n/e) + O(log n)) and according to Theorem 2.3, with probability at least 1 − 1/n2 it holds that K(r) ≥ tn log(n/e)−O(log n). In the following lemma we show how to use the model at every iteration to efficiently reconstruct the batch at that iteration, where the efficiency of reconstruction is expressed via ρi. Lemma 3.1. It holds w.h.p that ∀i ∈ [t] that: K(ri |Wi+1,1, X) ≤ n log ne − bρi + n b ·O(log n)
Proof. Recall that Bi,j is the j-th batch in the i-th epoch, and let Pi,j be a permutation of Bi,j such that the order of the elements in Bi,j under Pi,j is the same as under ri. Note that given X , if we know the partition into batches and all permutations, we can reconstruct ri. According to Theorem 2.2, given Wi,j and Bi,j−1 we can compute Wi,j−1. Let us denote by Y the encoding of this procedure. To implement Y we need to iterate over all possible vectors in Fd and over batch elements to compute the gradients. To express this program we require auxiliary variables of size at most O(log min {d, b}) = O(log n). Thus it holds that K(Y ) = O(log n). Let us abbreviate Bi,1, Bi,2, ..., Bi,j as (Bi,k) j k=1. We write the following. K(ri | X,Wi+1,1) ≤ K(ri, Y | X,Wi+1,1) +O(1) ≤ K(ri | X,Wi+1,1, Y ) +K(Y | X,Wi+1,1) +O(1)
≤ O(log n) +K((Bi,k, Pi,k)n/bk=1 | X,Wi+1,1, Y )
≤ O(log n) +K((Bi,k)n/bk=1 | X,Wi+1,1, Y ) +K((Pi,k) n/b k=1 | X,Wi+1,1, Y )
≤ O(log n) +K((Bi,k)n/bk=1 | X,Wi+1,1, Y ) + n/b∑ j=1 K(Pi,j)
Let us bound K((Bi,k) n/b k=1 | X,Wi+1,1, Y ) by repeatedly using the subadditivity and extra information properties of Kolmogorov complexity.
K((Bi,k) n/b k=1 | X,Y,Wi+1,1) ≤ K(Bi,n/b | X,Wi+1,1) +K((Bi,k) n/b−1 k=1 | X,Y,Wi+1,1, Bi,n/b) +O(1)
≤ K(Bi,n/b | X,Wi+1,1) +K((Bi,k) n/b−1 k=1 | X,Y,Wi,n/b, Bi,n/b) +O(1) ≤ K(Bi,n/b | X,Wi+1,1) +K(Bi,n/b−1 | X,Wi,n/b, Bi,n/b)
+K((Bi,k) n/b−2 k=1 | X,Y,Wi,n/b−1, Bi,n/b, Bi,n/b−1) +O(1)
≤ ... ≤ O(n b ) + n/b∑ j=1 K(Bi,j | X,Wi,j+1, (Bi,k)n/bk=j+1) ≤ O( n b ) + n/b∑ j=1 K(Bi,j | Xi,j ,Wi,j+1)
where in the transitions we used the fact that given Wi,j , Bi,j−1 and Y we can retrieve Wi,j−1. That is, we can always bound K(... | Y,Wi,j , Bi,j−1, ...) by K(... | Y,Wi,j−1, Bi,j−1, ...) +O(1). To encode the order Pi,j inside each batch, b log(b/e) +O(log b) bits are sufficient. Finally we get that: K(ri | X,Wi+1,1) ≤ O(nb ) + ∑n/b j=1[K(Bi,j | Xi,j ,Wi,j+1) + b log(b/e) +O(log b)].
Let us now bound K(B_{i,j−1} | X_{i,j−1}, W_{i,j}). Knowing X_{i,j−1} we know that B_{i,j−1} ⊆ X_{i,j−1}. Thus we need to use W_{i,j} to compress B_{i,j−1}. Applying Lemma 2.4 with parameters A = B_{i,j−1}, B = X_{i,j−1}, γ = b/n_{j−1}, κ_A = ϕ_{i,j}, κ_B = λ′_{i,j} and g(x) = acc(W_{i,j}, x), we get the following:

K(B_{i,j−1} | X_{i,j−1}, W_{i,j}) ≤ b(log(e · n_{j−1}/b) − ρ_{i,j}) + O(log n_{j−1})
Adding b log(b/e) + O(log b) to the above, we get the following bound on every element in the sum:

b(log(e · n_{j−1}/b) − ρ_{i,j}) + b log(b/e) + O(log b) + O(log n_{j−1}) ≤ b log n_{j−1} − bρ_{i,j} + O(log n_{j−1})

Note that the most important term in the sum is −bρ_{i,j}. That is, the more the accuracy of W_{i,j} on the batch, B_{i,j−1}, differs from the accuracy of W_{i,j} on the set of elements containing the batch, X_{i,j−1}, the more efficiently we can represent the batch. Let us now bound the sum Σ_{j=2}^{n/b+1} [b log n_{j−1} − bρ_{i,j} + O(log n_{j−1})]. Let us first bound the sum over b log n_{j−1}:

Σ_{j=2}^{n/b+1} b log n_{j−1} = Σ_{j=1}^{n/b} b log(jb) = Σ_{j=1}^{n/b} b(log b + log j)
= n log b + b log((n/b)!) = n log b + n log(n/(b·e)) + O(log n) = n log(n/e) + O(log n)
Finally, we can write that:
K(r_i | X, W_{i+1,1}) ≤ O(n/b) + Σ_{j=2}^{n/b+1} [b log n_{j−1} − bρ_{i,j} + O(log n)] ≤ n log(n/e) − bρ_i + (n/b)·O(log n)
Using the above we know that when the value ρ_i is sufficiently high, the random permutation of the epoch can be compressed. We use the fact that random strings are incompressible to bound (1/t)Σ_{i=1}^t ρ_i. Theorem 3.2. If the algorithm does not terminate by the t-th iteration, then it holds w.h.p. that ∀t, (1/t)Σ_{i=1}^t ρ_i ≤ O(n log n / b²).
Proof. Using arguments similar to Lemma 3.1, we can show that K(r, W_{1,1} | X) ≤ K(W_{t+1,1}) + O(t) + Σ_{k=1}^t K(r_k | X, W_{k+1,1}) (formally proved in Lemma A.3). Combining this with Lemma 3.1, we get that K(r, W_{1,1} | X) ≤ K(W_{t+1,1}) + t[n log(n/e) + (n·O(log n))/b + O(log n)] − Σ_{i=1}^t bρ_i. Our proof implies that we can reconstruct not only r, but also W_{1,1}, using X, W_{t+1,1}. Due to the incompressibility of random strings, we get that w.h.p. K(r, W_{1,1} | X) ≥ d + tn log(n/e) − O(log n). Combining the lower and upper bound for K(r, W_{1,1} | X) we get the following inequality:
d + tn log(n/e) − O(log n) ≤ d + t[n log(n/e) + (n·O(log n))/b + O(log n)] − Σ_{i=1}^t bρ_i      (1)

⟹ (1/t)Σ_{i=1}^t ρ_i ≤ (n·O(log n))/b² + O(log n)/b + O(log n)/(bt) = O(n log n / b²),

where the first two terms on the right-hand side constitute β(n, b).
Let β(n, b) be the exact value of the asymptotic expression in Inequality 1. Theorem 3.2 says that as long as SGD does not terminate, the average accuracy discrepancy cannot be too high. Using the contrapositive we get the following useful corollary (the proof is deferred to Appendix A.3).
Corollary 3.3. If ∀k, (1/k)Σ_{i=1}^k ρ_i > β(n, b) + γ for γ = Ω(b^{-1} log n), then w.h.p. SGD terminates within O(1) epochs.
The case for weak models Using the above we can also derive some interesting negative results when the model is not expressive enough to get perfect accuracy on the data: it must then be the case that the average accuracy discrepancy tends below β(n, b) over time. We verify this experimentally on the MNIST dataset (Appendix B), showing that the average accuracy discrepancy indeed drops over time when the model is weak compared to the dataset. We also confirm that the dependence of the threshold on b is indeed inversely quadratic.
4 THE ROLE OF RANDOMNESS IN GD INITIALIZATION
Our goal for this section is to show that when the amount of randomness in the perturbation is too small, for any model architecture which is differentiable and L-smooth there are inputs for which Algorithm 2 requires exponential time to terminate, even for extremely overparameterized models.
Perturbation families Let us consider a family of 2^ℓ functions indexed by length-ℓ real-valued vectors, Ψ_ℓ = {ψ_z}_{z∈R^ℓ}. Recall that throughout this paper we assume finite precision, thus every z can be represented using O(ℓ) bits. We say that Ψ_ℓ is a reversible perturbation family if it holds that ∀z ∈ R^ℓ, ψ_z is one-to-one. We often use the notation Ψ_ℓ(W), which means: pick z ∈ R^ℓ uniformly at random, and apply ψ_z(W). We often refer to Ψ_ℓ as simply a perturbation.
We note that the above captures a wide range of natural perturbations. For example, ψ_z(W) = W + W_z where W_z[i] = z[i mod ℓ]. Clearly ψ_z(W) is reversible.
Gradient descent The GD algorithm we analyze is formally given in Algorithm 2.
Algorithm 2: GD(W, Y, δ)
Input: initial model W, dataset Y, desired accuracy δ
1  i = 1, T = o(2^m) + poly(d)
2  W = Ψ_ℓ(W)
3  while acc(W, Y) < δ and i < T do
4      W ← W − α∇f_Y(W)
5      i ← i + 1
6  Return W
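As a minimal PyTorch-style sketch of Algorithm 2 (not the paper's code): the perturbation family is the ψ_z(W) = W + W_z example given above, and acc and grad_f are placeholder callables standing in for the model's accuracy and the gradient of f_Y.

```python
import torch

def perturb(W, ell):
    # psi_z(W) = W + W_z with W_z[i] = z[i mod ell]; z is drawn at random
    # (Gaussian here as a stand-in for a uniform draw over finite-precision reals).
    z = torch.randn(ell)
    Wz = z[torch.arange(W.numel()) % ell].view_as(W)
    return W + Wz

def gd(W, Y, delta, acc, grad_f, alpha=0.1, ell=8, T=10_000):
    """Algorithm 2: perturb once, then run plain GD until accuracy delta or the step cap T."""
    W = perturb(W, ell)
    i = 1
    while acc(W, Y) < delta and i < T:
        W = W - alpha * grad_f(W, Y)
        i += 1
    return W
```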
Let us denote by m the number of elements in Y. We make the following two assumptions for the rest of this section: (1) ℓ = o(m). (2) There exists T = o(2^m) + poly(d) and a perturbation family Ψ_ℓ such that for every input W, Y, within T iterations GD terminates and returns a solution that has at least δ accuracy on Y with constant probability. We show that the above two assumptions cannot hold together. That is, if the amount of randomness is sublinear in m, there must be instances with exponential running time, even when d ≫ m. To show the above, we define a variant of SGD, which uses GD as a sub-procedure (Algorithm 3). Assume that our data set is a binary classification task (it is easy to generalize our results to any number of classes), and that elements in X are assigned random labels. Furthermore, let us assume that d = o(n), e.g., d = n^{0.99}. It holds that w.h.p. we cannot train a model with d parameters that achieves any accuracy better than 1/2 + o(1) on X (Lemma A.4). Let us take ε to be a small constant. We show that if assumptions 1 and 2 hold, then Algorithm 3 must terminate and return a model with 1/2 + Θ(1) accuracy on X, leading to a contradiction. Our analysis follows the same lines as the previous section, and uses the same notation.
Reversibility First, we must show that Algorithm 3 is still reversible. Note that we can take the same approach as before, where the only difference is that in order to get W_{i,j} from W_{i,j+1} we must now get all the intermediate values from the call to GD. As the GD steps are applied to the same batch, this amounts to applying Lemma 2.1 several times instead of once per iteration. More specifically, we must encode for every batch a number T_{i,j} = o(2^b) + poly(d) = o(2^b) + poly(n) (recall that d = o(n)) and apply Lemma 2.1 T_{i,j} times.
This results in ψ_z(W_{i,j}). If we know z, Ψ_ℓ then we can retrieve ψ_z and efficiently retrieve W_{i,j} using only O(log d) = O(log n) additional bits (by iterating over all values in F^d). Therefore, in every
Algorithm 3: SGD'
1  i ← 1  // epoch counter
2  W_{1,1} is an initial model
3  while True do
4      Take a random permutation of X, divided into batches {B_{i,j}}_{j=1}^{n/b}
5      for j from 1 to n/b do
6          if acc(W_{i,j}, X) ≥ 1/(2(1−ε)) then Return W_{i,j}
7          W_{i,j+1} ← GD(W_{i,j}, B_{i,j}, 1/(2(1−2ε)))
8      i ← i + 1, W_{i,1} ← W_{i−1,n/b+1}
iteration we have the following additional terms: log T + O(log n) + ℓ = o(b) + O(log n). Summing over n/b iterations we get o(n) per epoch. We state the following lemma, analogous to Lemma 3.1. Lemma 4.1. For Algorithm 3 it holds w.h.p. that ∀i ∈ [t]: K(r_i | W_{i+1,1}, X, Ψ_ℓ) ≤ n log(n/e) − bρ_i + β(n, b) + o(n).
We show that under our assumptions, Algorithm 3 must terminate, leading to a contradiction. Lemma 4.2. Algorithm 3 with b = Ω(log n) terminates within O(T ) iterations w.h.p.
Proof. Our goal is to lower bound ρ_i = Σ_{j=2}^{n/b+1} D_KL(λ′_{i,j} ‖ ϕ_{i,j}). Let us first upper bound λ′_{i,j}. Using the fact that λ′_{i,j} ≤ nλ_{i,j}/((j−1)b) (Lemma A.5) combined with the fact that λ_{i,j} ≤ 1/(2(1−ε)) as long as the algorithm does not terminate, we get that ∀j ∈ [2, n/b+1] it holds that λ′_{i,j} ≤ n/(2(1−ε)(j−1)b). Using the above we conclude that as long as we do not terminate it must hold that λ′_{i,j} ≤ 1/(2(1−ε)²) whenever j ∈ I = [(1−ε)n/b + 1, n/b + 1]. That is, λ′_{i,j} must be close to λ_{i,j} towards the end of the epoch, and therefore must be sufficiently small. Note that |I| ≥ εn/b. We know that as long as the algorithm does not terminate it holds that ϕ_{i,j} > 1/(2(1−2ε)) with some constant probability. Furthermore, this probability is taken over the randomness used in the call to GD (the randomness of the perturbation). This fact allows us to use Hoeffding-type bounds for the ϕ_{i,j} variables. If ϕ_{i,j} > 1/(2(1−2ε)) we say that it is good. Therefore, in expectation a constant fraction of the ϕ_{i,j}, j ∈ I, are good. Applying a Hoeffding-type bound we get that w.h.p. a constant fraction of the ϕ_{i,j}, j ∈ I, are good. Denote these good indices by I_g ⊆ I. We are now ready to bound ρ_i.
ρ_i = Σ_{j=2}^{n/b+1} D_KL(λ′_{i,j} ‖ ϕ_{i,j}) ≥ Σ_{j∈I_g} D_KL(λ′_{i,j} ‖ ϕ_{i,j}) ≥ Σ_{j∈I_g} D_KL(1/(2(1−ε)²) ‖ 1/(2(1−2ε)))
≥ Θ(εn/b) · (1/(2(1−2ε)) − 1/(2(1−ε)²))² = Θ(n/b) · ε⁵ = Θ(n/b)
where in the transitions we used the fact that KL-divergence is non-negative, and Pinsker's inequality (ε is a constant). Finally, requiring that b = Ω(log n) we get that bρ_i − β(n, b) − o(n) = Θ(n) − Θ(n log n / log² n) − o(n) = Θ(n). Following the same calculation as in Corollary 3.3, this guarantees termination within O(log n / n) epochs, or O(T · (n/b) · (log n / n)) = O(T) iterations (gradient descent steps).
The above leads to a contradiction. It is critical to note that the above does not hold if T = 2^m = 2^b or if ℓ = Θ(n), as both would imply that the o(n) term becomes Θ(n). We state our main theorem: Theorem 4.3. For any differentiable and L-smooth model class with d parameters and a perturbation class Ψ_ℓ such that ℓ = o(m), there exists an input data set Y of size m such that GD requires Ω(2^m) iterations to achieve δ accuracy on Y, even if δ = 1/2 + Θ(1) and d ≫ m.
A OMITTED PROOFS AND EXPLANATIONS
A.1 REPRESENTING SETS AND PERMUTATIONS
Throughout this paper, we often consider the value K(A) where A is a set. Here the program computing A need only output the elements of A (in any order). When considering K(A | B) such that A ⊆ B, it holds that K(A | B) ≤ ⌈log C(|B|, |A|)⌉ + O(log |B|). To see why, consider Algorithm 4. In the algorithm, i_A is the index of A when considering some ordering of all subsets of B of size |A|. Thus ⌈log C(|B|, |A|)⌉ bits are sufficient to represent i_A. The remaining variables i, m_A, m_B and any
Algorithm 4: Compute A given B as input
1  m_A ← |A|, m_B ← |B|, i ← 0, i_A is a target index
2  for every subset C ⊆ B s.t. |C| = m_A (in a predetermined order) do
3      if i = i_A then Print C
4      i ← i + 1
additional variables required to construct the set C are all of size at most O(log |B|) and there is at most a constant number of them.
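A runnable Python rendering of Algorithm 4, assuming the predetermined ordering of subsets is the lexicographic order produced by itertools.combinations; the index i_A then needs at most ⌈log C(|B|, |A|)⌉ bits, matching the bound above.

```python
from itertools import combinations

def decode_subset(B, m_A, i_A):
    """Recover the subset A of B with |A| = m_A from its index i_A in a fixed enumeration."""
    B = sorted(B)                      # fix a predetermined order of B's elements
    for i, C in enumerate(combinations(B, m_A)):
        if i == i_A:
            return set(C)
    raise ValueError("index out of range")

# Example: the subset with index 5 among all 2-element subsets of {0, ..., 5}
print(decode_subset(range(6), 2, 5))   # {1, 2}
```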
During our analysis, we often bound the Kolmogorov complexity of tuples of objects. For example, K(A, P | B) where A ⊆ B is a set and P : A → [|A|] is a permutation of A (note that A, P together form an ordered tuple of the elements of A). Instead of explicitly presenting a program such as Algorithm 4, we say that if K(A | B) ≤ c₁ and c₂ bits are sufficient to represent P, then K(A, P | B) ≤ c₁ + c₂ + O(1). This just means that we directly add a variable encoding P to the program that computes A given B and use it in the code. For example, we can add a permutation to Algorithm 4 and output an ordered tuple of elements rather than a set. Note that when representing a permutation of A, |A| = k, instead of using functions we can just talk about values representable in ⌈log k!⌉ bits. That is, we can decide on some predetermined ordering of all permutations of k elements, and represent a permutation as its number in this ordering.
A.2 OMITTED PROOFS FOR SECTION 2
Lemma A.1. For p ∈ [0, 1] it holds that h(p) ≤ p log(e/p).
Proof. Let us write our lemma as:
h(p) = −p log p − (1 − p) log(1 − p) ≤ p log(e/p)

Rearranging we get:

−(1 − p) log(1 − p) ≤ p log p + p log(1/p) + p log e ⟹ −(1 − p) log(1 − p) ≤ p log e
⟹ −ln(1 − p) ≤ p/(1 − p)

Note that −ln(1 − p) = ∫₀^p 1/(1 − x) dx ≤ p · 1/(1 − p), where in the final transition we use the fact that 1/(1 − x) is monotonically increasing on [0, 1]. This completes the proof.
Lemma A.2. For p, γ, q ∈ [0, 1] where pγ ≤ q and (1 − p)γ ≤ (1 − q), it holds that

q·h(pγ/q) + (1 − q)·h((1 − p)γ/(1 − q)) ≤ h(γ) − γ·D_KL(p ‖ q)
Proof. Let us expand the left-hand side using the definition of entropy:

q·h(pγ/q) + (1 − q)·h((1 − p)γ/(1 − q))
= −q[(pγ/q) log(pγ/q) + (1 − pγ/q) log(1 − pγ/q)]
  − (1 − q)[((1 − p)γ/(1 − q)) log((1 − p)γ/(1 − q)) + (1 − (1 − p)γ/(1 − q)) log(1 − (1 − p)γ/(1 − q))]
= −[pγ log(pγ/q) + (q − pγ) log((q − pγ)/q)]
  − [(1 − p)γ log((1 − p)γ/(1 − q)) + ((1 − q) − (1 − p)γ) log(((1 − q) − (1 − p)γ)/(1 − q))]
= −γ log γ − γ·D_KL(p ‖ q)
  − (q − pγ) log((q − pγ)/q) − ((1 − q) − (1 − p)γ) log(((1 − q) − (1 − p)γ)/(1 − q))

where in the last equality we simply sum the first terms on both lines. To complete the proof we use the log-sum inequality for the last expression. The log-sum inequality states that if {a_k}_{k=1}^m, {b_k}_{k=1}^m are non-negative numbers and a = Σ_{k=1}^m a_k, b = Σ_{k=1}^m b_k, then Σ_{k=1}^m a_k log(a_k/b_k) ≥ a log(a/b). We apply the log-sum inequality with m = 2, a₁ = q − pγ, a₂ = (1 − q) − (1 − p)γ, a = 1 − γ and b₁ = q, b₂ = 1 − q, b = 1, getting that:

(q − pγ) log((q − pγ)/q) + ((1 − q) − (1 − p)γ) log(((1 − q) − (1 − p)γ)/(1 − q)) ≥ (1 − γ) log(1 − γ)

Putting everything together we get that

−γ log γ − γ·D_KL(p ‖ q) − (q − pγ) log((q − pγ)/q) − ((1 − q) − (1 − p)γ) log(((1 − q) − (1 − p)γ)/(1 − q))
≤ −γ log γ − (1 − γ) log(1 − γ) − γ·D_KL(p ‖ q) = h(γ) − γ·D_KL(p ‖ q)
Lemma 2.4. Let A ⊆ B, |B| = m, |A| = γm, and let g : B → {0, 1}. For any set Y ⊆ B let Y₁ = {x | x ∈ Y, g(x) = 1}, Y₀ = Y \ Y₁ and κ_Y = |Y₁|/|Y|. It holds that

K(A | B, g) ≤ mγ(log(e/γ) − D_KL(κ_B ‖ κ_A)) + O(log m)
Proof. The algorithm is very similar to Algorithm 4; the main difference is that we must first compute B₁, B₀ from B using g, and select A₁, A₀ from B₁, B₀, respectively, using two indices i_{A₁}, i_{A₀}. Finally we print A = A₁ ∪ A₀. We can now bound the number of bits required to represent i_{A₁}, i_{A₀}. Note that |B₁| = κ_B m and |B₀| = (1 − κ_B)m. Note that for A₁ we pick γκ_A m elements from κ_B m elements and for A₀ we pick γ(1 − κ_A)m elements from (1 − κ_B)m elements. The number of bits required to represent this selection is:

⌈log C(κ_B m, γκ_A m)⌉ + ⌈log C((1 − κ_B)m, γ(1 − κ_A)m)⌉ ≤ κ_B m·h(γκ_A/κ_B) + (1 − κ_B)m·h(γ(1 − κ_A)/(1 − κ_B))
≤ m(h(γ) − γ·D_KL(κ_B ‖ κ_A)) ≤ mγ(log(e/γ) − D_KL(κ_B ‖ κ_A))

where in the first inequality we used the fact that ∀ 0 ≤ k ≤ n, log C(n, k) ≤ n·h(k/n), Lemma A.2 in the second transition, and Lemma A.1 in the third transition. Note that when κ_A ∈ {0, 1} we only have one term of the initial sum. For example, for κ_A = 1 we get:

⌈log C(κ_B m, γκ_A m)⌉ = ⌈log C(κ_B m, γm)⌉ ≤ κ_B m·h(γ/κ_B) ≤ mγ log(eκ_B/γ) = mγ(log(e/γ) − log(1/κ_B))

A similar computation yields mγ(log(e/γ) − log(1/(1 − κ_B))) for κ_A = 0. Finally, the additional O(log m) factor is due to various counters and variables, similarly to Algorithm 4.
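A quick numerical sanity check of the bound (not part of the paper): we compare the exact number of bits needed for the two indices, log C(κ_B m, γκ_A m) + log C((1−κ_B)m, γ(1−κ_A)m), against mγ(log(e/γ) − D_KL(κ_B ‖ κ_A)) for one arbitrary parameter choice satisfying the lemma's constraints.

```python
import math

def log2_binom(n, k):
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)) / math.log(2)

def kl(p, q):
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

m, gamma, kA, kB = 10_000, 0.1, 0.8, 0.6   # example values with gamma*kA*m <= kB*m, etc.
exact = log2_binom(kB * m, gamma * kA * m) + log2_binom((1 - kB) * m, gamma * (1 - kA) * m)
bound = m * gamma * (math.log2(math.e / gamma) - kl(kB, kA))
print(exact, bound)   # exact is below the bound, up to the O(log m) slack
```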
A.3 OMITTED PROOFS FOR SECTION 3
Lemma A.3. It holds that K(r, W_{1,1} | X) ≤ K(W_{t+1,1}) + O(t) + Σ_{k=1}^t K(r_k | X, W_{k+1,1}).
Proof. Similarly to the definition of Y in Lemma 3.1, let Y′ be the program which receives X, r_i, W_{i+1,1} as input and repeatedly applies Theorem 2.2 to retrieve W_{i,1}. As Y′ just needs to reconstruct all batches from X, r_i and call Y n/b times, it holds that K(Y′) = O(log n). Using the subadditivity and extra-information properties of K(·), together with the fact that W_{1,1} can be reconstructed given X, W_{t+1,1}, Y′, we write the following:
K(r | X) ≤ K(r, W_{1,1}, Y′, W_{t+1,1} | X) + O(1) ≤ K(W_{1,1}, W_{t+1,1}, Y′ | X) + K(r | X, Y′, W_{t+1,1}) + O(1) ≤ K(W_{t+1,1} | X) + K(r | X, Y′, W_{t+1,1}) + O(log n)
First, we note that ∀i ∈ [t − 1], K(r_i | X, Y′, W_{i+2,1}, r_{i+1}) ≤ K(r_i | X, Y′, W_{i+1,1}) + O(1), where in the last inequality we simply execute Y′ on X, W_{i+2,1}, r_{i+1} to get W_{i+1,1}. Let us write:
K(r_1 r_2 . . . r_t | X, Y′, W_{t+1,1})
≤ K(r_t | X, Y′, W_{t+1,1}) + K(r_1 r_2 . . . r_{t−1} | X, Y′, W_{t+1,1}, r_t) + O(1)
≤ K(r_t | X, W_{t+1,1}) + K(r_1 r_2 . . . r_{t−1} | X, Y′, W_{t,1}) + O(1)
≤ K(r_t | X, W_{t+1,1}) + K(r_{t−1} | X, W_{t,1}) + K(r_1 r_2 . . . r_{t−2} | X, Y′, W_{t−1,1}) + O(1)
≤ · · · ≤ O(t) + Σ_{k=1}^t K(r_k | X, W_{k+1,1})
Combining everything together we get that:
K(r | X) ≤ K(W_{t+1,1}) + O(t) + Σ_{k=1}^t K(r_k | X, W_{k+1,1})
Corollary 3.3. If ∀k, (1/k)Σ_{i=1}^k ρ_i > β(n, b) + γ for γ = Ω(b^{-1} log n), then w.h.p. SGD terminates within O(1) epochs.
Proof. Let us simplify Inequality 1.
d + tn log(n/e) − O(log n) ≤ d + t[n log(n/e) + (n·O(log n))/b + O(log n)] − Σ_{i=1}^t bρ_i
⟹ −O(log n) ≤ t[(n·O(log n))/b + O(log n)] − Σ_{i=1}^t bρ_i
⟹ (Σ_{i=1}^t ρ_i) − tβ(n, b) ≤ O(log n)/b

Our condition implies that Σ_{i=1}^t ρ_i > t(β(n, b) + γ). This allows us to rewrite the above inequality as:

tγ ≤ O(log n)/b ⟹ t = O(1)
A.4 OMITTED PROOFS FOR SECTION 4
Lemma A.4. Let X be some set of size n and let f : X → {0, 1} be a random binary function. It holds w.h.p that there exists no function g : X → {0, 1} such that K(g | X) = o(n) and g agrees with f on n(1/2 + Θ(1)) elements in X .
Proof. Let us assume that g agrees with f on all except εn elements in X and bound ε. Using Theorem 2.3, it holds w.h.p. that K(f | X) > n − O(log n). We show that if ε is sufficiently far from 1/2, we can use g to compress f below its Kolmogorov complexity, arriving at a contradiction.

We can construct f using g and the set of values on which they do not agree, which we denote by D. This set is of size εn and therefore, given X, can be encoded using log C(n, εn) ≤ n·h(ε) bits (recall that ∀ 0 ≤ k ≤ n, log C(n, k) ≤ n·h(k/n)); that is, K(D | X) ≤ n·h(ε). To compute f(x) using D, g we simply check if x ∈ D and output g(x) or 1 − g(x) accordingly. The total number of bits required for the above is K(g, D | X) ≤ o(n) + n·h(ε) (where auxiliary variables are subsumed in the o(n) term). We conclude that K(f | X) ≤ o(n) + n·h(ε). Combining the upper and lower bounds on K(f | X), it must hold that o(n) + n·h(ε) ≥ n − O(log n) ⟹ h(ε) ≥ 1 − o(1). This inequality only holds when ε = 1/2 + o(1).
Lemma A.5. It holds that 1 − n(1 − λ_{i,j})/((j − 1)b) ≤ λ′_{i,j} ≤ nλ_{i,j}/((j − 1)b).
Proof. We can write the following for j ∈ [2, n/b + 1]:

nλ_{i,j} = Σ_{x∈X} acc(W_{i,j}, x) = Σ_{x∈X_{i,j−1}} acc(W_{i,j}, x) + Σ_{x∈X\X_{i,j−1}} acc(W_{i,j}, x)
= (j − 1)b·λ′_{i,j} + (n − (j − 1)b)·λ′′_{i,j}
⟹ λ′_{i,j} = [nλ_{i,j} − (n − (j − 1)b)λ′′_{i,j}] / ((j − 1)b)

Setting λ′′_{i,j} = 0 we get

λ′_{i,j} = [nλ_{i,j} − (n − (j − 1)b)λ′′_{i,j}] / ((j − 1)b) ≤ nλ_{i,j}/((j − 1)b)

And setting λ′′_{i,j} = 1 we get

λ′_{i,j} = [nλ_{i,j} − (n − (j − 1)b)λ′′_{i,j}] / ((j − 1)b) ≥ 1 − n(1 − λ_{i,j})/((j − 1)b)
B EXPERIMENTS
Experimental setup We perform experiments on the MNIST dataset and on the same dataset with random labels (MNIST-RAND). We use SGD with learning rate 0.01, without momentum or regularization. We use a simple fully connected architecture with a single hidden layer, GELU activation units (a differentiable alternative to ReLU) and cross-entropy loss. We run experiments with a hidden layer of size 2, 5, 10. We consider batches of size 50, 100, 200. For each of the datasets we run experiments for all configurations of architecture sizes and batch sizes for 300 epochs.
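The following condensed PyTorch sketch shows the kind of model and per-epoch bookkeeping the setup describes (single hidden layer, GELU, cross-entropy, SGD with learning rate 0.01). Data loading and logging are omitted; x and y are assumed to be pre-loaded MNIST tensors, and the KL helper and variable names are illustrative rather than the actual experiment code.

```python
import math
import torch, torch.nn as nn

def bern_kl(p, q, eps=1e-12):
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def run_epoch(model, opt, loss_fn, x, y, b):
    """One SGD epoch over a random permutation of (x, y); returns the epoch's accuracy discrepancy."""
    perm = torch.randperm(len(x))
    rho = 0.0
    for j in range(0, len(x), b):
        xb, yb = x[perm[j:j + b]], y[perm[j:j + b]]
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
        seen = perm[:j + b]                       # all elements seen so far, after this step
        phi = accuracy(model, xb, yb)             # accuracy on the batch just trained on
        lam = accuracy(model, x[seen], y[seen])   # accuracy on all previously seen elements
        rho += bern_kl(lam, phi)
    return rho

hidden, b = 10, 100   # one of the described configurations
model = nn.Sequential(nn.Flatten(), nn.Linear(784, hidden), nn.GELU(), nn.Linear(hidden, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
```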
Results Figure 2 and Figure 3 show the accuracy discrepancy and accuracy over epochs for all configurations for MNIST and MNIST-RAND, respectively. Figure 4 and Figure 5 show, for every batch size, the accuracy discrepancy of all three model sizes on the same plot. All of the values displayed are averaged over epochs, i.e., the value for epoch t is (1/t)Σ_{i=1}^t x_i.
First, we indeed observe that the scale of the accuracy discrepancy is inversely quadratic in the batch size, as our analysis suggests. Second, for MNIST-RAND we can clearly see that the average accuracy discrepancy tends below a certain threshold over time, where the threshold appears to be independent of the number of model parameters. We see similar results for MNIST when the model is small, but not when it is large. This is because the model does not reach its capacity within the timeframe of our experiment. | 1. What is the focus of the paper regarding SGD and GD?
2. What are the strengths of the proposed approach, particularly in terms of analyzing entropy compression?
3. What are the weaknesses of the paper, especially regarding the assumptions made?
4. Do you have any concerns about the bound implications for batch size and practicality?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proves that SGD fits the training data whenever the average accuracy discrepancy over epochs, defined as the sum of accuracy improvement over batches, is large enough. Using a similar idea, this paper also proves GD needs a certain amount of randomness to escape local minima efficiently.
Strengths And Weaknesses
Strength:
The idea of using entropy compression to analyze SGD is novel and interesting.
Weaknesses:
The assumption of accuracy discrepancy isn't natural: unlike other "prior" assumptions, this assumption involves the process of optimization. The assumption feels designed especially to enable the entropy compression tool.
The bound O(n log n / b²) implies b = Ω(n) because ρ_i is bounded by a constant, likely log 2 in practice for this binary classification problem. I wonder whether a batch size as large as Ω(n) is practical, or already implies good convergence results itself.
Corollary 3.3 uses a stronger assumption than what's actually needed: it seems to be this only needs to hold on average.
The setting of section 4 seems artificial and I don't see any close relationship to the theme of section 3, except the common tool they use.
Clarity, Quality, Novelty And Reproducibility
Writing of this paper needs improvement. Many mathematical statements seem casual and the structure of the paper isn't clear (for example, should there be a formal statement of the main theorem in section 3?), let alone typos. The presentation of this paper isn't up to the standards an ML audience expects for this conference.
ICLR | Title
Learning when to Communicate at Scale in Multiagent Cooperative and Competitive Tasks
Abstract
Learning when to communicate and doing so effectively is essential in multi-agent tasks. Recent works show that continuous communication allows efficient training with back-propagation in multi-agent scenarios, but have been restricted to fully-cooperative tasks. In this paper, we present the Individualized Controlled Continuous Communication Model (IC3Net), which has better training efficiency than a simple continuous communication model, and can be applied to semi-cooperative and competitive settings along with cooperative settings. IC3Net controls continuous communication with a gating mechanism and uses individualized rewards for each agent to gain better performance and scalability while fixing credit assignment issues. Using a variety of tasks including StarCraft BroodWars™ explore and combat scenarios, we show that our network yields improved performance and convergence rates over the baselines as the scale increases. Our results convey that IC3Net agents learn when to communicate based on the scenario and profitability.
1 INTRODUCTION
Communication is an essential element of intelligence as it helps in learning from others' experience, working better in teams and passing down knowledge. In multi-agent settings, communication allows agents to cooperate towards common goals. Particularly in partially observable environments, when the agents are observing different parts of the environment, they can share information and learnings from their observations through communication.
Recently, there has been a lot of success in the field of reinforcement learning (RL), from playing Atari games (Mnih et al., 2015) to playing Go (Silver et al., 2016), most of which has been limited to the single-agent domain. However, the number of systems and applications having multiple agents has been growing (Lazaridou et al., 2017; Mordatch & Abbeel, 2018), with sizes ranging from a team of robots working in manufacturing plants to a network of self-driving cars. Thus, it is crucial to successfully scale RL to multi-agent environments in order to build intelligent systems capable of higher productivity. Furthermore, scenarios other than cooperative, namely semi-cooperative (or mixed) and competitive scenarios, have not been studied as extensively for multi-agent systems.
The mixed scenarios can be compared to most of the real life scenarios as humans are cooperative but not fully-cooperative in nature. Humans work towards their individual goals while cooperating with each other. In competitive scenarios, agents are essentially competing with each other for better rewards. In real life, humans always have an option to communicate but can choose when to actually communicate. For example, in a sports match two teams which can communicate, can choose to not communicate at all (to prevent sharing strategies) or use dishonest signaling (to misdirect opponents) (Lehman et al., 2018) in order to optimize their own reward and handicap opponents; making it important to learn when to communicate. ∗Equal contribution. †Current affiliation. This work was completed when authors were at New York University.
Teaching agents how to communicate makes it unnecessary to hand-code the communication protocol with expert knowledge (Sukhbaatar et al., 2016; Kottur et al., 2017). While the content of communication is important, it is also important to know when to communicate, either to increase scalability and performance or to increase competitive edge. For example, a prey needs to learn when to communicate to avoid communicating its location to predators.
Sukhbaatar et al. (2016) showed that agents communicating through a continuous vector are easier to train and have a higher information throughput than communication based on discrete symbols. Their continuous communication is differentiable, so it can be trained efficiently with back-propagation. However, their model assumes full cooperation between agents and uses average global rewards. This prevents the model from being used in mixed or competitive scenarios, as full cooperation involves sharing hidden states with everyone; exposing everything leads to poor performance by all agents, as shown by our results. Furthermore, the average global reward for all agents makes the credit assignment problem even harder and difficult to scale, as agents don't know their individual contributions in mixed or competitive scenarios where they want themselves to succeed before others.
To solve above mentioned issues, we make the following contributions:
1. We propose Individualized Controlled Continuous Communication Model (IC3Net), in which each agent is trained with its individualized reward and can be applied to any scenario whether cooperative or not.
2. We empirically show that based on the given scenario–using the gating mechanism–our model can learn when to communicate. The gating mechanism allows agents to block their communication; which is useful in competitive scenarios.
3. We conduct experiments on different scales in three chosen environments including StarCraft and show that IC3Net outperforms the baselines with performance gaps that increase with scale. The results show that individual rewards converge faster and better than global rewards.
2 RELATED WORK
The simplest approach in multi-agent reinforcement learning (MARL) settings is to use an independent controller for each agent. This was attempted with Q-learning in Tan (1993). However, in practice it performs poorly (Matignon et al., 2012), which we also show in comparison with our model. The major issue with this approach is that due to multiple agents, the stationarity of the environment is lost and naïve application of experience replay doesn’t work well.
The nature of interaction between agents can either be cooperative, competitive, or a mix of both. Most algorithms are designed only for a particular nature of interaction, mainly cooperative settings (Omidshafiei et al., 2017; Lauer & Riedmiller, 2000; Matignon et al., 2007), with strategies which indirectly arrive at cooperation via sharing policy parameters (Gupta et al., 2017). These algorithms are generally not applicable in competitive or mixed settings. See Busoniu et al. (2008) for survey of MARL in general and Panait & Luke (2005) for survey of cooperative multi-agent learning.
Our work can be considered as an all-scenario extension of Sukhbaatar et al. (2016)’s CommNet for collaboration among multiple agents using continuous communication; usable only in cooperative settings as stated in their work and shown by our experiments. Due to continuous communication, the controller can be learned via backpropagation. However, this model is restricted to fully cooperative tasks as hidden states are fully communicated to others which exposes everything about agent. On the other hand, due to global reward for all agents, CommNet also suffers from credit assignment issue.
The Multi-Agent Deep Deterministic Policy Gradient (MADDPG) model presented by Lowe et al. (2017) also tries to achieve similar goals. However, they differ in the way of providing the coordination signal. In their case, there is no direct communication among agents (actors with different policy per agent), instead a different centralized critic per agent – which can access the actions of all the agents – provides the signal. Concurrently, a similar model using centralized critic and decentralized actors with additional counterfactual reward, COMA by Foerster et al. (2018) was proposed to tackle the challenge of multiagent credit assignment by letting agents know their individual contributions.
Vertex Attention Interaction Networks (VAIN) (Hoshen, 2017) also models multi-agent communication through the use of Interaction Networks (Battaglia et al., 2016) with attention mechanism (Bahdanau et al., 2015) for predictive modelling using supervised settings. The work by Foerster
et al. (2016b) also learns a communication protocol where agents communicate in a discrete manner through their actions. This contrasts with our model where multiple continuous communication cycles can be used at each time step to decide the actions of all agents. Furthermore, our approach is amenable to dynamic number of agents. Peng et al. (2017) also attempts to solve micromanagement tasks in StarCraft using communication. However, they have non-symmetric addition of agents in communication channel and are restricted to only cooperative scenarios.
In contrast, a lot of work has focused on understanding agents’ communication content; mostly in discrete settings with two agents (Wang et al., 2016; Havrylov & Titov, 2017; Kottur et al., 2017; Lazaridou et al., 2017; Lee et al., 2018). Lazaridou et al. (2017) showed that given two neural network agents and a referential game, the agents learn to coordinate. Havrylov & Titov (2017) extended this by grounding communication protocol to a symbols’s sequence while Kottur et al. (2017) showed that this language can be made more human-like by placing certain restrictions. Lee et al. (2018) demonstrated that agents speaking different languages can learn to translate in referential games.
3 MODEL
In this section, we introduce our model Individualized Controlled Continuous Communication Model (IC3Net) as shown in Figure 1 to work in multi-agent cooperative, competitive and mixed settings where agents learn what to communicate as well as when to communicate.
First, let us describe an independent controller model where each agent is controlled by an individual LSTM. For the j-th agent, its policy takes the form of:
h_j^{t+1}, s_j^{t+1} = LSTM(e(o_j^t), h_j^t, s_j^t)
a_j^t = π(h_j^t),

where o_j^t is the observation of the j-th agent at time t, e(·) is an encoder function parameterized by a fully-connected neural network and π is an agent's action policy. Also, h_j^t and s_j^t are the hidden and cell states of the LSTM. We use the same LSTM model for all agents, sharing their parameters. This way, the model is invariant to permutations of the agents.
IC3Net extends this independent controller model by allowing agents to communicate their internal state, gated by a discrete action. The policy of the j-th agent in a IC3Net is given by
g_j^{t+1} = f^g(h_j^t)
h_j^{t+1}, s_j^{t+1} = LSTM(e(o_j^t) + c_j^t, h_j^t, s_j^t)
c_j^{t+1} = (1/(J − 1)) · C Σ_{j′≠j} h_{j′}^{t+1} g_{j′}^{t+1}
a_j^t = π(h_j^t),

where c_j^t is the communication vector for the j-th agent, C is a linear transformation matrix for transforming the gated average hidden state into a communication tensor, J is the number of alive agents currently present in the system and f^g(·) is a simple network containing a soft-max layer for 2 actions (communicate or not) on top of a linear layer with non-linearity. The binary action g_j^t specifies whether agent j wants to communicate with others, and acts as a gating function when calculating the communication vector. Note that the gating action for the next time-step is calculated at the current time-step. We train both the action policy π and the gating function f^g with REINFORCE (Williams, 1992).
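For concreteness, below is a minimal PyTorch sketch of one IC3Net step for J agents with shared parameters. Layer sizes, the exact placement of the policy and gating heads relative to the LSTM update, and the sampling of the binary gate are illustrative simplifications rather than the authors' released implementation.

```python
import torch, torch.nn as nn

class IC3NetStep(nn.Module):
    """One communication + action step, shared across all J agents."""
    def __init__(self, obs_dim, hid=128, n_actions=5):
        super().__init__()
        self.enc = nn.Linear(obs_dim, hid)                           # e(.)
        self.lstm = nn.LSTMCell(hid, hid)
        self.C = nn.Linear(hid, hid, bias=False)                     # communication transform C
        self.gate = nn.Sequential(nn.Linear(hid, hid), nn.Tanh(), nn.Linear(hid, 2))  # f^g
        self.pi = nn.Linear(hid, n_actions)                          # action policy head

    def forward(self, obs, h, s, g):
        # obs: (J, obs_dim); h, s: (J, hid); g: (J,) binary gates decided at the previous step
        J = obs.size(0)
        gated = h * g.unsqueeze(1)                                   # silent agents contribute nothing
        comm = (gated.sum(0, keepdim=True) - gated) / max(J - 1, 1)  # average over the *other* agents
        comm = self.C(comm)                                          # c_j
        h, s = self.lstm(self.enc(obs) + comm, (h, s))
        act_logits = self.pi(h)                                      # logits for pi(h_j)
        gate_logits = self.gate(h)                                   # gating action for the next step
        g_next = torch.distributions.Categorical(logits=gate_logits).sample().float()
        return act_logits, g_next, h, s
```

During training, the log-probabilities of both the sampled environment action and the sampled gate would feed into the REINFORCE objective described above.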
In Sukhbaatar et al. (2016), individual networks controlling agents were interconnected, and they as a whole were considered as a single big neural network. This single big network controller approach required a definition of an unified loss function during training, thus making it impossible to train agents with different rewards.
In this work, however, we move away from the single big network controller approach. Instead, we consider multiple big networks with shared parameters each controlling a single agent separately. Each big network consists of multiple LSTM networks, each processing an observation of a single agent. However, only one of the LSTMs need to output an action because the big network is only controlling a single agent. Although this view has a little effect on the implementation (we can still use a single big network in practice), it allows us to train each agent to maximize its individual reward instead of a single global reward. This has two benefits: (i) it allows the model to be applied to both cooperative and competitive scenarios, (ii) it also helps resolve the credit assignment issue faced by many multi-agent (Sukhbaatar et al., 2016; Foerster et al., 2016a) algorithms while improving performance with scalability and is coherent with the findings in Chang et al. (2003).
4 EXPERIMENTS1
We study our network in multi-agent cooperative, mixed and competitive scenarios to understand its workings. We perform experiments to answer following questions:
1. Can our network learn the gating mechanism to communicate only when needed according to the given scenario? Essentially, is it possible to learn when to communicate?
2. Does our network using individual rewards scales better and faster than the baselines? This would clarify, whether or not, individual rewards perform better than global rewards in multi-agent communication based settings.
We first analyze gating action’s (gt) working. Later, we train our network in three chosen environments with variations in difficulty and coordination to ensure scalability and performance.
4.1 ENVIRONMENTS
We consider three environments for our analysis and experiments. (i) a predator-prey environment (PP) where predators with limited vision look for a prey on a square grid. (ii) a traffic junction environment (TJ) similar to Sukhbaatar et al. (2016) where agents with limited vision must learn to communicate in order to avoid collisions. (iii) StarCraft BroodWars2 (SC) explore and combat
1The code is available at https://github.com/IC3Net/IC3Net. 2StarCraft is a trademark or registered trademark of Blizzard Entertainment, Inc., in the U.S. and/or other countries. Nothing in this paper should be construed as approval, endorsement, or sponsorship by Blizzard Entertainment, Inc.
tasks which test control on multiple agents in various scenarios where agent needs to understand and decouple observations for multiple opposing units.
4.1.1 PREDATOR PREY
In this task, we have n predators (agents) with limited vision trying to find a stationary prey. Once a predator reaches a prey, it stays there and always gets a positive reward until the end of the episode (when the rest of the predators reach the prey, or the maximum number of steps is reached). In case of zero vision, agents don't have a direct way of knowing the prey's location unless they jump on it.
We design three cooperation settings (competitive, mixed and cooperative) for this task with different reward structures to test our network. See Appendix 6.3 for details on grid, reward structure, observation and action space. There is no loss or benefit from communicating in mixed scenario. In competitive setting, agents get lower rewards if other agents reach the prey and in cooperative setting, reward increases as more agents reach the prey. We compare with baselines using mixed settings in subsection 4.3.2 while explicitly learning and analyzing gating action’s working in subsection 4.2.
We create three levels for this environment – as mentioned in Appendix 6.3 – to compare our network’s performance with increasing number of agents and grid size. 10×10 grid version with 5 agents is shown in Figure 2 (left). All agents are randomly placed in the grid at start of an episode.
4.1.2 TRAFFIC JUNCTION
Following Sukhbaatar et al. (2016), we test our model on the traffic junction task as it is a good proxy for testing whether communication is working. This task also helps in supporting our claim that IC3Net provides good performance and faster convergence in fully-cooperative scenarios similar to mixed ones. In the traffic junction, cars enter a junction from all entry points with a probability parr. The maximum number of cars at any given time in the junction is limited. Cars can take two actions at each time-step, gas and brake respectively. The task has three difficulty levels (see Figure 2) which vary in the number of possible routes, entry points and junctions. We make this task harder by always setting vision to zero in all the three difficulty levels to ensure that task is not solvable without communication. See Appendix 6.4 for details on reward structure, observation and training.
4.1.3 STARCRAFT: BROODWARS
To fully understand the scalability of our architecture in more realistic and complex scenarios, we test it on StarCraft combat and exploration micro-management tasks in partially observable settings. StarCraft is a challenging environment for RL because it has a large observation-action space, many different unit types and stochasticity. We train our network on Combat and Explore task. The task’s difficulty can be altered by changing the number of our units, enemy units and the map size.
By default, the game has macro-actions which allow a player to directly target an enemy unit, making the player's unit find the best possible path using the game's in-built path-finding system, move towards the target and attack when it is in range. However, we make the task harder by (i) removing macro-actions, making exploration harder, (ii) limiting vision, making the environment partially observable, and (iii) unlike previous works (Wender & Watson, 2012; Ontanón et al., 2013; Usunier et al., 2017; Peng et al., 2017), initializing enemy and our units at random locations in a fixed-size square on the map, which makes it challenging to find enemy units. Refer to Appendix 6.5.1 for reward, action, observation and task details. We consider two types of tasks in StarCraft:
Explore: In this task, we have n agents trying to explore the map and find an enemy unit. This is a direct scale-up of the PP but with more realistic and stochastic situations.
Combat: We test an agent’s capability to execute a complex task of combat in StarCraft which require coordination between teammates, exploration of a terrain, understanding of enemy units and formalism of complex strategies. We specifically test a team of n agents trying to find and kill a team of m enemies in a partially observable environment similar to the explore task. The agents, with their limited vision, must find the enemy units and kill all of them to score a win. More information on reward structure, observation and setup can be found in Appendix 6.5.1 and 6.5.2.
4.2 ANALYSIS OF THE GATING MECHANISM
We analyze the working of the gating action (g^t) in IC3Net by using cooperative, competitive and mixed settings in Predator-Prey (4.1.1) and StarCraft explore tasks (4.1.3). However, this time the enemy unit (prey) shares parameters with the predators and is trained with them. All of the enemy unit's actions are noop, which makes it stationary. The enemy unit gets a positive reward equivalent to r_time = 0.05 per timestep as long as no predator/medic has captured it; after that it gets a reward of 0.
For 5×5 grid in PP task, Figure 3 shows gating action (averaged per epoch) in all scenarios for (i) communication between predator and (ii) communication between prey and predators. We also test
on a 50×50 map size for the competitive and cooperative StarCraft explore task and found similar results (Fig. 3d). We can deduce the following observations:
• As can be observed in Figure 3a, 3b, 3c and 3d, in all the four cases, the prey learns not to communicate. If the prey communicates, predators will reach it faster. Since it will get 0 reward when an agent comes near or on top of it, it doesn’t communicate to achieve higher rewards. • In cooperative setting (Figure 3a, 3e), the predators are openly communicating with g close to
1. Even though the prey communicates with the predators at the start, it eventually learns not to communicate; so as not to share its location. As all agents are communicating in this setting, it takes more training time to adjust prey’s weights towards silence. Our preliminary tests suggest that in cooperative settings, it is beneficial to fix the gating action to 1.0 as communication is almost always needed and it helps in faster training by skipping the need to train the gating action. • In the mixed setting (Figure 3b), agents don’t always communicate which corresponds to the fact
that there is no benefit or loss by communicating in mixed scenario. The prey is easily able to learn not to communicate as the weights for predators are also adjusted towards non-cooperation from the start itself. • As expected due to competition, predators rarely communicate in competitive setting (Figure 3c,
3d). Note that, this setting is not fully-adversarial as predators can initially explore faster if they communicate which can eventually lead to overall higher rewards. This can be observed as the agents only communicate while it’s profitable for them, i.e. before reaching the prey (Figure 3f)) as communicating afterwards can impact their future rewards.
Experiments in this section, empirically suggest that agents can “learn to communicate when it is profitable”; thus allowing same network to be used in all settings.
4.3 SCALABILITY AND GENERALIZATION EXPERIMENTS
In this section, we look at bigger versions of our environments to understand scalability and generalization aspects of IC3Net.
4.3.1 BASELINES
For training details, refer to Appendix 6.1. We compare IC3Net with baselines specified below in all scenarios.
Individual Reward Independent Controller (IRIC): In this controller, model is applied individually to all of the agents’ observations to produce the action to be taken. Essentially, this can be seen as IC3Net without any communication between agents; but with individualized reward for each agent. Note that no communication makes gating action (gt) ineffective.
Independent Controller (IC - IC3Net w/o Comm and IR): Like IRIC except the agents are trained with a global average reward instead of individual rewards. This will help us understand the credit assignment issue prevalent in CommNet.
CommNet: Introduced in Sukhbaatar et al. (2016), CommNet allows communication between agents over a channel where an agent is provided with the average of hidden state representations of other agents as a communication signal. Like IC3Net, CommNet also uses continuous signals to communicate between the agents. Thus, CommNet can be considered as IC3Net without both the gating action (gt) and individualized rewards.
4.3.2 RESULTS
We discuss major results for our experiments in this section and analyze particular behaviors/patterns of agents in Appendix 6.2.
Predator Prey: Table 1 (left) shows average steps taken by the models to complete an episode i.e. find the prey in mixed setting (we found similar results for cooperative setting shown in appendix). IC3Net reaches prey faster than the baselines as we increase the number of agents as well as the size of the maze. In 20×20 version, the gap in average steps is almost 24 steps, which is a substantial improvement over baselines. Figure 4 (right) shows the scalability graph for IC3Net and CommNet which supports the claim that with the increasing number of agents, IC3Net converges faster at a
better optimum than CommNet. Through these results on the PP task, we can see that compared to IC3Net, CommNet doesn’t work well in mixed scenarios. Finally, Figure 4 (left) shows the training plot of 20×20 grid with 10 agents trying to find a prey. The plot clearly shows the faster performance improvement of IC3Net in contrast to CommNet which takes long time to achieve a minor jump. We also find same pattern of the gating action values as in 4.2.
Traffic Junction: Table 1 (right) shows the success ratio for traffic junction. We fixed the gating action to 1 for TJ as discussed in 4.2. With zero vision, it is not possible to perform well without communication, as evident from the results of IRIC and IC. Interestingly, IC performs better than IRIC in the hard case; we believe that without communication, the global reward in TJ acts as a better indicator of the overall performance. On the other hand, with communication and better knowledge of others, training with the global reward faces a credit assignment issue, which is alleviated by IC3Net as evident from its superior performance compared to CommNet. In Sukhbaatar et al. (2016), well-performing agents in the medium and hard versions had vision > 0. With zero vision, IC3Net is superior to CommNet and IRIC with a performance gap greater than 30%. This verifies that individualized rewards in IC3Net help achieve a better or similar performance than CommNet in fully-cooperative tasks with communication, due to better credit assignment.
StarCraft: Table 2 displays win % and the average number of steps taken to complete an episode in StarCraft explore and combat tasks. We specifically test on (i) Explore task: 10 medics finding 1 enemy medic on 50×50 cell grid (ii) On 75×75 cell grid (iii) Combat task: 10 Marines vs 3 Zealots on 50 x 50 cell grid. Maximum steps in an episode are set to 60. The results on the explore task are similar to Predator-Prey as IC3Net outperforms the baselines. Moving to a bigger map size, we still see the performance gap even though performance drops for all the models.
On the combat task, IC3Net performs comparably to CommNet. A detailed analysis of IC3Net's performance in StarCraft tasks is provided in Appendix 6.2.1. To confirm that 10 marines vs 3 zealots is hard to win, we run an experiment on the reverse scenario where our agents control 3 Zealots initialized separately and the enemies are 10 marines initialized together. We find that both IRIC and IC3Net easily reach a success percentage of 100%. We find that even in this case, IC3Net converges faster than IRIC.
5 CONCLUSIONS AND FUTURE WORK
In this work, we introduced IC3Net which aims to solve multi-agent tasks in various cooperation settings by learning when to communicate. Its continuous communication enables efficient training by backpropagation, while the discrete gating trained by reinforcement learning along with individual rewards allows it to be used in all scenarios and on larger scale.
Through our experiments, we show that IC3Net performs well in cooperative, mixed or competitive settings and learns to communicate only when necessary. Further, we show that agents learn to stop communication in competitive cases. We show scalability of our network by further experiments. In future, we would like to explore possibility of having multi-channel communication where agents can decide on which channel they want to put their information similar to communication groups but dynamic. It would be interesting to provide agents a choice of whether to listen to communication from a channel or not.
Acknowledgements Authors would like to thank Zeming Lin for his consistent support and suggestions around StarCraft and TorchCraft.
6 APPENDIX
6.1 TRAINING DETAILS
We set the hidden layer size to 128 units and use an LSTM (Hochreiter & Schmidhuber, 1997) with recurrence for all of the baselines and IC3Net. We use RMSProp (Tieleman & Hinton, 2012) with the initial learning rate as a tuned hyper-parameter. All of the models use skip-connections (He et al., 2016). The training is distributed over 16 cores and each core runs a mini-batch till the total episode steps are 500 or more. We do 10 weight updates per epoch. We run the predator-prey and StarCraft experiments for 1000 epochs and the traffic junction experiment for 2000 epochs, and report the final results. In the mixed case, we report the mean score of all agents, while in the cooperative case we report any agent's score as they are the same. We implement our model using PyTorch and the environments using Gym (Brockman et al., 2016). We use REINFORCE (Williams, 1992) to train our setup. We conduct 5 runs on each of the tasks to compile our results. The training time for different tasks varies; StarCraft tasks usually take more than a day (depending on the number of agents and enemies), while predator-prey and traffic junction tasks complete under 12 hours.
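A schematic REINFORCE update consistent with the description above, using individual per-agent returns and a simple mean baseline; this is a generic sketch (the actual training is distributed over 16 cores and uses RMSProp), and all names are illustrative.

```python
import torch

def reinforce_loss(act_log_probs, gate_log_probs, rewards, gamma=1.0):
    """act_log_probs, gate_log_probs, rewards: lists of (J,) tensors, one entry per time-step of an episode."""
    returns, R = [], torch.zeros_like(rewards[0])
    for r in reversed(rewards):
        R = r + gamma * R                                    # individual return per agent
        returns.append(R)
    returns = torch.stack(list(reversed(returns)))           # (T, J)
    adv = returns - returns.mean()                           # simple scalar baseline
    logp = torch.stack(act_log_probs) + torch.stack(gate_log_probs)  # both heads trained with REINFORCE
    return -(adv.detach() * logp).mean()
```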
6.2 RESULTS ANALYSIS
In this section, we analyze and discuss behaviors/patterns in the results on our experiments.
6.2.1 IC3NET IN STARCRAFT-COMBAT TASK
As observed in Table 2, IC3Net performs better than CommNet in the explore task but doesn't outperform it on the Combat task. Our experiments and visualizations of the actual strategy suggested that compared to exploration, combat can be solved far more easily if the units learn to stay together. Focused firepower with more attack quantity in general yields quite good results on combat. We verify this hypothesis by running a heuristic baseline "attack closest" in which agents have full vision of the map and have macro actions available3. By attacking the closest available enemy together, the agents are able to kill the zealots with a success ratio of 76.6 ± 8 calculated over 5 runs, even though initialized separately. Also, as described in Appendix 6.5.2, the global reward in case of a win in the Combat task is relatively huge compared to the individual rewards for killing other units. We believe that the coordination to stay together, huge global rewards and focused fire–which is achievable through simple cooperation–add up to CommNet's performance in this task.
Further, in exploration we have seen that agents go in separate directions and have individual rewards/sense of exploration, which usually leads to faster exploration of an unexplored area. Thinking in simple terms, exploration of a house would be faster if different people handle different rooms. Achieving this is hard in CommNet because global rewards don't exactly tell your individual contributions if you had explored separately. Also in CommNet, we have observed that agents follow a pattern where they get together at a point and explore together from that point, which further signals that with CommNet it is easy for agents to get together4.
6.2.2 VARIANCE IN IC3NET
In Figure 5, we have observed significant variance in IC3Net results for StarCraft. We performed a lot of experiments on StarCraft and can attribute the significant variance to stochasticity in the environment. There are a huge number of possible states in which agents can end up due to millions of possible interactions and their results in StarCraft. We believe it is hard to learn each one of them. This stochasticity variance can even be seen in simple heuristics baselines like “attack closest” (6.2.1) and is in-fact an indicator of how difficult is it to learn real-world scenarios which also have
3Macro-actions corresponds to “right click” feature in StarCraft and Dota in which a unit can be called to attack on other unit where units follows the shortest path on map towards the unit to be attacked and once reached starts attacking automatically, this essentially overpowers “attack closest” baseline to easily attack anyone under full-vision without any exploration.
4You can observe the above stated pattern for CommNet in PP in this video: https://gfycat.com/IllustriousMarvelousKagu. This video has been generated using trained CommNet model on PP-Hard. Here Red ‘X’ are predators and ‘P’ is the prey to be found. We can observe the pattern where the agents get together to find the prey leading to slack eventually
same amount of stochasticity. We believe that we don’t see similar variance in CommNet and other baselines because adding gating action increases the action-state-space combinations which yields better results while being difficult to learn sometimes. Further, this variance is only observed in higher Win % models which requires to learn more state spaces.
6.2.3 COMMNET IN STARCRAFT-EXPLORE TASKS
In Table 2, we can observe that CommNet performs worse than IRIC and IC in case of the StarCraft-Explore task. In this section, we provide a hypothesis for this result. First, notice that IRIC is also better than IC overall, which points to the fact that individualized rewards are better than global rewards in case of exploration. This makes sense because if agents cover more area and know how much they covered through their own contribution (individual reward), it should lead to overall more coverage, compared to global rewards where agents can't figure out their own coverage but only the overall one. Second, in case of CommNet, it is easy to communicate and get together. We observe this pattern in CommNet4 where agents first get together at a point and then start exploring from there, which leads to slow exploration; IC is better in this respect because it is hard to gather at a single point, which inherently leads to faster exploration than CommNet. Third, the reward structure in the case of the mixed scenario doesn't appreciate searching together, which is not directly visible to CommNet and IC due to global rewards.
6.3 DETAILS OF PREDATOR PREY
In all the three settings, cooperative, competitive and mixed, a predator agent gets a constant time-step penalty r_explore = −0.05 until it reaches the prey. This makes sure that the agent doesn't slack in finding the prey. In the mixed setting, once an agent reaches the prey, the agent always gets a positive reward r_prey = 0.05 which doesn't depend on the number of agents on the prey. Similarly, in the cooperative setting, an agent gets a positive reward of r_coop = r_prey * n, and in the competitive setting, an agent gets a positive reward of r_comp = r_prey / n after it reaches the prey, where n is the number of agents on the prey. The total reward at time t for an agent i can be written as:
r_i^{pp}(t) = δ_i · r_explore + (1 − δ_i) · n_t^λ · r_prey · |λ|
where δ_i denotes whether agent i has found the prey or not, n_t is the number of agents on the prey at time-step t and λ is −1, 0 and 1 in the competitive, mixed and cooperative scenarios respectively. Maximum episode steps are set to 20, 40 and 80 for 5×5, 10×10 and 20×20 grids respectively. The number of predators is 5, 10 and 20 in the 5×5, 10×10 and 20×20 grids respectively. Each predator can take one of the five basic movement actions, i.e., up, down, left, right or stay. Predator, prey and all locations on the grid are considered unique classes in the vocabulary and are represented as one-hot binary vectors. The observation obs at each point will be the sum of all one-hot binary vectors of the location, predators and prey present at that point. With a vision of 1, the observation of each agent has dimension 3² × |obs|.
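A small Python helper mirroring the verbal reward description above (cooperative: r_prey·n, mixed: r_prey, competitive: r_prey/n); the function name and signature are illustrative, not from the released environment code.

```python
def pp_reward(found_prey, n_on_prey, mode, r_explore=-0.05, r_prey=0.05):
    """Per-step reward for one predator in the Predator-Prey task."""
    if not found_prey:
        return r_explore                      # constant time-step penalty while searching
    lam = {'cooperative': 1, 'mixed': 0, 'competitive': -1}[mode]
    return r_prey * (n_on_prey ** lam)        # n^lambda scaling once on the prey

# With three predators on the prey:
# cooperative -> 0.15, mixed -> 0.05, competitive -> 0.05 / 3
```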
6.3.1 EXTRA EXPERIMENTS
Table 3 shows the results for IC3Net and baselines in the cooperative scenario for the predator-prey environment. As the cooperative reward function provides more reward after a predator reaches the prey, the comparison is provided for rewards instead of average number of steps. IC3Net performs better or equal to CommNet and other baselines in all three difficulty levels. The performance gap closes in and increases as we move towards bigger grids which shows that IC3Net is more scalable
due to individualized rewards. More importantly, even with the extra gating action training, IC3Net can perform comparably to CommNet which is designed for cooperative scenarios which suggests that IC3Net is a suitable choice for all cooperation settings.
To analyze the effect of the gating action on rewards in the mixed scenario, where individualized rewards alone can help a lot, we test the Predator-Prey mixed cooperation setting on a 20x20 grid with a baseline in which we set the gating action to 1 (global communication) and use individual rewards (IC2Net/CommNet + IR). We find the average max steps to be 50.24 ± 3.4, which is lower than IC3Net. This means that (i) individualized rewards help a lot in mixed scenarios by allowing agents to understand their contributions, and (ii) adding the gating action in this case has an overhead but allows the same model to work in all settings (even competitive) by "learning to communicate", which is closer to real-world humans, with a negligible hit on performance.
6.4 DETAILS OF TRAFFIC JUNCTION
The traffic junction's observation vocabulary has one-hot vectors for all locations in the grid and for the car class. Each agent observes its previous action, its route identifier and a vector specifying the sum of one-hot vectors for all classes present at that agent's location. A collision occurs when two cars are at the same location. We set the maximum number of steps to 20, 40 and 60 in the easy, medium and hard difficulties respectively. Similar to Sukhbaatar et al. (2016), we provide a negative reward rcoll = -10 on collision. To discourage traffic jams, we provide a negative reward r_time τ_i = -0.01 τ_i, where τ_i is the time spent by the agent in the junction up to time-step t. The reward for the i-th agent, which has C^t_i collisions at time-step t, can be written as:
r^tj_i(t) = r_coll · C^t_i + r_time · τ_i
We utilized curriculum learning (Bengio et al., 2009) to make the training process easier. The arrival probability p_arrive is kept at its start value for the first 250 epochs and is then linearly increased to its end value over the course of the 250th to 1250th epoch; a sketch of this schedule is given after the difficulty descriptions below. The start and end values of p_arrive for the different difficulty levels are given in Table 4. Finally, training continues for another 750 epochs. The learning rate is fixed at 0.003 throughout. We implemented three difficulty variations of the game, explained as follows.
The easy version is a junction of two one-way roads on a 7 × 7 grid. There are two arrival points, each with two possible routes, and an Ntotal value of 5.
The medium version consists of two connected junctions of two-way roads on a 14 × 14 grid, as shown in Figure 2 (right). There are 4 arrival points and 3 different routes for each arrival point, with Ntotal = 20.
The hard version consists of four connected junctions of two-way roads on an 18 × 18 grid, as shown in Figure 6. There are 8 arrival points and 7 different routes for each arrival point, with Ntotal = 20.
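The curriculum on p_arrive described above amounts to a simple piecewise-linear schedule. The sketch below is our own illustration; the start/end values shown in the example calls are hypothetical placeholders, and the actual values are listed in Table 4.

```python
def p_arrive_at(epoch: int, start: float, end: float,
                hold_until: int = 250, ramp_until: int = 1250) -> float:
    """Piecewise-linear curriculum for the car arrival probability.

    Held at `start` for the first `hold_until` epochs, linearly increased to
    `end` by `ramp_until`, then held at `end` (training runs ~750 more epochs).
    """
    if epoch <= hold_until:
        return start
    if epoch >= ramp_until:
        return end
    frac = (epoch - hold_until) / (ramp_until - hold_until)
    return start + frac * (end - start)

# Hypothetical start/end values for illustration only; see Table 4 for the real ones.
print(p_arrive_at(100, 0.05, 0.20))   # 0.05 (still held at the start value)
print(p_arrive_at(750, 0.05, 0.20))   # 0.125 (half-way up the ramp)
```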
6.4.1 IRIC AND IC PERFORMANCE
In Table 1, we notice that IRIC and IC perform worse on the medium level than on the hard level. Our visualizations suggest that this is due to the higher final add-rate of the medium version compared to the hard version. Collisions happen much more often in the medium version, leading to a lower success rate (an episode is considered a failure if a collision happens), whereas in the hard version the initial add-rate is kept low to accommodate curriculum learning on its larger grid. The final add-rate for the hard level is comparatively low to make sure that it remains possible to pass a junction without a collision, since with more entry points it is easy to collide even at a small add-rate.
6.5 STARCRAFT DETAILS
6.5.1 OBSERVATION AND ACTIONS
Explore: To complete the explore task, agents must be within a particular range of the enemy unit, called the explore vision. Once an agent is within the explore vision of the enemy unit, its further actions are treated as no-ops. The reward structure is the same as in the PP task, with the only difference being that an agent needs to be within the explore vision range of the enemy unit, instead of at the same location, to get a non-negative reward. We use medic units, which don't attack enemy units. This ensures that we can simulate our explore task without any kind of combat happening and interfering with the goal of the task. The observation for each agent is its own (absolute x, absolute y) and the enemy's (relative x, relative y, visible), where visible, relative x and relative y are 0 when the enemy is not in explore vision range. Agents have 9 actions to choose from, which include the 8 basic directions and one stay action.
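As an illustration, the explore-task observation for one agent can be assembled as in the sketch below. This is our own reading of the description above; the function name, feature ordering and the use of Chebyshev distance for the visibility check are assumptions, and normalization is omitted.

```python
def explore_observation(agent_xy, enemy_xy, explore_vision):
    """Return [abs_x, abs_y, rel_x, rel_y, visible] for one agent.

    rel_x, rel_y and visible are zeroed when the enemy is outside the
    agent's explore-vision range, as described above.
    """
    ax, ay = agent_xy
    ex, ey = enemy_xy
    rel_x, rel_y = ex - ax, ey - ay
    # Assumption: visibility is checked with Chebyshev (grid) distance.
    visible = 1.0 if max(abs(rel_x), abs(rel_y)) <= explore_vision else 0.0
    if not visible:
        rel_x, rel_y = 0.0, 0.0
    return [ax, ay, rel_x, rel_y, visible]

print(explore_observation((10, 12), (11, 13), explore_vision=2))  # enemy visible
print(explore_observation((10, 12), (40, 45), explore_vision=2))  # enemy hidden
```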
Combat: Each agent observes its own (absolute x, absolute y, healthpoints + shield, weapon cooldown, previous action) and, for each of the enemies, (relative x, relative y, visible, healthpoints + shield, weapon cooldown). relative x and relative y are only observed when the enemy is visible, which is indicated by the visible flag. All of the observations are normalized to lie between 0 and 1. The agent has to choose from 9 + m actions, which include the 9 basic actions and one attack action for each of the m enemies. Attack actions only work when the enemy is within the sight range of the agent; otherwise they are no-ops. In combat, we don't compare with prior work on StarCraft because our environment setting is much harder, more restrictive, and new, and is thus not directly comparable.
6.5.2 COMBAT REWARD
To avoid slack in finding the enemy team, we provide a negative reward rtime = -0.01 at each timestep when the agent is not involved in combat. At each timestep, an agent gets as reward the difference between (i) its normalized health at the current and previous timestep and (ii), for each of the enemies it has attacked so far, that enemy's normalized health at the previous and current timestep. At the end of the episode, the terminal reward for each agent consists of (i) its remaining health × 3 as a negative reward, (ii) 5 × m plus its remaining health × 3 as a positive reward if the agents win, and (iii) the normalized remaining health × 3 of all surviving enemies as a negative reward on a loss. In this task, the group of enemies is initialized together at a random location in one half of the map while our agents are initialized separately in the other half, which makes the task even harder and thus requires communication. For an automatic way of individualizing rewards, please refer to Foerster et al. (2018).
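The per-timestep part of this reward can be sketched as follows. This is our reading of the description above (terminal bonuses and the exact normalization are omitted), not the authors' code, and all names are our own.

```python
def combat_step_reward(own_prev, own_cur, enemy_prev, enemy_cur,
                       attacked_ids, in_combat):
    """Per-timestep combat reward for one agent (terminal bonuses omitted).

    own_prev/own_cur: agent's normalized health at the previous/current step.
    enemy_prev/enemy_cur: dicts of normalized enemy health keyed by enemy id.
    attacked_ids: ids of enemies this agent has attacked so far.
    """
    r = 0.0 if in_combat else -0.01          # time penalty while not fighting
    r += own_cur - own_prev                  # damage taken shows up as a negative delta
    for e in attacked_ids:                   # reward damage dealt to attacked enemies
        r += enemy_prev[e] - enemy_cur[e]
    return r

print(combat_step_reward(1.0, 0.9, {0: 0.8}, {0: 0.6}, attacked_ids=[0], in_combat=True))
# -0.1 (own damage) + 0.2 (enemy damage) = 0.1
```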
6.5.3 EXAMPLE SEQUENCE OF STATES IN COOPERATIVE EXPLORE MODE
We provide an example sequence of states in StarCraft cooperative explore mode in Figure 7. As soon as one of the agents finds the enemy unit, the other agents get the information about enemy’s location through communication and are able to reach it faster. | 1. What are the key contributions and novel aspects introduced by the paper in multi-agent reinforcement learning?
2. What are the strengths of the paper compared to prior works?
3. Do you have any questions regarding the applicability of the proposed method in non-cooperative settings?
4. Why does the proposed method seem to have significantly higher variance than the baselines in some experimental results?
5. Can you provide more details on why the method is highlighted in bold in some places, even though it doesn't outperform the baseline? | Review | Review
The authors propose a new network architecture for multi-agent reinforcement learning. The new architecture addresses three issues: (1) the applicability of existing algorithms to semi-cooperative or competitive settings; (2) the ability to use local rewards during agent training; (3) the credit assignment problem with global multi-agent rewards. The authors address these issues with a new architecture that is comprised of several LSTM controllers with tied weights that transmit a continuous vector to each other, and that are augmented with a gating mechanism that allows them to abstain from communicating.
I think that this paper makes a solid contribution over the existing literature. My main comments are the following:
* I feel like the paper can be strengthened by comparing to additional baselines. The authors compare mainly to Sukhbaatar et al., but I think a more detailed comparison to other approaches (e.g. Foerster et al.) would strengthen the results.
* One of the advantages of this method is that it can be used in non-cooperative settings. I am not familiar with this regime, and I would like a better explanation of why we would train competing agents with the same controller, rather than using a different controller for each team.
* In several experimental results, the proposed method seems to have significantly higher variance than the baselines. I would like to see some discussion about why it is the case.
* Also, in some places (e.g. Table 1), the method is highlighted in bold, even though it doesn’t actually outperform the baseline. Please correct this and only highlight the best method (if several methods are tied, either highlight all of them, or don’t highlight any).
* Also, in some cases when the error bars contain the previous best result, I am not sure if we can say that the proposed method is obviously better. |
ICLR | Title
Learning when to Communicate at Scale in Multiagent Cooperative and Competitive Tasks
Abstract
Learning when to communicate and doing it effectively is essential in multi-agent tasks. Recent works show that continuous communication allows efficient training with back-propagation in multi-agent scenarios, but have been restricted to fully cooperative tasks. In this paper, we present the Individualized Controlled Continuous Communication Model (IC3Net), which has better training efficiency than the simple continuous communication model and can be applied to semi-cooperative and competitive settings along with cooperative settings. IC3Net controls continuous communication with a gating mechanism and uses individualized rewards for each agent to gain better performance and scalability while fixing credit assignment issues. Using a variety of tasks including StarCraft BroodWarsTM explore and combat scenarios, we show that our network yields better performance and convergence rates than the baselines as the scale increases. Our results convey that IC3Net agents learn when to communicate based on the scenario and profitability.
1 INTRODUCTION
Communication is an essential element of intelligence, as it helps in learning from others' experience, working better in teams and passing down knowledge. In multi-agent settings, communication allows agents to cooperate towards common goals. In particular, in partially observable environments, when the agents are observing different parts of the environment, they can share information and learnings from their observations through communication.
Recently, there has been a lot of success in the field of reinforcement learning (RL), from playing Atari games (Mnih et al., 2015) to playing Go (Silver et al., 2016), most of which has been limited to the single-agent domain. However, the number of systems and applications with multiple agents has been growing (Lazaridou et al., 2017; Mordatch & Abbeel, 2018), ranging from a team of robots working in manufacturing plants to a network of self-driving cars. Thus, it is crucial to successfully scale RL to multi-agent environments in order to build intelligent systems capable of higher productivity. Furthermore, scenarios other than cooperative, namely semi-cooperative (or mixed) and competitive scenarios, have not been studied as extensively for multi-agent systems.
The mixed scenarios can be compared to most of the real life scenarios as humans are cooperative but not fully-cooperative in nature. Humans work towards their individual goals while cooperating with each other. In competitive scenarios, agents are essentially competing with each other for better rewards. In real life, humans always have an option to communicate but can choose when to actually communicate. For example, in a sports match two teams which can communicate, can choose to not communicate at all (to prevent sharing strategies) or use dishonest signaling (to misdirect opponents) (Lehman et al., 2018) in order to optimize their own reward and handicap opponents; making it important to learn when to communicate. ∗Equal contribution. †Current affiliation. This work was completed when authors were at New York University.
Teaching agents how to communicate makes it unnecessary to hand-code the communication protocol with expert knowledge (Sukhbaatar et al., 2016; Kottur et al., 2017). While the content of communication is important, it is also important to know when to communicate, either to increase scalability and performance or to increase one's competitive edge. For example, a prey needs to learn when to communicate to avoid revealing its location to predators.
Sukhbaatar et al. (2016) showed that agents communicating through a continuous vector are easier to train and have a higher information throughput than communication based on discrete symbols. Their continuous communication is differentiable, so it can be trained efficiently with back-propagation. However, their model assumes full cooperation between agents and uses an average global reward. This prevents the model from being used in mixed or competitive scenarios, as full cooperation involves sharing hidden states with everyone, exposing everything and leading to poor performance by all agents, as shown by our results. Furthermore, the average global reward for all agents makes the credit assignment problem even harder and difficult to scale, as agents don't know their individual contributions in mixed or competitive scenarios, where they want to succeed before others.
To solve above mentioned issues, we make the following contributions:
1. We propose Individualized Controlled Continuous Communication Model (IC3Net), in which each agent is trained with its individualized reward and can be applied to any scenario whether cooperative or not.
2. We empirically show that based on the given scenario–using the gating mechanism–our model can learn when to communicate. The gating mechanism allows agents to block their communication; which is useful in competitive scenarios.
3. We conduct experiments on different scales in three chosen environments including StarCraft and show that IC3Net outperforms the baselines with performance gaps that increase with scale. The results show that individual rewards converge faster and better than global rewards.
2 RELATED WORK
The simplest approach in multi-agent reinforcement learning (MARL) settings is to use an independent controller for each agent. This was attempted with Q-learning in Tan (1993). However, in practice it performs poorly (Matignon et al., 2012), which we also show in comparison with our model. The major issue with this approach is that due to multiple agents, the stationarity of the environment is lost and naïve application of experience replay doesn’t work well.
The nature of interaction between agents can either be cooperative, competitive, or a mix of both. Most algorithms are designed only for a particular nature of interaction, mainly cooperative settings (Omidshafiei et al., 2017; Lauer & Riedmiller, 2000; Matignon et al., 2007), with strategies which indirectly arrive at cooperation via sharing policy parameters (Gupta et al., 2017). These algorithms are generally not applicable in competitive or mixed settings. See Busoniu et al. (2008) for survey of MARL in general and Panait & Luke (2005) for survey of cooperative multi-agent learning.
Our work can be considered an all-scenario extension of Sukhbaatar et al. (2016)'s CommNet for collaboration among multiple agents using continuous communication, which is usable only in cooperative settings, as stated in their work and shown by our experiments. Due to continuous communication, the controller can be learned via backpropagation. However, the model is restricted to fully cooperative tasks, as hidden states are fully communicated to others, which exposes everything about an agent. On the other hand, due to the global reward shared by all agents, CommNet also suffers from the credit assignment issue.
The Multi-Agent Deep Deterministic Policy Gradient (MADDPG) model presented by Lowe et al. (2017) also tries to achieve similar goals. However, they differ in the way of providing the coordination signal. In their case, there is no direct communication among agents (actors with different policy per agent), instead a different centralized critic per agent – which can access the actions of all the agents – provides the signal. Concurrently, a similar model using centralized critic and decentralized actors with additional counterfactual reward, COMA by Foerster et al. (2018) was proposed to tackle the challenge of multiagent credit assignment by letting agents know their individual contributions.
Vertex Attention Interaction Networks (VAIN) (Hoshen, 2017) also models multi-agent communication through the use of Interaction Networks (Battaglia et al., 2016) with attention mechanism (Bahdanau et al., 2015) for predictive modelling using supervised settings. The work by Foerster
et al. (2016b) also learns a communication protocol where agents communicate in a discrete manner through their actions. This contrasts with our model where multiple continuous communication cycles can be used at each time step to decide the actions of all agents. Furthermore, our approach is amenable to dynamic number of agents. Peng et al. (2017) also attempts to solve micromanagement tasks in StarCraft using communication. However, they have non-symmetric addition of agents in communication channel and are restricted to only cooperative scenarios.
In contrast, a lot of work has focused on understanding agents’ communication content; mostly in discrete settings with two agents (Wang et al., 2016; Havrylov & Titov, 2017; Kottur et al., 2017; Lazaridou et al., 2017; Lee et al., 2018). Lazaridou et al. (2017) showed that given two neural network agents and a referential game, the agents learn to coordinate. Havrylov & Titov (2017) extended this by grounding communication protocol to a symbols’s sequence while Kottur et al. (2017) showed that this language can be made more human-like by placing certain restrictions. Lee et al. (2018) demonstrated that agents speaking different languages can learn to translate in referential games.
3 MODEL
In this section, we introduce our model Individualized Controlled Continuous Communication Model (IC3Net) as shown in Figure 1 to work in multi-agent cooperative, competitive and mixed settings where agents learn what to communicate as well as when to communicate.
First, let us describe an independent controller model where each agent is controlled by an individual LSTM. For the j-th agent, its policy takes the form of:
h^{t+1}_j, s^{t+1}_j = LSTM(e(o^t_j), h^t_j, s^t_j)
a^t_j = π(h^t_j),
where o^t_j is the observation of the j-th agent at time t, e(·) is an encoder function parameterized by a fully-connected neural network and π is an agent's action policy. Also, h^t_j and s^t_j are the hidden and cell states of the LSTM. We use the same LSTM model for all agents, sharing their parameters. This way, the model is invariant to permutations of the agents.
IC3Net extends this independent controller model by allowing agents to communicate their internal state, gated by a discrete action. The policy of the j-th agent in IC3Net is given by
g^{t+1}_j = f^g(h^t_j)
h^{t+1}_j, s^{t+1}_j = LSTM(e(o^t_j) + c^t_j, h^t_j, s^t_j)
c^{t+1}_j = (1 / (J − 1)) · C · Σ_{j′ ≠ j} h^{t+1}_{j′} g^{t+1}_{j′}
a^t_j = π(h^t_j),
where c^t_j is the communication vector for the j-th agent, C is a linear transformation matrix for transforming the gated average hidden state into a communication tensor, J is the number of agents currently alive in the system, and f^g(·) is a simple network containing a softmax layer over 2 actions (communicate or not) on top of a linear layer with a non-linearity. The binary action g^t_j specifies whether agent j wants to communicate with others, and acts as a gate when calculating the communication vector. Note that the gating action for the next time-step is calculated at the current time-step. We train both the action policy π and the gating function f^g with REINFORCE (Williams, 1992).
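The equations above translate fairly directly into a per-timestep update. Below is a minimal PyTorch-style sketch of one IC3Net step for J agents with shared parameters; it is a simplified re-implementation from the equations (module and variable names are ours, and the value head, REINFORCE machinery and multiple communication cycles are omitted), not the released code.

```python
import torch
import torch.nn as nn

class IC3NetStep(nn.Module):
    """One time-step of IC3Net for J agents with shared parameters."""
    def __init__(self, obs_dim, hid_dim, n_actions):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hid_dim)            # e(.)
        self.lstm = nn.LSTMCell(hid_dim, hid_dim)
        self.comm = nn.Linear(hid_dim, hid_dim, bias=False)   # C
        self.gate_head = nn.Linear(hid_dim, 2)                # f^g: communicate or not
        self.policy_head = nn.Linear(hid_dim, n_actions)      # pi

    def forward(self, obs, h, s, c):
        # obs: (J, obs_dim); h, s, c: (J, hid_dim)
        gate = torch.distributions.Categorical(
            logits=self.gate_head(h)).sample().float()        # g^{t+1} computed from h^t
        h, s = self.lstm(self.encoder(obs) + c, (h, s))       # h^{t+1}, s^{t+1}
        J = h.size(0)
        gated = h * gate.unsqueeze(1)                         # h^{t+1}_{j'} g^{t+1}_{j'}
        others_sum = gated.sum(0, keepdim=True) - gated       # exclude agent j itself
        c_next = self.comm(others_sum / max(J - 1, 1))        # c^{t+1}
        action_logits = self.policy_head(h)                   # logits for pi(h^{t+1})
        return action_logits, gate, h, s, c_next

# Usage sketch: 5 agents, 10-dim observations, 128 hidden units, 5 actions.
model = IC3NetStep(obs_dim=10, hid_dim=128, n_actions=5)
obs = torch.randn(5, 10)
h = s = c = torch.zeros(5, 128)
logits, gate, h, s, c = model(obs, h, s, c)
```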
In Sukhbaatar et al. (2016), the individual networks controlling the agents were interconnected and, as a whole, were considered a single big neural network. This single-big-network controller approach required the definition of a unified loss function during training, thus making it impossible to train agents with different rewards.
In this work, however, we move away from the single-big-network controller approach. Instead, we consider multiple big networks with shared parameters, each controlling a single agent separately. Each big network consists of multiple LSTM networks, each processing the observation of a single agent. However, only one of the LSTMs needs to output an action, because the big network is only controlling a single agent. Although this view has little effect on the implementation (we can still use a single big network in practice), it allows us to train each agent to maximize its individual reward instead of a single global reward. This has two benefits: (i) it allows the model to be applied to both cooperative and competitive scenarios, and (ii) it helps resolve the credit assignment issue faced by many multi-agent algorithms (Sukhbaatar et al., 2016; Foerster et al., 2016a) while improving performance with scalability, and is coherent with the findings in Chang et al. (2003).
4 EXPERIMENTS1
We study our network in multi-agent cooperative, mixed and competitive scenarios to understand its workings. We perform experiments to answer the following questions:
1. Can our network learn the gating mechanism to communicate only when needed according to the given scenario? Essentially, is it possible to learn when to communicate?
2. Does our network, using individual rewards, scale better and faster than the baselines? This would clarify whether or not individual rewards perform better than global rewards in communication-based multi-agent settings.
We first analyze the behavior of the gating action (g^t). Later, we train our network in three chosen environments with variations in difficulty and coordination to ensure scalability and performance.
4.1 ENVIRONMENTS
We consider three environments for our analysis and experiments. (i) a predator-prey environment (PP) where predators with limited vision look for a prey on a square grid. (ii) a traffic junction environment (TJ) similar to Sukhbaatar et al. (2016) where agents with limited vision must learn to communicate in order to avoid collisions. (iii) StarCraft BroodWars2 (SC) explore and combat
1The code is available at https://github.com/IC3Net/IC3Net. 2StarCraft is a trademark or registered trademark of Blizzard Entertainment, Inc., in the U.S. and/or other countries. Nothing in this paper should be construed as approval, endorsement, or sponsorship by Blizzard Entertainment, Inc.
tasks which test control of multiple agents in various scenarios where an agent needs to understand and decouple observations for multiple opposing units.
4.1.1 PREDATOR PREY
In this task, we have n predators (agents) with limited vision trying to find a stationary prey. Once a predator reaches the prey, it stays there and always gets a positive reward until the end of the episode (when the rest of the predators reach the prey, or the maximum number of steps is exceeded). In the case of zero vision, agents have no direct way of knowing the prey's location unless they land on it.
We design three cooperation settings (competitive, mixed and cooperative) for this task, with different reward structures, to test our network. See Appendix 6.3 for details on the grid, reward structure, observation and action space. There is no loss or benefit from communicating in the mixed scenario. In the competitive setting, agents get lower rewards if other agents reach the prey, and in the cooperative setting, the reward increases as more agents reach the prey. We compare with baselines using the mixed setting in subsection 4.3.2, while explicitly learning and analyzing the gating action's behavior in subsection 4.2.
We create three levels for this environment – as mentioned in Appendix 6.3 – to compare our network’s performance with increasing number of agents and grid size. 10×10 grid version with 5 agents is shown in Figure 2 (left). All agents are randomly placed in the grid at start of an episode.
4.1.2 TRAFFIC JUNCTION
Following Sukhbaatar et al. (2016), we test our model on the traffic junction task as it is a good proxy for testing whether communication is working. This task also helps in supporting our claim that IC3Net provides good performance and faster convergence in fully-cooperative scenarios similar to mixed ones. In the traffic junction, cars enter a junction from all entry points with a probability parr. The maximum number of cars at any given time in the junction is limited. Cars can take two actions at each time-step, gas and brake respectively. The task has three difficulty levels (see Figure 2) which vary in the number of possible routes, entry points and junctions. We make this task harder by always setting vision to zero in all the three difficulty levels to ensure that task is not solvable without communication. See Appendix 6.4 for details on reward structure, observation and training.
4.1.3 STARCRAFT: BROODWARS
To fully understand the scalability of our architecture in more realistic and complex scenarios, we test it on StarCraft combat and exploration micro-management tasks in partially observable settings. StarCraft is a challenging environment for RL because it has a large observation-action space, many different unit types and stochasticity. We train our network on Combat and Explore task. The task’s difficulty can be altered by changing the number of our units, enemy units and the map size.
By default, the game has macro-actions which allow a player to directly target an enemy unit, making the player's unit find the best possible path using the game's in-built path-finding system, move towards the target and attack when it is in range. However, we make the task harder by (i) removing macro-actions, making exploration harder, (ii) limiting vision, making the environment partially observable, and (iii), unlike previous works (Wender & Watson, 2012; Ontanón et al., 2013; Usunier et al., 2017; Peng et al., 2017), initializing enemy and our units at random locations in a fixed-size square on the map, which makes it challenging to find enemy units. Refer to Appendix 6.5.1 for reward, action, observation and task details. We consider two types of tasks in StarCraft:
Explore: In this task, we have n agents trying to explore the map and find an enemy unit. This is a direct scale-up of the PP but with more realistic and stochastic situations.
Combat: We test an agent’s capability to execute a complex task of combat in StarCraft which require coordination between teammates, exploration of a terrain, understanding of enemy units and formalism of complex strategies. We specifically test a team of n agents trying to find and kill a team of m enemies in a partially observable environment similar to the explore task. The agents, with their limited vision, must find the enemy units and kill all of them to score a win. More information on reward structure, observation and setup can be found in Appendix 6.5.1 and 6.5.2.
4.2 ANALYSIS OF THE GATING MECHANISM
We analyze the working of the gating action (g^t) in IC3Net using the cooperative, competitive and mixed settings of the Predator-Prey (4.1.1) and StarCraft explore (4.1.3) tasks. However, this time the enemy unit (prey) shares parameters with the predators and is trained with them. All of the enemy unit's actions are no-ops, which makes it stationary. The enemy unit gets a positive reward equivalent to rtime = 0.05 per timestep as long as no predator/medic has captured it; after that it gets a reward of 0.
For the 5×5 grid in the PP task, Figure 3 shows the gating action (averaged per epoch) in all scenarios for (i) communication between predators and (ii) communication between the prey and predators. We also test
on a 50×50 map for the competitive and cooperative StarCraft explore task and find similar results (Fig. 3d). We can make the following observations:
• As can be observed in Figures 3a, 3b, 3c and 3d, in all four cases the prey learns not to communicate. If the prey communicates, the predators will reach it faster. Since it gets 0 reward once an agent comes near or on top of it, it doesn't communicate, so as to achieve higher rewards.
• In the cooperative setting (Figures 3a, 3e), the predators communicate openly, with g close to 1. Even though the prey communicates with the predators at the start, it eventually learns not to communicate, so as not to share its location. As all agents are communicating in this setting, it takes more training time to adjust the prey's weights towards silence. Our preliminary tests suggest that in cooperative settings it is beneficial to fix the gating action to 1.0, as communication is almost always needed, and this helps in faster training by skipping the need to train the gating action.
• In the mixed setting (Figure 3b), agents don't always communicate, which corresponds to the fact that there is no benefit or loss from communicating in the mixed scenario. The prey easily learns not to communicate, as the predators' weights are also adjusted towards non-cooperation from the start.
• As expected due to competition, predators rarely communicate in the competitive setting (Figures 3c, 3d). Note that this setting is not fully adversarial, as predators can initially explore faster if they communicate, which can eventually lead to overall higher rewards. This can be observed as the agents only communicate while it is profitable for them, i.e. before reaching the prey (Figure 3f), since communicating afterwards can impact their future rewards.
The experiments in this section empirically suggest that agents can "learn to communicate when it is profitable", thus allowing the same network to be used in all settings.
4.3 SCALABILITY AND GENERALIZATION EXPERIMENTS
In this section, we look at bigger versions of our environments to understand scalability and generalization aspects of IC3Net.
4.3.1 BASELINES
For training details, refer to Appendix 6.1. We compare IC3Net with baselines specified below in all scenarios.
Individual Reward Independent Controller (IRIC): In this controller, the model is applied individually to each agent's observation to produce the action to be taken. Essentially, this can be seen as IC3Net without any communication between agents, but with an individualized reward for each agent. Note that without communication the gating action (g^t) has no effect.
Independent Controller (IC - IC3Net w/o Comm and IR): Like IRIC except the agents are trained with a global average reward instead of individual rewards. This will help us understand the credit assignment issue prevalent in CommNet.
CommNet: Introduced in Sukhbaatar et al. (2016), CommNet allows communication between agents over a channel where an agent is provided with the average of hidden state representations of other agents as a communication signal. Like IC3Net, CommNet also uses continuous signals to communicate between the agents. Thus, CommNet can be considered as IC3Net without both the gating action (gt) and individualized rewards.
4.3.2 RESULTS
We discuss major results for our experiments in this section and analyze particular behaviors/patterns of agents in Appendix 6.2.
Predator Prey: Table 1 (left) shows the average steps taken by the models to complete an episode, i.e. find the prey, in the mixed setting (we found similar results for the cooperative setting, shown in the appendix). IC3Net reaches the prey faster than the baselines as we increase the number of agents as well as the size of the maze. In the 20×20 version, the gap in average steps is almost 24 steps, which is a substantial improvement over the baselines. Figure 4 (right) shows the scalability graph for IC3Net and CommNet, which supports the claim that with an increasing number of agents, IC3Net converges faster at a
better optimum than CommNet. Through these results on the PP task, we can see that, compared to IC3Net, CommNet doesn't work well in mixed scenarios. Finally, Figure 4 (left) shows the training plot for the 20×20 grid with 10 agents trying to find a prey. The plot clearly shows the faster performance improvement of IC3Net in contrast to CommNet, which takes a long time to achieve a minor jump. We also find the same pattern of gating action values as in 4.2.
Traffic Junction: Table 1 (right) shows the success ratio for the traffic junction. We fixed the gating action to 1 for TJ, as discussed in 4.2. With zero vision, it is not possible to perform well without communication, as evident from the results of IRIC and IC. Interestingly, IC performs better than IRIC in the hard case; we believe that without communication, the global reward in TJ acts as a better indicator of overall performance. On the other hand, with communication and better knowledge of others, training on the global reward faces a credit assignment issue, which is alleviated by IC3Net, as evidenced by its superior performance compared to CommNet. In Sukhbaatar et al. (2016), well-performing agents in the medium and hard versions had vision > 0. With zero vision, IC3Net is superior to CommNet and IRIC with a performance gap greater than 30%. This verifies that the individualized rewards in IC3Net help achieve performance better than or similar to CommNet in fully cooperative tasks with communication, due to better credit assignment.
StarCraft: Table 2 displays win % and the average number of steps taken to complete an episode in StarCraft explore and combat tasks. We specifically test on (i) Explore task: 10 medics finding 1 enemy medic on 50×50 cell grid (ii) On 75×75 cell grid (iii) Combat task: 10 Marines vs 3 Zealots on 50 x 50 cell grid. Maximum steps in an episode are set to 60. The results on the explore task are similar to Predator-Prey as IC3Net outperforms the baselines. Moving to a bigger map size, we still see the performance gap even though performance drops for all the models.
On the combat task, IC3Net performs comparably to CommNet. A detailed analysis of IC3Net's performance on the StarCraft tasks is provided in Appendix 6.2.1. To confirm that 10 Marines vs 3 Zealots is hard to win, we run an experiment on the reverse scenario, where our agents control 3 Zealots initialized separately and the enemies are 10 Marines initialized together. We find that both IRIC and IC3Net easily reach a success percentage of 100%. We find that even in this case, IC3Net converges faster than IRIC.
5 CONCLUSIONS AND FUTURE WORK
In this work, we introduced IC3Net which aims to solve multi-agent tasks in various cooperation settings by learning when to communicate. Its continuous communication enables efficient training by backpropagation, while the discrete gating trained by reinforcement learning along with individual rewards allows it to be used in all scenarios and on larger scale.
Through our experiments, we show that IC3Net performs well in cooperative, mixed and competitive settings and learns to communicate only when necessary. Further, we show that agents learn to stop communicating in competitive cases. We demonstrate the scalability of our network through further experiments. In the future, we would like to explore the possibility of multi-channel communication, where agents can decide which channel to put their information on, similar to communication groups but dynamic. It would also be interesting to give agents a choice of whether or not to listen to communication from a channel.
Acknowledgements Authors would like to thank Zeming Lin for his consistent support and suggestions around StarCraft and TorchCraft.
6 APPENDIX
6.1 TRAINING DETAILS
We set the hidden layer size to 128 units and use an LSTM (Hochreiter & Schmidhuber, 1997) with recurrence for all of the baselines and IC3Net. We use RMSProp (Tieleman & Hinton, 2012) with the initial learning rate as a tuned hyper-parameter. All of the models use skip-connections (He et al., 2016). Training is distributed over 16 cores and each core runs a mini-batch until the total episode steps reach 500 or more. We do 10 weight updates per epoch. We run the predator-prey and StarCraft experiments for 1000 epochs and the traffic junction experiments for 2000 epochs, and report the final results. In the mixed case, we report the mean score of all agents, while in the cooperative case we report any agent's score, as they are the same. We implement our model using PyTorch and the environments using Gym (Brockman et al., 2016). We use REINFORCE (Williams, 1992) to train our setup. We conduct 5 runs on each of the tasks to compile our results. The training time for different tasks varies; StarCraft tasks usually take more than a day (depending on the number of agents and enemies), while predator-prey and traffic junction tasks complete in under 12 hours.
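For reference, the hyper-parameters listed above can be collected into a single configuration. The dictionary below is a hypothetical illustration of such a configuration object, with the initial learning rate left out since it is described as a tuned hyper-parameter.

```python
# Hypothetical configuration mirroring the training details above (names are ours).
TRAIN_CONFIG = {
    "hidden_size": 128,          # LSTM hidden units
    "optimizer": "RMSProp",      # initial learning rate is a tuned hyper-parameter
    "num_workers": 16,           # training distributed over 16 cores
    "batch_episode_steps": 500,  # each worker runs a mini-batch of >= 500 total steps
    "updates_per_epoch": 10,
    "epochs": {"predator_prey": 1000, "starcraft": 1000, "traffic_junction": 2000},
    "num_runs": 5,               # results compiled over 5 runs per task
}
```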
6.2 RESULTS ANALYSIS
In this section, we analyze and discuss behaviors/patterns in the results on our experiments.
6.2.1 IC3NET IN STARCRAFT-COMBAT TASK
As observed in Table 2, IC3Net performs better than CommNet on the explore task but doesn't outperform it on the combat task. Our experiments and visualizations of the actual strategies suggest that, compared to exploration, combat can be solved far more easily if the units learn to stay together. Focused firepower with more attackers generally yields good results in combat. We verify this hypothesis by running a heuristic baseline, "attack closest", in which agents have full vision of the map and macro actions available3. By attacking the closest available enemy together, the agents are able to kill the Zealots with a success ratio of 76.6 ± 8 over 5 runs, even though they are initialized separately. Also, as described in Appendix 6.5.2, the global reward for a win in the combat task is large compared to the individual rewards for killing other units. We believe that coordination to stay together, the large global reward and focused fire (which is achievable through simple cooperation) together account for CommNet's performance in this task.
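The "attack closest" heuristic used for this check is simple enough to state in a few lines. The sketch below is our own illustration of the idea (full map vision, macro attack actions), not the exact script used in the experiments, and the function name and data layout are assumptions.

```python
import math

def attack_closest_action(agent_xy, enemies):
    """Pick the macro attack action targeting the nearest alive enemy.

    `enemies` is a list of (enemy_id, (x, y), alive) tuples; returns the id of
    the enemy to attack, or None if none are alive.
    """
    alive = [(eid, xy) for eid, xy, ok in enemies if ok]
    if not alive:
        return None
    ax, ay = agent_xy
    return min(alive, key=lambda e: math.hypot(e[1][0] - ax, e[1][1] - ay))[0]

print(attack_closest_action((0, 0), [(1, (5, 5), True), (2, (2, 1), True)]))  # -> 2
```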
Further, in exploration we have seen that agents go in separate directions and have individual rewards/sense of exploration, which usually leads to faster exploration of an unexplored area. In simple terms, exploration of a house is faster if different people handle different rooms. Achieving this is hard in CommNet, because global rewards don't tell an agent its individual contribution when agents explore separately. Also, in CommNet we have observed that agents follow a pattern where they get together at a point and explore together from there, which further signals that with CommNet it is easy for agents to get together4.
6.2.2 VARIANCE IN IC3NET
In Figure 5, we observe significant variance in the IC3Net results for StarCraft. We performed many experiments on StarCraft and can attribute the significant variance to stochasticity in the environment. There is a huge number of possible states in which agents can end up, due to the millions of possible interactions and their outcomes in StarCraft; we believe it is hard to learn every one of them. This variance due to stochasticity can even be seen in simple heuristic baselines like "attack closest" (6.2.1) and is in fact an indicator of how difficult it is to learn real-world scenarios, which also have
3Macro-actions correspond to the "right click" feature in StarCraft and Dota, in which a unit can be ordered to attack another unit: the unit follows the shortest path on the map towards the target and starts attacking automatically once it is in range. This essentially enables the "attack closest" baseline to easily attack anyone under full vision without any exploration.
4You can observe the above-stated pattern for CommNet in PP in this video: https://gfycat.com/IllustriousMarvelousKagu. This video was generated using a trained CommNet model on PP-Hard. Here, red 'X's are predators and 'P' is the prey to be found. We can observe the pattern where the agents get together to find the prey, which eventually leads to slack.
the same amount of stochasticity. We believe that we don't see similar variance in CommNet and the other baselines because adding the gating action increases the number of state-action combinations, which yields better results but is sometimes more difficult to learn. Further, this variance is only observed in models with a higher win %, which require learning more of the state space.
6.2.3 COMMNET IN STARCRAFT-EXPLORE TASKS
In Table 2, we can observe that CommNet performs worse than IRIC and IC in the StarCraft-Explore task. In this section, we provide a hypothesis for this result. First, note that IRIC is also better than IC overall, which suggests that individualized rewards are better than global rewards for exploration. This makes sense: if agents cover more area and know how much they covered through their own contribution (individual reward), it should lead to more overall coverage than with global rewards, where agents cannot figure out their own coverage but only the overall one. Second, in CommNet it is easy for agents to communicate and gather together. We observe this pattern in CommNet4, where agents first gather at a point and then start exploring from there, which leads to slow exploration; IC is better in this respect because without communication it is hard to gather at a single point, which inherently leads to faster exploration than CommNet. Third, the reward structure in the mixed scenario does not favor searching together, but this is not directly visible to CommNet and IC due to their global rewards.
6.3 DETAILS OF PREDATOR PREY
In all three settings (cooperative, competitive and mixed), a predator agent gets a constant time-step penalty rexplore = −0.05 until it reaches the prey. This makes sure that the agent doesn't slack in finding the prey. In the mixed setting, once an agent reaches the prey, it always gets a positive reward rprey = 0.05 which does not depend on the number of agents on the prey. Similarly, in the cooperative setting an agent gets a positive reward of rcoop = rprey * n, and in the competitive setting a positive reward of rcomp = rprey / n after it reaches the prey, where n is the number of agents on the prey. The total reward at time t for an agent i can be written as:
r^pp_i(t) = δ_i · r_explore + (1 − δ_i) · n_t^λ · r_prey
where δ_i indicates whether agent i is still exploring (δ_i = 1 until it reaches the prey, 0 afterwards), n_t is the number of agents on the prey at time-step t, and λ is -1, 0 and 1 in the competitive, mixed and cooperative scenarios respectively. Maximum episode steps are set to 20, 40 and 80 for the 5×5, 10×10 and 20×20 grids respectively. The number of predators is 5, 10 and 20 in the 5×5, 10×10 and 20×20 grids respectively. Each predator can take one of five basic movement actions, i.e. up, down, left, right or stay. The predator, the prey and all locations on the grid are treated as unique classes in the vocabulary and are represented as one-hot binary vectors. The observation obs at each point is the sum of the one-hot binary vectors of the location, predators and prey present at that point. With a vision of 1, each agent's observation has dimension 3² × |obs|.
6.3.1 EXTRA EXPERIMENTS
Table 3 shows the results for IC3Net and the baselines in the cooperative scenario of the predator-prey environment. As the cooperative reward function provides more reward after a predator reaches the prey, the comparison is made on rewards instead of the average number of steps. IC3Net performs better than or equal to CommNet and the other baselines in all three difficulty levels. The performance gap widens as we move towards bigger grids, which shows that IC3Net is more scalable
due to individualized rewards. More importantly, even with the extra gating action to train, IC3Net performs comparably to CommNet, which is designed for cooperative scenarios; this suggests that IC3Net is a suitable choice for all cooperation settings.
To analyze the effect of the gating action on rewards in the mixed scenario, where individualized rewards alone already help a lot, we test the Predator-Prey mixed cooperation setting on a 20×20 grid with a baseline in which the gating action is fixed to 1 (global communication) and individual rewards are used (IC2Net/CommNet + IR). We find the average number of steps to be 50.24 ± 3.4, which is lower than IC3Net. This means that (i) individualized rewards help a lot in mixed scenarios by allowing agents to understand their contributions, and (ii) adding the gating action has some overhead but allows the same model to work in all settings (even competitive) by "learning to communicate", which is closer to how real-world humans behave, with a negligible hit on performance.
6.4 DETAILS OF TRAFFIC JUNCTION
The traffic junction's observation vocabulary has one-hot vectors for all locations in the grid and for the car class. Each agent observes its previous action, its route identifier and a vector specifying the sum of one-hot vectors for all classes present at that agent's location. A collision occurs when two cars are at the same location. We set the maximum number of steps to 20, 40 and 60 in the easy, medium and hard difficulties respectively. Similar to Sukhbaatar et al. (2016), we provide a negative reward rcoll = -10 on collision. To discourage traffic jams, we provide a negative reward r_time τ_i = -0.01 τ_i, where τ_i is the time spent by the agent in the junction up to time-step t. The reward for the i-th agent, which has C^t_i collisions at time-step t, can be written as:
r^tj_i(t) = r_coll · C^t_i + r_time · τ_i
We utilized curriculum learning (Bengio et al., 2009) to make the training process easier. The arrival probability p_arrive is kept at its start value for the first 250 epochs and is then linearly increased to its end value over the course of the 250th to 1250th epoch. The start and end values of p_arrive for the different difficulty levels are given in Table 4. Finally, training continues for another 750 epochs. The learning rate is fixed at 0.003 throughout. We implemented three difficulty variations of the game, explained as follows.
The easy version is a junction of two one-way roads on a 7 × 7 grid. There are two arrival points, each with two possible routes, and an Ntotal value of 5.
The medium version consists of two connected junctions of two-way roads on a 14 × 14 grid, as shown in Figure 2 (right). There are 4 arrival points and 3 different routes for each arrival point, with Ntotal = 20.
The hard version consists of four connected junctions of two-way roads on an 18 × 18 grid, as shown in Figure 6. There are 8 arrival points and 7 different routes for each arrival point, with Ntotal = 20.
6.4.1 IRIC AND IC PERFORMANCE
In Table 1, we notice that IRIC and IC perform worse on the medium level than on the hard level. Our visualizations suggest that this is due to the higher final add-rate of the medium version compared to the hard version. Collisions happen much more often in the medium version, leading to a lower success rate (an episode is considered a failure if a collision happens), whereas in the hard version the initial add-rate is kept low to accommodate curriculum learning on its larger grid. The final add-rate for the hard level is comparatively low to make sure that it remains possible to pass a junction without a collision, since with more entry points it is easy to collide even at a small add-rate.
6.5 STARCRAFT DETAILS
6.5.1 OBSERVATION AND ACTIONS
Explore: To complete the explore task, agents must be within a particular range of the enemy unit, called the explore vision. Once an agent is within the explore vision of the enemy unit, its further actions are treated as no-ops. The reward structure is the same as in the PP task, with the only difference being that an agent needs to be within the explore vision range of the enemy unit, instead of at the same location, to get a non-negative reward. We use medic units, which don't attack enemy units. This ensures that we can simulate our explore task without any kind of combat happening and interfering with the goal of the task. The observation for each agent is its own (absolute x, absolute y) and the enemy's (relative x, relative y, visible), where visible, relative x and relative y are 0 when the enemy is not in explore vision range. Agents have 9 actions to choose from, which include the 8 basic directions and one stay action.
Combat: Each agent observes its own (absolute x, absolute y, healthpoints + shield, weapon cooldown, previous action) and, for each of the enemies, (relative x, relative y, visible, healthpoints + shield, weapon cooldown). relative x and relative y are only observed when the enemy is visible, which is indicated by the visible flag. All of the observations are normalized to lie between 0 and 1. The agent has to choose from 9 + m actions, which include the 9 basic actions and one attack action for each of the m enemies. Attack actions only work when the enemy is within the sight range of the agent; otherwise they are no-ops. In combat, we don't compare with prior work on StarCraft because our environment setting is much harder, more restrictive, and new, and is thus not directly comparable.
6.5.2 COMBAT REWARD
To avoid slack in finding the enemy team, we provide a negative reward rtime = -0.01 at each timestep when the agent is not involved in combat. At each timestep, an agent gets as reward the difference between (i) its normalized health at the current and previous timestep and (ii), for each of the enemies it has attacked so far, that enemy's normalized health at the previous and current timestep. At the end of the episode, the terminal reward for each agent consists of (i) its remaining health × 3 as a negative reward, (ii) 5 × m plus its remaining health × 3 as a positive reward if the agents win, and (iii) the normalized remaining health × 3 of all surviving enemies as a negative reward on a loss. In this task, the group of enemies is initialized together at a random location in one half of the map while our agents are initialized separately in the other half, which makes the task even harder and thus requires communication. For an automatic way of individualizing rewards, please refer to Foerster et al. (2018).
6.5.3 EXAMPLE SEQUENCE OF STATES IN COOPERATIVE EXPLORE MODE
We provide an example sequence of states in StarCraft cooperative explore mode in Figure 7. As soon as one of the agents finds the enemy unit, the other agents get the information about enemy’s location through communication and are able to reach it faster. | 1. What is the main contribution of the paper regarding learning architecture?
2. What are the strengths of the proposed method, particularly in terms of individual agent decision-making?
3. Are there any concerns or suggestions regarding the presentation of the methodology and experiments?
4. How does the reviewer assess the clarity and reproducibility of the paper's content?
5. What are the weaknesses of the paper regarding the experimental results and comparisons with other works? | Review | Review
From a methodological perspective, this paper describes a simple but clever learning architecture with individual agents able to decide when to communicate through a learned gating mechanism. Each agent is an LSTM able to decide at each time point which aspects of its internal state should be exposed to other agents through this gating mechanism. The presentation of this method is clear, to a level that should allow the reader to implement it him/herself. It would be great if the code associated with this could be released, but the presentation allows for reproducibility.
The experiments are interesting as well. Experimental results are presented on 3 problems and compared with known baselines from the academic community. The obtained results do show the merit of the approach. That being said, while the experimental results are extensive, there are places that could benefit from more clarity. For instance, I have found section 4.2 a bit dry: I had to read the plot captions and the text several times to get at the deductions made in 4.2. Given the importance of gating in this work, I recommend expanding on this a bit (if space allows it). Small note: in the caption for Figure 3, on the fourth line, did you mean (f) instead of (d) when arguing that agents stop communicating once they reach the prey (or am I missing something here)? Also, would it be possible to provide more insight into why IC3Net is doing better than CommNet except for the Combat-10Mv3Ze task (last table before the conclusion; what makes this task harder for IC3Net)? Another observation is on the variance terms that are reported for IC3Net. They are often (not always, but definitely in the last table before the conclusion) quite a bit higher when compared to the values associated with the baselines. Can this be explained? Another small thing: please add captions to your tables (at least a table number; I think that Table 2 does not have a caption).
Overall, the paper is well written and interesting. Addressing the questions raised above would definitely help me, and probably the eventual readers, better appreciate its quality. |
ICLR | Title
Learning when to Communicate at Scale in Multiagent Cooperative and Competitive Tasks
Abstract
Learning when to communicate and doing it effectively is essential in multi-agent tasks. Recent works show that continuous communication allows efficient training with back-propagation in multi-agent scenarios, but have been restricted to fully cooperative tasks. In this paper, we present the Individualized Controlled Continuous Communication Model (IC3Net), which has better training efficiency than the simple continuous communication model and can be applied to semi-cooperative and competitive settings along with cooperative settings. IC3Net controls continuous communication with a gating mechanism and uses individualized rewards for each agent to gain better performance and scalability while fixing credit assignment issues. Using a variety of tasks including StarCraft BroodWarsTM explore and combat scenarios, we show that our network yields better performance and convergence rates than the baselines as the scale increases. Our results convey that IC3Net agents learn when to communicate based on the scenario and profitability.
1 INTRODUCTION
Communication is an essential element of intelligence, as it helps in learning from others' experience, working better in teams and passing down knowledge. In multi-agent settings, communication allows agents to cooperate towards common goals. In particular, in partially observable environments, when the agents are observing different parts of the environment, they can share information and learnings from their observations through communication.
Recently, there has been a lot of success in the field of reinforcement learning (RL), from playing Atari games (Mnih et al., 2015) to playing Go (Silver et al., 2016), most of which has been limited to the single-agent domain. However, the number of systems and applications with multiple agents has been growing (Lazaridou et al., 2017; Mordatch & Abbeel, 2018), ranging from a team of robots working in manufacturing plants to a network of self-driving cars. Thus, it is crucial to successfully scale RL to multi-agent environments in order to build intelligent systems capable of higher productivity. Furthermore, scenarios other than cooperative, namely semi-cooperative (or mixed) and competitive scenarios, have not been studied as extensively for multi-agent systems.
The mixed scenarios can be compared to most of the real life scenarios as humans are cooperative but not fully-cooperative in nature. Humans work towards their individual goals while cooperating with each other. In competitive scenarios, agents are essentially competing with each other for better rewards. In real life, humans always have an option to communicate but can choose when to actually communicate. For example, in a sports match two teams which can communicate, can choose to not communicate at all (to prevent sharing strategies) or use dishonest signaling (to misdirect opponents) (Lehman et al., 2018) in order to optimize their own reward and handicap opponents; making it important to learn when to communicate. ∗Equal contribution. †Current affiliation. This work was completed when authors were at New York University.
Teaching agents how to communicate makes it unnecessary to hand-code the communication protocol with expert knowledge (Sukhbaatar et al., 2016; Kottur et al., 2017). While the content of communication is important, it is also important to know when to communicate, either to increase scalability and performance or to increase one's competitive edge. For example, a prey needs to learn when to communicate to avoid revealing its location to predators.
Sukhbaatar et al. (2016) showed that agents communicating through a continuous vector are easier to train and have a higher information throughput than communication based on discrete symbols. Their continuous communication is differentiable, so it can be trained efficiently with back-propagation. However, their model assumes full cooperation between agents and uses an average global reward. This prevents the model from being used in mixed or competitive scenarios, as full cooperation involves sharing hidden states with everyone, exposing everything and leading to poor performance by all agents, as shown by our results. Furthermore, the average global reward for all agents makes the credit assignment problem even harder and difficult to scale, as agents don't know their individual contributions in mixed or competitive scenarios, where they want to succeed before others.
To solve above mentioned issues, we make the following contributions:
1. We propose Individualized Controlled Continuous Communication Model (IC3Net), in which each agent is trained with its individualized reward and can be applied to any scenario whether cooperative or not.
2. We empirically show that based on the given scenario–using the gating mechanism–our model can learn when to communicate. The gating mechanism allows agents to block their communication; which is useful in competitive scenarios.
3. We conduct experiments on different scales in three chosen environments including StarCraft and show that IC3Net outperforms the baselines with performance gaps that increase with scale. The results show that individual rewards converge faster and better than global rewards.
2 RELATED WORK
The simplest approach in multi-agent reinforcement learning (MARL) settings is to use an independent controller for each agent. This was attempted with Q-learning in Tan (1993). However, in practice it performs poorly (Matignon et al., 2012), which we also show in comparison with our model. The major issue with this approach is that due to multiple agents, the stationarity of the environment is lost and naïve application of experience replay doesn’t work well.
The nature of interaction between agents can either be cooperative, competitive, or a mix of both. Most algorithms are designed only for a particular nature of interaction, mainly cooperative settings (Omidshafiei et al., 2017; Lauer & Riedmiller, 2000; Matignon et al., 2007), with strategies which indirectly arrive at cooperation via sharing policy parameters (Gupta et al., 2017). These algorithms are generally not applicable in competitive or mixed settings. See Busoniu et al. (2008) for survey of MARL in general and Panait & Luke (2005) for survey of cooperative multi-agent learning.
Our work can be considered an all-scenario extension of Sukhbaatar et al. (2016)'s CommNet for collaboration among multiple agents using continuous communication, which is usable only in cooperative settings, as stated in their work and shown by our experiments. Due to continuous communication, the controller can be learned via backpropagation. However, the model is restricted to fully cooperative tasks, as hidden states are communicated in full to all other agents, which exposes everything about an agent. Moreover, due to the global reward shared by all agents, CommNet also suffers from the credit assignment issue.
The Multi-Agent Deep Deterministic Policy Gradient (MADDPG) model presented by Lowe et al. (2017) also tries to achieve similar goals. However, they differ in the way of providing the coordination signal. In their case, there is no direct communication among agents (actors with different policy per agent), instead a different centralized critic per agent – which can access the actions of all the agents – provides the signal. Concurrently, a similar model using centralized critic and decentralized actors with additional counterfactual reward, COMA by Foerster et al. (2018) was proposed to tackle the challenge of multiagent credit assignment by letting agents know their individual contributions.
Vertex Attention Interaction Networks (VAIN) (Hoshen, 2017) also model multi-agent communication through the use of Interaction Networks (Battaglia et al., 2016) with an attention mechanism (Bahdanau et al., 2015) for predictive modelling in supervised settings. The work by Foerster et al. (2016b) also learns a communication protocol, where agents communicate in a discrete manner through their actions. This contrasts with our model, where multiple continuous communication cycles can be used at each time step to decide the actions of all agents. Furthermore, our approach is amenable to a dynamic number of agents. Peng et al. (2017) also attempt to solve micromanagement tasks in StarCraft using communication. However, they add agents to the communication channel in a non-symmetric manner and are restricted to cooperative scenarios.
In contrast, a lot of work has focused on understanding agents’ communication content; mostly in discrete settings with two agents (Wang et al., 2016; Havrylov & Titov, 2017; Kottur et al., 2017; Lazaridou et al., 2017; Lee et al., 2018). Lazaridou et al. (2017) showed that given two neural network agents and a referential game, the agents learn to coordinate. Havrylov & Titov (2017) extended this by grounding communication protocol to a symbols’s sequence while Kottur et al. (2017) showed that this language can be made more human-like by placing certain restrictions. Lee et al. (2018) demonstrated that agents speaking different languages can learn to translate in referential games.
3 MODEL
In this section, we introduce our model Individualized Controlled Continuous Communication Model (IC3Net) as shown in Figure 1 to work in multi-agent cooperative, competitive and mixed settings where agents learn what to communicate as well as when to communicate.
First, let us describe an independent controller model where each agent is controlled by an individual LSTM. For the j-th agent, its policy takes the form of:
$$h_j^{t+1}, s_j^{t+1} = \mathrm{LSTM}\big(e(o_j^t),\, h_j^t,\, s_j^t\big)$$
$$a_j^t = \pi(h_j^t),$$
where $o_j^t$ is the observation of the j-th agent at time t, $e(\cdot)$ is an encoder function parameterized by a fully-connected neural network, and $\pi$ is the agent's action policy. Also, $h_j^t$ and $s_j^t$ are the hidden and cell states of the LSTM. We use the same LSTM model for all agents, sharing their parameters. This way, the model is invariant to permutations of the agents.
IC3Net extends this independent controller model by allowing agents to communicate their internal state, gated by a discrete action. The policy of the j-th agent in a IC3Net is given by
$$g_j^{t+1} = f^g(h_j^t)$$
$$h_j^{t+1}, s_j^{t+1} = \mathrm{LSTM}\big(e(o_j^t) + c_j^t,\, h_j^t,\, s_j^t\big)$$
$$c_j^{t+1} = \frac{1}{J-1}\, C \sum_{j' \neq j} h_{j'}^{t+1}\, g_{j'}^{t+1}$$
$$a_j^t = \pi(h_j^t),$$
where $c_j^t$ is the communication vector for the j-th agent, C is a linear transformation matrix that maps the gated average hidden state to a communication tensor, J is the number of agents currently alive in the system, and $f^g(\cdot)$ is a simple network consisting of a softmax layer over 2 actions (communicate or not) on top of a linear layer with a non-linearity. The binary action $g_j^t$ specifies whether agent j wants to communicate with others and acts as a gating function when calculating the communication vector. Note that the gating action for the next time step is computed at the current time step. We train both the action policy $\pi$ and the gating function $f^g$ with REINFORCE (Williams, 1992).
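To make this step concrete, the following PyTorch-style sketch implements one IC3Net time step for J agents under the notation above; the module and parameter names (e.g., `IC3NetStep`, `gate_head`) are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class IC3NetStep(nn.Module):
    """One IC3Net time step for J agents with shared parameters (illustrative sketch)."""

    def __init__(self, obs_dim, hidden_dim, num_actions):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)                    # e(.)
        self.lstm = nn.LSTMCell(hidden_dim, hidden_dim)                  # shared LSTM
        self.comm_proj = nn.Linear(hidden_dim, hidden_dim, bias=False)   # C
        self.gate_head = nn.Sequential(                                  # f^g: communicate or not
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 2))
        self.policy_head = nn.Linear(hidden_dim, num_actions)            # pi

    def forward(self, obs, h, s, c):
        # obs: (J, obs_dim); h, s, c: (J, hidden_dim)
        J = obs.size(0)
        action_logits = self.policy_head(h)                  # a_j^t = pi(h_j^t)
        gate_next = torch.distributions.Categorical(         # g_j^{t+1} = f^g(h_j^t),
            logits=self.gate_head(h)).sample().float()       # sampled; trained with REINFORCE
        h_next, s_next = self.lstm(self.encoder(obs) + c, (h, s))
        # c_j^{t+1}: average of the other agents' gated hidden states, mapped through C
        gated = h_next * gate_next.unsqueeze(1)
        c_next = self.comm_proj((gated.sum(0, keepdim=True) - gated) / max(J - 1, 1))
        return action_logits, h_next, s_next, c_next, gate_next
```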
In Sukhbaatar et al. (2016), the individual networks controlling the agents were interconnected, and as a whole they were considered a single big neural network. This single-big-network controller approach required a unified loss function during training, thus making it impossible to train agents with different rewards.
In this work, however, we move away from the single-big-network controller approach. Instead, we consider multiple big networks with shared parameters, each controlling a single agent separately. Each big network consists of multiple LSTM networks, each processing the observation of a single agent, but only one of the LSTMs needs to output an action because the big network controls only a single agent. Although this view has little effect on the implementation (we can still use a single big network in practice), it allows us to train each agent to maximize its individual reward instead of a single global reward. This has two benefits: (i) it allows the model to be applied to both cooperative and competitive scenarios, and (ii) it helps resolve the credit assignment issue faced by many multi-agent algorithms (Sukhbaatar et al., 2016; Foerster et al., 2016a) while improving performance and scalability, which is coherent with the findings of Chang et al. (2003).
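As a rough illustration of this training scheme, the sketch below applies REINFORCE separately to each agent's individual return; the baseline and discounting choices here are simplifying assumptions, not the exact training code.

```python
import torch

def reinforce_loss(action_log_probs, gate_log_probs, rewards, gamma=1.0):
    """Per-agent REINFORCE loss with individual rewards (simplified sketch).

    action_log_probs, gate_log_probs, rewards: tensors of shape (T, J),
    one column per agent; each agent's return uses only its own rewards.
    """
    T, J = rewards.shape
    returns = torch.zeros_like(rewards)
    running = torch.zeros(J)
    for t in reversed(range(T)):                  # discounted return per agent
        running = rewards[t] + gamma * running
        returns[t] = running
    baseline = returns.mean(dim=0, keepdim=True)  # simple per-agent baseline
    advantage = returns - baseline
    # both the action policy pi and the gating function f^g are trained with REINFORCE
    return -((action_log_probs + gate_log_probs) * advantage.detach()).mean()
```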
4 EXPERIMENTS1
We study our network in multi-agent cooperative, mixed and competitive scenarios to understand its workings. We perform experiments to answer the following questions:
1. Can our network learn the gating mechanism to communicate only when needed according to the given scenario? Essentially, is it possible to learn when to communicate?
2. Does our network, using individual rewards, scale better and faster than the baselines? This would clarify whether or not individual rewards perform better than global rewards in multi-agent communication-based settings.
We first analyze the working of the gating action ($g^t$). Later, we train our network in the three chosen environments, with variations in difficulty and required coordination, to assess scalability and performance.
4.1 ENVIRONMENTS
We consider three environments for our analysis and experiments: (i) a predator-prey environment (PP) where predators with limited vision look for a prey on a square grid; (ii) a traffic junction environment (TJ), similar to Sukhbaatar et al. (2016), where agents with limited vision must learn to communicate in order to avoid collisions; and (iii) StarCraft BroodWars2 (SC) explore and combat tasks, which test control of multiple agents in various scenarios where an agent needs to understand and decouple observations for multiple opposing units.
1The code is available at https://github.com/IC3Net/IC3Net. 2StarCraft is a trademark or registered trademark of Blizzard Entertainment, Inc., in the U.S. and/or other countries. Nothing in this paper should be construed as approval, endorsement, or sponsorship by Blizzard Entertainment, Inc.
4.1.1 PREDATOR PREY
In this task, we have n predators (agents) with limited vision trying to find a stationary prey. Once a predator reaches the prey, it stays there and keeps receiving a positive reward until the end of the episode (i.e., until the rest of the predators reach the prey or the maximum number of steps is reached). In the case of zero vision, agents have no direct way of knowing the prey's location unless they step onto it.
We design three cooperation settings (competitive, mixed, and cooperative) for this task with different reward structures to test our network; see Appendix 6.3 for details on the grid, reward structure, observation, and action space. In the mixed scenario, there is no loss or benefit from communicating. In the competitive setting, agents get lower rewards if other agents reach the prey, and in the cooperative setting, the reward increases as more agents reach the prey. We compare with baselines in the mixed setting in subsection 4.3.2, while explicitly learning and analyzing the working of the gating action in subsection 4.2.
We create three levels for this environment, as mentioned in Appendix 6.3, to compare our network's performance with an increasing number of agents and grid size. The 10×10 grid version with 5 agents is shown in Figure 2 (left). All agents are placed randomly in the grid at the start of an episode.
4.1.2 TRAFFIC JUNCTION
Following Sukhbaatar et al. (2016), we test our model on the traffic junction task, as it is a good proxy for testing whether communication is working. This task also supports our claim that IC3Net provides good performance and fast convergence in fully-cooperative scenarios, similar to mixed ones. In the traffic junction, cars enter the junction from all entry points with a probability $p_{arrive}$. The maximum number of cars in the junction at any given time is limited. Cars can take one of two actions at each time step: gas or brake. The task has three difficulty levels (see Figure 2), which vary in the number of possible routes, entry points and junctions. We make this task harder by always setting vision to zero at all three difficulty levels, to ensure that the task is not solvable without communication. See Appendix 6.4 for details on the reward structure, observation and training.
4.1.3 STARCRAFT: BROODWARS
To fully understand the scalability of our architecture in more realistic and complex scenarios, we test it on StarCraft combat and exploration micro-management tasks in partially observable settings. StarCraft is a challenging environment for RL because it has a large observation-action space, many different unit types, and stochasticity. We train our network on the Combat and Explore tasks. The tasks' difficulty can be altered by changing the number of our units, the number of enemy units, and the map size.
By default, the game has macro-actions that allow a player to directly target an enemy unit, which makes the player's unit find the best possible path using the game's in-built path-finding system, move towards the target, and attack when it is in range. However, we make the task harder by (i) removing macro-actions, which makes exploration harder, (ii) limiting vision, which makes the environment partially observable, and (iii) unlike previous works (Wender & Watson, 2012; Ontanón et al., 2013; Usunier et al., 2017; Peng et al., 2017), initializing enemy and our units at random locations within a fixed-size square on the map, which makes it challenging to find enemy units. Refer to Appendix 6.5.1 for reward, action, observation and task details. We consider two types of tasks in StarCraft:
Explore: In this task, we have n agents trying to explore the map and find an enemy unit. This is a direct scale-up of the PP but with more realistic and stochastic situations.
Combat: We test an agent's capability to execute the complex task of combat in StarCraft, which requires coordination between teammates, exploration of the terrain, understanding of enemy units, and the formulation of complex strategies. We specifically test a team of n agents trying to find and kill a team of m enemies in a partially observable environment similar to the explore task. The agents, with their limited vision, must find the enemy units and kill all of them to score a win. More information on the reward structure, observations and setup can be found in Appendix 6.5.1 and 6.5.2.
4.2 ANALYSIS OF THE GATING MECHANISM
We analyze the working of the gating action ($g^t$) in IC3Net using the cooperative, competitive and mixed settings of the Predator-Prey (4.1.1) and StarCraft explore (4.1.3) tasks. However, this time the enemy unit (prey) shares parameters with the predators and is trained with them. All of the enemy unit's actions are noops, which keeps it stationary. The enemy unit gets a positive reward of $r_{time} = 0.05$ per timestep as long as no predator/medic has captured it; after that, it gets a reward of 0.
For the 5×5 grid in the PP task, Figure 3 shows the gating action (averaged per epoch) in all scenarios for (i) communication among predators and (ii) communication between the prey and the predators. We also test a 50×50 map for the competitive and cooperative StarCraft explore task and find similar results (Fig. 3d). We can deduce the following observations:
• As can be observed in Figures 3a, 3b, 3c and 3d, in all four cases the prey learns not to communicate. If the prey communicates, the predators will reach it faster. Since it gets 0 reward once an agent comes near or on top of it, it does not communicate, in order to achieve higher rewards.
• In the cooperative setting (Figures 3a, 3e), the predators communicate openly, with g close to 1. Even though the prey communicates with the predators at the start, it eventually learns not to communicate so as not to share its location. As all agents are communicating in this setting, it takes more training time to adjust the prey's weights towards silence. Our preliminary tests suggest that in cooperative settings it is beneficial to fix the gating action to 1.0, as communication is almost always needed; this also speeds up training by skipping the need to train the gating action.
• In the mixed setting (Figure 3b), agents do not always communicate, which corresponds to the fact that there is no benefit or loss from communicating in the mixed scenario. The prey easily learns not to communicate, as the predators' weights are also adjusted towards non-cooperation from the start.
• As expected due to competition, predators rarely communicate in the competitive setting (Figures 3c, 3d). Note that this setting is not fully adversarial, as predators can initially explore faster if they communicate, which can eventually lead to higher overall rewards. This can be observed in that the agents only communicate while it is profitable for them, i.e., before reaching the prey (Figure 3f), since communicating afterwards can hurt their future rewards.
The experiments in this section empirically suggest that agents can "learn to communicate when it is profitable", thus allowing the same network to be used in all settings.
4.3 SCALABILITY AND GENERALIZATION EXPERIMENTS
In this section, we look at bigger versions of our environments to understand scalability and generalization aspects of IC3Net.
4.3.1 BASELINES
For training details, refer to Appendix 6.1. We compare IC3Net with baselines specified below in all scenarios.
Individual Reward Independent Controller (IRIC): In this controller, the model is applied individually to each agent's observation to produce the action to be taken. Essentially, this can be seen as IC3Net without any communication between agents, but with an individualized reward for each agent. Note that without communication, the gating action ($g^t$) has no effect.
Independent Controller (IC - IC3Net w/o Comm and IR): Like IRIC except the agents are trained with a global average reward instead of individual rewards. This will help us understand the credit assignment issue prevalent in CommNet.
CommNet: Introduced in Sukhbaatar et al. (2016), CommNet allows communication between agents over a channel where an agent is provided with the average of hidden state representations of other agents as a communication signal. Like IC3Net, CommNet also uses continuous signals to communicate between the agents. Thus, CommNet can be considered as IC3Net without both the gating action (gt) and individualized rewards.
4.3.2 RESULTS
We discuss major results for our experiments in this section and analyze particular behaviors/patterns of agents in Appendix 6.2.
Predator Prey: Table 1 (left) shows the average steps taken by the models to complete an episode, i.e., to find the prey, in the mixed setting (we found similar results for the cooperative setting, shown in the appendix). IC3Net reaches the prey faster than the baselines as we increase the number of agents as well as the size of the maze. In the 20×20 version, the gap in average steps is almost 24 steps, which is a substantial improvement over the baselines. Figure 4 (right) shows the scalability graph for IC3Net and CommNet, which supports the claim that with an increasing number of agents, IC3Net converges faster to a better optimum than CommNet. Through these results on the PP task, we can see that, compared to IC3Net, CommNet does not work well in mixed scenarios. Finally, Figure 4 (left) shows the training plot for the 20×20 grid with 10 agents trying to find a prey. The plot clearly shows the faster performance improvement of IC3Net, in contrast to CommNet, which takes a long time to achieve even a minor jump. We also find the same pattern of gating action values as in 4.2.
Traffic Junction: Table 1 (right) shows the success ratio for the traffic junction. We fixed the gating action to 1 for TJ, as discussed in 4.2. With zero vision, it is not possible to perform well without communication, as evident from the results of IRIC and IC. Interestingly, IC performs better than IRIC in the hard case; we believe that without communication, the global reward in TJ acts as a better indicator of overall performance. On the other hand, with communication and better knowledge of others, training with the global reward faces a credit assignment issue, which is alleviated by IC3Net, as evident from its superior performance compared to CommNet. In Sukhbaatar et al. (2016), well-performing agents in the medium and hard versions had vision > 0. With zero vision, IC3Net is superior to CommNet and IRIC with a performance gap greater than 30%. This verifies that the individualized rewards in IC3Net help achieve better or similar performance to CommNet in fully-cooperative tasks with communication, due to better credit assignment.
StarCraft: Table 2 displays the win % and the average number of steps taken to complete an episode in the StarCraft explore and combat tasks. We specifically test (i) the explore task with 10 medics finding 1 enemy medic on a 50×50 cell grid, (ii) the same explore task on a 75×75 cell grid, and (iii) the combat task with 10 Marines vs 3 Zealots on a 50×50 cell grid. The maximum number of steps in an episode is set to 60. The results on the explore task are similar to Predator-Prey, as IC3Net outperforms the baselines. Moving to the bigger map size, we still see the performance gap, even though performance drops for all models.
On the combat task, IC3Net performs comparably to CommNet. A detailed analysis of IC3Net's performance in the StarCraft tasks is provided in Appendix 6.2.1. To confirm that 10 Marines vs 3 Zealots is hard to win, we run an experiment on the reverse scenario, where our agents control 3 Zealots initialized separately and the enemies are 10 Marines initialized together. We find that both IRIC and IC3Net easily reach a success percentage of 100%, and that even in this case IC3Net converges faster than IRIC.
5 CONCLUSIONS AND FUTURE WORK
In this work, we introduced IC3Net which aims to solve multi-agent tasks in various cooperation settings by learning when to communicate. Its continuous communication enables efficient training by backpropagation, while the discrete gating trained by reinforcement learning along with individual rewards allows it to be used in all scenarios and on larger scale.
Through our experiments, we show that IC3Net performs well in cooperative, mixed and competitive settings and learns to communicate only when necessary. Further, we show that agents learn to stop communicating in competitive cases. We demonstrate the scalability of our network through further experiments. In the future, we would like to explore the possibility of multi-channel communication, where agents can decide on which channel to put their information, similar to communication groups but dynamic. It would also be interesting to give agents a choice of whether or not to listen to communication from a channel.
Acknowledgements Authors would like to thank Zeming Lin for his consistent support and suggestions around StarCraft and TorchCraft.
6 APPENDIX
6.1 TRAINING DETAILS
We set the hidden layer size to 128 units and use an LSTM (Hochreiter & Schmidhuber, 1997) with recurrence for all of the baselines and IC3Net. We use RMSProp (Tieleman & Hinton, 2012) with the initial learning rate as a tuned hyper-parameter. All of the models use skip-connections (He et al., 2016). Training is distributed over 16 cores, and each core runs a mini-batch until the total episode steps reach 500 or more. We do 10 weight updates per epoch. We run the predator-prey and StarCraft experiments for 1000 epochs and the traffic junction experiments for 2000 epochs, and report the final results. In the mixed case, we report the mean score of all agents, while in the cooperative case we report any one agent's score, as they are all the same. We implement our model using PyTorch and the environments using Gym (Brockman et al., 2016). We use REINFORCE (Williams, 1992) to train our setup. We conduct 5 runs on each of the tasks to compile our results. The training time varies across tasks; StarCraft tasks usually take more than a day (depending on the number of agents and enemies), while the predator-prey and traffic junction tasks complete in under 12 hours.
6.2 RESULTS ANALYSIS
In this section, we analyze and discuss behaviors/patterns in the results on our experiments.
6.2.1 IC3NET IN STARCRAFT-COMBAT TASK
As observed in Table 2, IC3Net performs better than CommNet on the explore task but does not outperform it on the combat task. Our experiments and visualizations of the learned strategies suggest that, compared to exploration, combat can be solved far more easily if the units learn to stay together: focused firepower with a larger number of attackers generally yields good combat results. We verify this hypothesis by running a heuristic baseline, "attack closest", in which agents have full vision of the map and macro-actions available3. By jointly attacking the closest available enemy, the agents are able to kill the Zealots with a success ratio of 76.6 ± 8 computed over 5 runs, even though they are initialized separately. Also, as described in Appendix 6.5.2, the global reward for a win in the combat task is large relative to the individual rewards for killing enemy units. We believe that the coordination to stay together, the large global reward, and focus fire, which is achievable through simple cooperation, together explain CommNet's performance on this task.
Further, in exploration we have seen that agents go in separate directions and have an individual reward/sense of exploration, which usually leads to faster exploration of an unexplored area. In simple terms, exploring a house is faster if different people handle different rooms. Achieving this is hard in CommNet, because the global reward does not reflect an agent's individual contribution when agents explore separately. We have also observed that CommNet agents follow a pattern where they gather at a point and then explore together from that point, which further suggests that with CommNet it is easy for agents to get together4.
6.2.2 VARIANCE IN IC3NET
In Figure 5, we observe significant variance in the IC3Net results for StarCraft. We performed many experiments on StarCraft and attribute this variance to the stochasticity of the environment. There is a huge number of possible states in which agents can end up, due to the millions of possible interactions and their outcomes in StarCraft, and we believe it is hard to learn all of them. This stochasticity-driven variance can even be seen in simple heuristic baselines like "attack closest" (6.2.1) and is in fact an indicator of how difficult it is to learn real-world scenarios, which have a similar amount of stochasticity. We believe that we do not see similar variance in CommNet and the other baselines because adding the gating action increases the number of action-state-space combinations, which yields better results but can be harder to learn. Further, this variance is only observed in the higher Win % models, which have to learn more of the state space.
3Macro-actions correspond to the "right click" feature in StarCraft and Dota, in which a unit can be ordered to attack another unit; the unit follows the shortest path on the map towards the target and, once in range, starts attacking automatically. This essentially allows the "attack closest" baseline to attack anyone under full vision without any exploration.
4This pattern of CommNet in PP can be observed in this video: https://gfycat.com/IllustriousMarvelousKagu. The video has been generated using a trained CommNet model on PP-Hard. Here, red 'X' marks are predators and 'P' is the prey to be found. We can observe the pattern where the agents first get together to find the prey, which eventually leads to slack.
6.2.3 COMMNET IN STARCRAFT-EXPLORE TASKS
In Table 2, we can observe that CommNet performs worse than IRIC and IC on the StarCraft explore task. In this section, we provide a hypothesis for this result. First, notice that IRIC is also better than IC overall, which points to the fact that individualized rewards are better than global rewards for exploration. This makes sense because if agents cover more area and know how much they covered through their own contribution (individual reward), it should lead to more overall coverage, compared to global rewards where agents cannot figure out their own coverage, only the overall one. Second, with CommNet it is easy to communicate and get together. We observe this pattern in CommNet4, where agents first get together at a point and then start exploring from there, which leads to slow exploration; IC is better in this respect because without communication it is hard to gather at a single point, which inherently leads to faster exploration than CommNet. Third, the reward structure in the mixed scenario does not reward searching together, which is not directly visible to CommNet and IC due to their global rewards.
6.3 DETAILS OF PREDATOR PREY
In all three settings (cooperative, competitive, and mixed), a predator agent gets a constant per-time-step penalty $r_{explore} = -0.05$ until it reaches the prey. This makes sure that an agent does not slack in finding the prey. In the mixed setting, once an agent reaches the prey, it always gets a positive reward $r_{prey} = 0.05$, which does not depend on the number of agents on the prey. Similarly, in the cooperative setting an agent gets a positive reward of $r_{coop} = r_{prey} \cdot n$, and in the competitive setting a positive reward of $r_{comp} = r_{prey} / n$ after it reaches the prey, where n is the number of agents on the prey. The total reward at time t for an agent i can be written as:
$$r^{pp}_i(t) = \delta_i \cdot r_{explore} + (1 - \delta_i) \cdot n_t^{\lambda} \cdot r_{prey}$$
where $\delta_i$ indicates whether agent i is still searching for the prey ($\delta_i = 1$ until it reaches the prey and 0 afterwards), $n_t$ is the number of agents on the prey at time step t, and $\lambda$ is -1, 0 and 1 in the competitive, mixed and cooperative scenarios respectively. Maximum episode steps are set to 20, 40 and 80 for the 5×5, 10×10 and 20×20 grids respectively. The number of predators is 5, 10 and 20 in the 5×5, 10×10 and 20×20 grids respectively. Each predator can take one of five basic movement actions, i.e., up, down, left, right or stay. The predator, prey, and all grid locations are treated as unique classes in a vocabulary and represented as one-hot binary vectors. The observation obs at each point is the sum of the one-hot binary vectors of the location, predators, and prey present at that point. With a vision of 1, the observation of each agent has dimension $3^2 \times |obs|$.
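For concreteness, here is a minimal sketch of this per-agent reward, assuming the reading of $\delta_i$ and $\lambda$ given above (the function name and signature are illustrative):

```python
def pp_reward(searching: bool, n_on_prey: int, scenario: str,
              r_explore: float = -0.05, r_prey: float = 0.05) -> float:
    """Per-agent, per-timestep Predator-Prey reward (illustrative sketch).

    scenario: 'competitive' (lambda=-1), 'mixed' (lambda=0), 'cooperative' (lambda=1).
    """
    lam = {"competitive": -1, "mixed": 0, "cooperative": 1}[scenario]
    if searching:                       # delta_i = 1: still looking for the prey
        return r_explore
    return (n_on_prey ** lam) * r_prey  # delta_i = 0: r_prey/n, r_prey, or r_prey*n


# Example: three agents already on the prey in the cooperative setting
assert abs(pp_reward(False, 3, "cooperative") - 0.15) < 1e-9
```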
6.3.1 EXTRA EXPERIMENTS
Table 3 shows the results for IC3Net and the baselines in the cooperative scenario of the predator-prey environment. As the cooperative reward function provides more reward after a predator reaches the prey, the comparison is reported in terms of rewards instead of the average number of steps. IC3Net performs better than or equal to CommNet and the other baselines at all three difficulty levels. The performance gap grows as we move towards bigger grids, which shows that IC3Net is more scalable due to individualized rewards. More importantly, even with the extra gating action to train, IC3Net performs comparably to CommNet, which is designed for cooperative scenarios; this suggests that IC3Net is a suitable choice for all cooperation settings.
To analyze the effect of the gating action on rewards in the mixed scenario, where individualized rewards alone can already help a lot, we test the Predator-Prey mixed cooperation setting on the 20×20 grid with a baseline in which the gating action is fixed to 1 (global communication) and individual rewards are used (IC2Net, i.e., CommNet + IR). We find the average steps to be 50.24 ± 3.4, which is lower than IC3Net's. This means that (i) individualized rewards help a lot in mixed scenarios by allowing agents to understand their contributions, and (ii) adding the gating action introduces an overhead but allows the same model to work in all settings (even competitive) by "learning to communicate", which is closer to real-world humans, at a negligible cost in performance.
6.4 DETAILS OF TRAFFIC JUNCTION
The traffic junction's observation vocabulary has one-hot vectors for all locations in the grid and for the car class. Each agent observes its previous action, its route identifier, and a vector specifying the sum of one-hot vectors for all classes present at that agent's location. A collision occurs when two cars are at the same location. We set the maximum number of steps to 20, 40 and 60 for the easy, medium and hard difficulty respectively. Similar to Sukhbaatar et al. (2016), we provide a negative reward $r_{coll} = -10$ on collision. To discourage traffic jams, we provide a negative reward $r_{time}\,\tau_i = -0.01\,\tau_i$, where $\tau_i$ is the time spent by the agent in the junction up to time step t. The reward for the i-th agent with $C^t_i$ collisions at time step t can be written as:
$$r^{tj}_i(t) = r_{coll}\, C^t_i + r_{time}\, \tau_i$$
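A minimal sketch of this reward computation (names are illustrative):

```python
def tj_reward(num_collisions: int, time_in_junction: int,
              r_coll: float = -10.0, r_time: float = -0.01) -> float:
    """Traffic-junction reward for one agent at one time step (illustrative sketch)."""
    return r_coll * num_collisions + r_time * time_in_junction
```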
We utilize curriculum learning (Bengio et al., 2009) to make the training process easier. The arrival probability $p_{arrive}$ is kept at its start value for the first 250 epochs and is then linearly increased to its end value from the 250th to the 1250th epoch. The start and end values of $p_{arrive}$ for the different difficulty levels are indicated in Table 4. Finally, training continues for another 750 epochs. The learning rate is fixed at 0.003 throughout. We implement three difficulty variations of the game, explained as follows.
The easy version is a junction of two one-way roads on a 7× 7 grid. There are two arrival points, each with two possible routes and with a Ntotal value of 5.
The medium version consists of two connected junctions of two-way roads in 14 × 14 as shown in Figure 2 (right). There are 4 arrival points and 3 different routes for each arrival point and have Ntotal = 20.
The harder version consists of four connected junctions of two-way roads in 18 × 18 as shown in Figure 6. There are 8 arrival points and 7 different routes for each arrival point and have Ntotal = 20.
6.4.1 IRIC AND IC PERFORMANCE
In Table 1, we notice that IRIC and IC perform worse on the medium level than on the hard level. Our visualizations suggest that this is due to the higher final add-rate of the medium version compared to the hard version. Collisions happen much more often in the medium version, leading to a lower success rate (an episode is considered a failure if a collision happens), whereas for the hard version the initial add-rate is low to accommodate curriculum learning on its larger grid. The final add-rate of the hard level is kept comparatively low to make sure that it is possible to pass a junction without a collision, since with more entry points it is easy to collide even with a small add-rate.
6.5 STARCRAFT DETAILS
6.5.1 OBSERVATION AND ACTIONS
Explore: To complete the explore task, agents must be within a particular range of the enemy unit, called the explore vision. Once an agent is within the explore vision of the enemy unit, we noop its further actions. The reward structure is the same as in the PP task, the only difference being that an agent needs to be within the explore vision range of the enemy unit, rather than on the same location, to get a non-negative reward. We use medic units, which do not attack enemy units; this ensures that we can simulate the explore task without any combat happening and interfering with the goal of the task. The observation for each agent is its own (absolute x, absolute y) and the enemy's (relative x, relative y, visible), where visible, relative x and relative y are 0 when the enemy is not in the explore vision range. Agents have 9 actions to choose from, comprising the 8 basic directions and one stay action.
Combat: Each agent observes its own (absolute x, absolute y, health points + shield, weapon cooldown, previous action) and, for each of the enemies, (relative x, relative y, visible, health points + shield, weapon cooldown). relative x and relative y are only observed when the enemy is visible, as indicated by the visible flag. All observations are normalized to lie in (0, 1). Each agent has to choose from 9 + m actions, which include the 9 basic actions and 1 attack action for each of the m enemies. Attack actions only work when the enemy is within the sight range of the agent; otherwise they are a noop. In combat, we do not compare with prior work on StarCraft because our environment setting is much harder, more restrictive, new and different, and thus not directly comparable.
6.5.2 COMBAT REWARD
To avoid slack in finding the enemy team, we provide a negative reward $r_{time} = -0.01$ at each timestep in which the agent is not involved in combat. At each timestep, an agent receives as reward the difference between (i) its normalized health at the current and previous timestep and (ii) the normalized health at the previous and current timestep for each of the enemies it has attacked so far. At the end of the episode, the terminal reward for each agent consists of (i) its remaining health × 3 as a negative reward, (ii) 5m plus its remaining health × 3 as a positive reward if the agents win, and (iii) the normalized remaining health × 3 of all surviving enemies as a negative reward on a loss. In this task, the group of enemies is initialized together at a random location in one half of the map and our agents are initialized separately in the other half, which makes the task even harder and thus requires communication. For an automatic way of individualizing rewards, please refer to Foerster et al. (2018).
6.5.3 EXAMPLE SEQUENCE OF STATES IN COOPERATIVE EXPLORE MODE
We provide an example sequence of states in the StarCraft cooperative explore mode in Figure 7. As soon as one of the agents finds the enemy unit, the other agents get the information about the enemy's location through communication and are able to reach it faster. | 1. What are the key contributions and novel aspects introduced by the paper in extending Sukhbaatar et al.'s work?
2. What are the strengths of the paper regarding its methodology and experimental design?
3. Do you have any concerns or questions regarding the method and experiments, particularly in non-fully-cooperative environments and individual rewards?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
This work is an extension of the work of Sukhbaatar et al. (2016) with two main differences:
1) Selective communication: agents are able to decide whether they want to communicate.
2) Individualized reward: Agents receive individual rewards; therefore, agents are aware of their contribution towards the goal.
These two new extensions enable their model to work in cooperative settings as well as in mixed competitive/collaborative settings. The authors also claim these two extensions enable their model to converge faster and to a better optimum.
The paper is well written, easy to follow, and everything has been explained quite well. The experiments are competent in the sense that the authors ran their model in four different environments (predator and prey, traffic junction, StarCraft explore, and StarCraft combat). The comparison of their model with three baselines was extensive; they reported the mean and variance over different runs. I have some concerns regarding their method and the experiments, which are brought up in the following:
Method:
In a non-fully-cooperative environment, sharing the entire hidden state as the only option for communication is not very reasonable; I think something like sending a message is a better option and more realistic (e.g., something like the work of Mordatch & Abbeel, 2017).
Experiment:
The experiment "StarCraft explore" is similar to predator-prey; therefore, instead of explaining StarCraft explore, I would like to see how the model works in StarCraft combat. Right now, the authors explain a bit about the model performance in Starcraft combat, but I found the explanation confusing.
Authors provide 3 baselines:
1) no communication, but IR
2) no communication, no IR
3) global communication, no IR (commNet)
I think having a baseline that has global communication with IR can show the effect of selective communication better.
There are some questions in the experiment section that have not been addressed very well. For example:
Is there any difference between the results of table 1, if we look at the cooperative setup?
Does their model outperform a model which has global communication with IR?
Why do IRIC and IC work worse on the medium level than on the hard level in TJ in Table 1?
Why does CommNet work worse than IRIC and IC in Table 2? |
ICLR | Title
Global View For GCN: Why Go Deep When You Can Be Shallow?
Abstract
Existing graph convolutional network (GCN) methods attempt to expand the receptive field of their convolutions by either stacking up more convolutional layers or accumulating multi-hop adjacency matrices. Either approach increases computational complexity while providing a limited view of the network topology. We propose to extend k-hop adjacency matrices into one generalized exponential matrix to provide GCNs with a global overview of the network topology. This technique allows GCNs to learn global topology without going deep and with far fewer parameters than most state-of-the-art GCNs, challenging the common assumption that deep GCNs are empirically better for learning global features. We show a significant improvement in performance in semi-supervised learning when this technique is used for common GCNs while maintaining much shallower network architectures (≤ 4 layers) than the existing ones.
1 INTRODUCTION
Graph neural network (GNN) was introduced by Gori et al. (2005) and Scarselli et al. (2009) to generalize the existing neural network approaches to process data with graph representations. It is widely used in fields such as drug discovery (Jiang et al. (2020)), protein prediction (Jumper et al. (2021)), e-commerce, social networks, and molecular chemistry (Wu et al. (2021)) where data are naturally expressed in forms of graphs. While traditional neural networks are only able to perform predictions based on data inputs, GNNs benefit from using versatile graph data structures to provide a more structural and robust prediction.
Graph convolutional networks (GCNs) (Bruna et al. (2014), Kipf & Welling (2017)) extend convolutional neural networks to GNNs by enabling local-level convolution over each graph node. In particular, the main approach consists of two steps: aggregation and update. First, each node aggregates the feature vectors of the neighboring nodes, including that of the node itself, to accumulate local structural information. Second, each aggregated node feature vector is updated by fully connected layers to improve the node feature representation.
GCN uses the adjacency matrix for learning over local neighborhoods, in particular 1-hop neighborhoods. Since long-path dependency is ignored at local levels, GCN is limited to learning only the local structures while missing the global characteristics of the entire graph. As a result, a deeper GCN (Li et al. (2019)) is often sought after; one can expand the receptive field of GCN with the concatenation of each graph convolutional layer. However, this causes over-smoothing (Li et al. (2018)) where each neighborhood has a similar and indistinguishable feature vector, resulting in a sharp drop in prediction accuracy and graph representation skills (Zhao & Akoglu (2019)). Hence, this creates a dilemma: while a deeper GCN can achieve a wider receptive field, it can also negatively affect test performance.
A series of works from Li et al. (2021); Chen et al. (2020); Rong et al. (2019); Hasanzadeh et al. (2020) introduces various techniques, including initial residual learning, normalization, and dropout to mitigate the impact of over-smoothing while employing deep GCNs. Yet, the issue of deep GCNs is not completely resolved.
In this work, we propose GlobalGCN to fundamentally overcome this dilemma and significantly reduce the computational cost. GlobalGCN generates a topological representation of the entire graph structure via one global attention matrix. It uses the matrix exponential to summarize the global dependence between nodes, thereby providing each node with global information about its neighborhood. As a result, we can avoid over-smoothing feature vectors by restricting our GCNs to shallow networks (as few as 4 layers).
In summary, we make four contributions. First, we introduce the concept of global attention matrix (GAM) to enable convolution with the largest possible receptive field. Second, we provide mathematical intuitions behind the GAM with respect to its impacts on GNNs. Third, we are able to use the GAM to have a better interpretation of how a graph is structured and how well a graph can be learned. Lastly, we empirically validate our theoretical analysis and show that global topological information helps GNNs to gain higher accuracy with fewer parameters and shallower networks in semi-supervised learning settings.
2 GLOBAL-STRUCTURE-AWARE CONVOLUTION
In this section, we provide both practical and theoretical motivation for our proposed model. Even though the adjacency matrix is able to detect the structure of a graph, it is bounded to its local view and cannot directly incorporate the global characteristic of the graph. Consequently, we define the global attention matrix (GAM) to describe the network topology. Definition 2.1. Consider an undirected graph G = (V,E) where V is a set of vertices, or nodes, and E is a set of edges between vertices. An adjacency matrix A is given by Aij = 1 if there exists an edge e ∈ E connecting the ith node Vi ∈ V and the j th node Vj ∈ V , and 0 otherwise. We define
$$\exp(A) = \sum_{i \geq 0} \frac{A^i}{i!} \qquad (1)$$
as the global attention matrix (GAM) that describes the global topology of the network.
The intuition behind this definition is as follows. For a given graph G, its adjacency matrix A, and a positive integer k, $(A^k)_{ij}$ describes the number of k-hop paths from node $V_i$ to node $V_j$ in G. A large value of $(A^k)_{ij}$ means more k-hop similarity between node $V_i$ and node $V_j$. This similarity value can thereby be regarded as an importance weight between node $V_i$ and node $V_j$ (at the k-hop level). This intuition is summarized in the following lemma. Lemma 2.1. If there exists an n-hop path between node $V_i$ and node $V_j$ in an undirected graph G = (V,E), then $\exp(A)_{ij} \neq 0$.
The factorial division term in equation (1) performs two tasks. First, it factorially decays the importance weight as the number of hops in a path increases; therefore, the similarity value for nodes that are closer in terms of shortest-path distance is naturally favored. Second, due to the factorial division term, the GAM is mathematically stable in the sense that its terms converge to 0 rapidly enough that the total infinite sum is guaranteed to exist.
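As a concrete illustration, the GAM can be computed with a matrix exponential or approximated by a truncated series; the NumPy/SciPy sketch below assumes a small dense adjacency matrix, and the function name is an illustrative choice.

```python
import numpy as np
from scipy.linalg import expm

def global_attention_matrix(adj, truncate=None):
    """Compute exp(A) = sum_{i>=0} A^i / i! for a small, dense adjacency matrix.

    If `truncate` is given, use a truncated power series instead of scipy's expm;
    the factorial decay makes the series converge quickly.
    """
    if truncate is None:
        return expm(adj)
    gam = np.eye(adj.shape[0])            # i = 0 term
    term = np.eye(adj.shape[0])
    for i in range(1, truncate + 1):
        term = term @ adj / i             # A^i / i!
        gam = gam + term
    return gam

# Toy example: a 3-node path graph 0-1-2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
gam = global_attention_matrix(A)
# Nodes 0 and 2 are only connected through a 2-hop path, yet exp(A)[0, 2] > 0,
# with a smaller weight than the directly connected pair (0, 1).
print(gam[0, 1], gam[0, 2])
```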
2.1 CONVERGENCE OF GAM
We use matrix theory to understand some properties of the GAM, especially its decaying characteristic. We want very long-distance relationships to be attenuated as quickly as possible, because graph dependencies between two very distant nodes should be minimal, even if they are connected by some underlying paths.
Lemma 2.2. For a normalized adjacency matrix $\tilde{A} = \hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}$, where $\hat{A} = A + I$ and $\hat{D}$ is the degree matrix of $\hat{A}$,
$$\lim_{k \to \infty} \frac{\|\tilde{A}^k\|_F}{k!} = 0 \qquad (2)$$
with convergence rate $O\!\left(\frac{C^k}{k!}\right)$, where C is a positive constant and $\|\cdot\|_F$ is the Frobenius norm.
Proof. Based on the definition of the normalized adjacency matrix, $\tilde{A}_{ij} = \frac{\hat{A}_{ij}}{\sqrt{d_i}\sqrt{d_j}}$, where $d_i$ and $d_j$ are given by the corresponding diagonal entries of $\hat{D}$. Since $\tilde{A}$ is symmetric, the decomposition of $\tilde{A}$ is given by
à = QΛQT (3)
where Q is an orthonormal matrix and Λ is a diagonal matrix with the eigenvalues of à on the diagonal entries. Then for a nonnegative integer k,
$$\tilde{A}^k = Q \Lambda^k Q^T. \qquad (4)$$
Therefore, the Frobenius norm of Ãk/k! is bounded above by
$$\frac{\|\tilde{A}^k\|_F}{k!} = \frac{\|Q \Lambda^k Q^T\|_F}{k!} = \frac{\sqrt{\mathrm{Tr}(\Lambda^{2k})}}{k!} \leq \frac{\sqrt{N} \cdot \sigma_{\max}(|\Lambda|^k)}{k!} \leq \frac{C_1(\sqrt{N}) \cdot C_2^k}{k!} \qquad (5)$$
for positive constants $C_1, C_2$, where $\sigma_{\max}(\cdot)$ and $\mathrm{Tr}(\cdot)$ indicate the maximum eigenvalue and the trace of a given matrix, respectively. N denotes the number of nodes of the graph associated with the adjacency matrix A, and $|\Lambda| = \mathrm{diag}(|\lambda_1|, ..., |\lambda_N|)$. Remark. $C_1$ depends on $\sqrt{N}$. As the number of nodes of a graph increases, more additive power terms (i.e., $\tilde{A}^k/k!$) contribute to the GAM due to the weighting effect of $C_1$; therefore, longer path dependencies are captured.
Both lemmas suggest that the GAM is able to learn two important features. As the norm of the k-th power of the adjacency matrix divided by the factorial term decays rapidly, the GAM captures not only the connectedness of two nodes, regardless of path length, but also a similarity between two nodes in which connections via long paths are heavily penalized by the decay terms.
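A quick numerical sketch of this decay on an arbitrary small random graph (illustrative only):

```python
import numpy as np
from math import factorial

# Toy check of Lemma 2.2: the factorial-damped norm of the normalized adjacency
# matrix decays rapidly with k.
rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops
A_hat = A + np.eye(20)                             # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_tilde = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

for k in range(1, 8):
    print(k, np.linalg.norm(np.linalg.matrix_power(A_tilde, k), "fro") / factorial(k))
```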
2.2 GLOBALGCN
Based on our previous theoretical analysis, we propose a novel GCN architecture using the GAM called GlobalGCN, where we replace the adjacency matrix in GCN with the GAM.
$$H^{(l+1)} = \sigma\big(\mathrm{Dropout}(\exp(A)) \cdot H^{(l)} \cdot \mathrm{Dropout}(W^{(l)})\big) \qquad (6)$$
where $\sigma$ is an activation function, $W^{(\cdot)}$ a weight matrix, and $H^{(\cdot)}$ a feature matrix. For the experiments, we set $\sigma$ to the ReLU activation function (Agarap (2018)).
Dropout is used over both the GAM and weight matrices for particular purposes. Dropout in the GAM is necessary for two reasons. First, the entry values of the GAM are dominated by a small proportion of large values while the vast majority center around zero. This leads GlobalGCN to have a strong tendency towards learning by dominant edge connections unless dropout is performed over the GAM. Dropout makes it possible to learn less important edges, and thus, the underlying graph structures can be considered. Second, by using dropout, GlobalGCN is trained over different subgraphs at each iteration. The final prediction can be interpreted as an ensemble of subgraph predictions, making the neural network more robust.
In weight matrices, dropout is used for regularization; it prevents neural networks from overly relying on certain neurons.
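A minimal PyTorch sketch of the layer in equation (6), assuming the GAM $\exp(A)$ has been precomputed as a dense tensor; module and parameter names are illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class GlobalGCNLayer(nn.Module):
    """One GlobalGCN layer: H_{l+1} = sigma(Dropout(exp(A)) H_l Dropout(W_l))."""

    def __init__(self, in_dim, out_dim, dropout=0.6):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)    # Xavier initialization, as in the paper
        self.drop = nn.Dropout(dropout)         # same rate used for the GAM and the weights

    def forward(self, gam, h):
        # gam: (N, N) dense global attention matrix exp(A); h: (N, in_dim) node features
        return torch.relu(self.drop(gam) @ h @ self.drop(self.weight))
```

In practice, $\exp(A)$ would be computed once (for example with the matrix-exponential sketch above) and reused across all layers.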
3 RELATED WORK
GCN (Kipf & Welling (2017)) performs convolution over graphs by first aggregating node features of neighborhoods and then updating the node feature itself with the weight matrix.
$$H^{(l+1)} = \sigma(\tilde{A} H^{(l)} W^{(l)}) \qquad (7)$$
where à = D̂−1/2ÂD̂−1/2. This approach suffers from over-smoothing as the number of layers becomes larger and thereby cannot incorporate the global topology of a graph.
GAT (Veličković et al. (2018)) updates each feature vector of nodes with fully connected layers and finds the self-attention matrix over each individual node. Afterward, it makes the weighted sum of the updated feature vectors over the neighborhood of each node. This method does not fully rely on the given structural information. Attention introduces extra parameters and creates computation/learning overheads. The spatial and computational complexity increases quadratically
with respect to the number of nodes. Furthermore, there is no guarantee that long-path dependency can be fully captured by the attention matrix.
PPNP (Klicpera et al. (2018)) uses personalized PageRank to generalize the graph structure and uses a multilayer perceptron (MLP) over feature vectors via
$$H = \alpha\,(I_n - (1-\alpha)\tilde{A})^{-1} f_\theta(X) \qquad (8)$$
where X is the input feature matrix and $f_\theta(\cdot)$ is an MLP. The personalized PageRank term can be interpreted as
$$(I_n - (1-\alpha)\tilde{A})^{-1} = \sum_{i \geq 0} \big((1-\alpha)\tilde{A}\big)^i \qquad (9)$$
which is a geometric sum of normalized weight matrices. This approach attempts to incorporate graph structural information; however, its weights may not decay fast enough to penalize connections between nodes that are far from each other.
Cluster-GCN (Chiang et al. (2019)) performs clustering over nodes and formulates random subgraph structures. Afterwards, it restricts neighborhoods to the subgraph structure and forces each node to learn only from neighborhoods inside its cluster. The clustering method is not guaranteed to provide correct community detection, and if the graph is clustered incorrectly, the learned subgraph can be misleading and fail to grasp the topology of the entire graph.
N-GCN (Abu-El-Haija et al. (2020)) trains GCN over multiple n-hop adjacency matrices. It concatenates the feature representation from each branch and uses MLP to predict the long features. Although this method focuses on widening the perspective over n-hop, it is restricted to maximum n-hop neighborhoods and requires much more parameters to train multiple branches using 1 to n-hop adjacency matrices.
DeepGCN (Li et al. (2019)) introduces residual learning (skip connection to avoid gradient vanishing) and k-nearest neighbors clustering to find the local community such that the effect of oversmoothing is minimized. The approach is, however, limited to smaller local communities and is not able to grasp the entire graph structure. In addition, deep structure introduces more parameters.
JKNet (Xu et al. (2018)) combines all intermediate representations as [H(1), ...,H(K)] to learn the new representations over different hop neighborhoods. The authors prove that k-layer GCN is essentially performing random walks, and stacking them relieves the over-smoothing by having multiple random walks. However, this approach is limited to k-hop neighborhoods at most; it cannot keep track of wider neighborhood behaviors.
GraphSAGE (Hamilton et al. (2017)) performs long short-term memory (LSTM) aggregation over local neighborhoods, and updates each node feature vector based on aggregated feature vectors from LSTM. The method is still limited to the 1-hop neighborhood, and thus, is unable to view the global structure of the graph.
DropEdge (Rong et al. (2019)) implements random dropout over the adjacency matrix such that
$$H^{(l+1)} = \sigma(\mathrm{Dropout}(\tilde{A}) H^{(l)} W^{(l)}). \qquad (10)$$
This method becomes problematic when dropout removes critical edges where the most connection occurs. Consider, for example, a bipartite graph structure where two communities are connected by one singular edge. If that edge is dropped, the connected community is separated and is transformed into a graph with a completely different structure.
RevGNN-Deep (Li et al. (2021)) introduces a deep GCN structure by performing residual learning and normalization over feature vectors. This method requires much more parameters and a long time for training. It is restricted to the 1-hop adjacency matrix, and cannot capture the long-path dependency between two nodes.
The aforementioned papers focus on either going deep while reducing the over-smoothing effect or stacking feature vectors to increase the receptive field of the convolution. Yet, none of them focuses on ensembling multiple adjacency matrices to form a global-level adjacency matrix.
4 EXPERIMENTS
We validate GlobalGCN in semi-supervised document classification in citation networks and conduct multiple experiments over these datasets, as explained below.
4.1 DATASET
We use three citation networks for the semi-supervised learning experiments of GlobalGCN. The statistics of each dataset are summarized in Table 1. The citation network datasets (Cora, Citeseer and Pubmed) (Sen et al. (2008)) contain sparse feature vectors for each document and a list of citation links between documents. We consider all citation links as undirected edges and each document as a node, creating an undirected and unweighted graph G = (V,E) with A as the adjacency matrix of the graph. Each document has a class label, and we use public splits of 20 labels per class and all feature vectors for training.
4.2 EXPERIMENTAL SETUP
We perform Bayesian optimization (Nogueira (2014–)) over GlobalGCN to maximize the validation accuracy by optimizing hyperparameters, e.g. the learning rate, the number of layers, the dimension of hidden layers, and the L2-regularization weight.
We iterate the Bayesian optimization 1000 times, and for each iteration we train GlobalGCN for a maximum of 400 epochs using Adam. Under the same setting, we train GlobalGCN for a maximum of 200 epochs on Pubmed, as the network is much denser and slower to compute. We use StepLR with a step size of 500 and a gamma of 0.3 for learning rate decay. We keep the best model from each iteration and stop training once the validation loss becomes (≥ 10%) larger than the best validation loss. We fix the dropout rate for both weight dropout and GAM dropout at 0.6 and initialize all weights with Xavier initialization (Glorot & Bengio (2010)).
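A hedged sketch of such a search with the bayesian-optimization package (Nogueira, 2014–); the objective function, bounds, and handling of integer-valued hyperparameters are illustrative assumptions rather than the exact setup.

```python
from bayes_opt import BayesianOptimization

def validation_accuracy(log_lr, num_layers, hidden_dim, l2_weight):
    """Train GlobalGCN with the given hyperparameters and return validation accuracy.

    Placeholder objective: in practice this would build the model, train it with Adam
    (with StepLR decay and early stopping) and return the best validation accuracy.
    """
    lr = 10 ** log_lr
    num_layers, hidden_dim = int(round(num_layers)), int(round(hidden_dim))
    return 0.0  # replace with the actual training-and-evaluation routine

optimizer = BayesianOptimization(
    f=validation_accuracy,
    pbounds={"log_lr": (-4, -1), "num_layers": (2, 4),
             "hidden_dim": (16, 256), "l2_weight": (1e-6, 1e-2)},
    random_state=0,
)
optimizer.maximize(init_points=10, n_iter=990)  # the paper iterates the search 1000 times
```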
We use negative log-likelihood loss with L2 regularization multiplied by the lambda weight.
4.3 BASELINES
We use GCN (Kipf & Welling (2017)), GAT (Veličković et al. (2018)), APPNP (Klicpera et al. (2018)), JKNet (Xu et al. (2018)), N-GCN (Abu-El-Haija et al. (2020)), HGCN (Hu et al. (2019)), and GraphAir (Hu et al. (2020)) as our baseline models. The results are reported in Abu-El-Haija et al. (2020) or Chen et al. (2020), or their respective papers. Our experiment setup is different from theirs as they are tailored to maximize their own accuracy performance. We compare our model with their best possible performance to fairly assess the performance differences.
5 RESULTS
In this section, we evaluate the performance of GlobalGCN against the state-of-the-art (SOTA) GNN models on three semi-supervised learning tasks. We also analyze the properties of the GAM of each dataset for better interpretation of the prediction result.
5.1 COMPARISON WITH SOTA
Table 3 includes experiment results of our GlobalGCN model and the other baseline models. We can see that the prediction accuracy of the GlobalGCN model clearly surpasses the others over all three datasets with much fewer layers (2-4 layers in GlobalGCN). We are able to achieve 85.1% over Cora dataset, 73.0% over Citeseer and 80.1% over Pubmed, outperforming all other models with margins while maintaining lower hidden layer dimensions (43 for Cora, 109 for Citeseer, and 92 for Pubmed).
5.2 EVALUATION OF PROPERTIES OF GAM
In order to grasp a better understanding of the property of the GAM, we analyze the GAMs of 3 different datasets, Cora, Pubmed and Citeseer, and characterize their behavior.
5.2.1 DISTRIBUTION OF ENTRIES OF GAM
Figure 1 illustrates the distribution of entry values of the GAM for each dataset. The similarity among the three datasets is that the majority of entries are located near zero, while the counts decrease exponentially as the entry value increases. This behavior corresponds to the common observation that most graph structures follow a power-law distribution (Aiello et al. (2001)), where the number of high-value entries decays exponentially.
Comparing Citeseer with the other datasets, Figure 1b shows that the GAM of Citeseer has a non-negligible number of large entries around 2.7 that are separated from the intermediate entry values, whereas the GAMs of Cora and Pubmed have their large entries concentrated around 1.5.
This can explain why GlobalGCN, like every other model, performs less well on the Citeseer dataset than on the others. A few entries of the GAM have distinctively large values, which heavily affect the training of a GCN on Citeseer. Table 2 shows that the entry values of the Citeseer GAM have the largest maximum, nonzero mean, and standard deviation among the three datasets, confirming that a minority of edges dominates learning on Citeseer.
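As a rough sketch of how such an entry-value histogram can be reproduced, one can exponentiate the adjacency matrix with SciPy and bin the nonzero entries. The `load_adjacency` helper is hypothetical, and whether the raw adjacency of equation (1) or the normalized adjacency of Lemma 2.2 is exponentiated in the actual experiments is an assumption made here.

```python
import numpy as np
from scipy.linalg import expm

A = load_adjacency("cora")                 # hypothetical helper returning a dense 0/1 (N, N) matrix
A_hat = A + np.eye(A.shape[0])
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_tilde = A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)   # symmetric normalization (our assumption)

gam = expm(A_tilde)                        # global attention matrix
vals = gam[gam > 1e-8]                     # drop (near-)zero entries
counts, edges = np.histogram(vals, bins=50)
for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"[{lo:5.2f}, {hi:5.2f}): {c}")
```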
5.2.2 DISTRIBUTION OF EIGENVALUES OF GAM
We perform a singular value decomposition (SVD) of the GAM of each dataset. Figure 2 shows the eigenvalues of each dataset's GAM in descending order. While Figures 2a and 2b both show multiple eigenvectors associated with the maximum eigenvalue, the GAM of Citeseer in Figure 2b exhibits a clearly salient plateau of maximum eigenvalues.
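Since the GAM is symmetric, its eigenvalues are real and can be inspected directly; a small sketch using the `gam` array from the previous snippet (the 0.1% tolerance used to measure the plateau is an arbitrary choice):

```python
import numpy as np

eigvals = np.sort(np.linalg.eigvalsh(gam))[::-1]     # real eigenvalues, descending (as in Figure 2)
top = eigvals[0]
plateau = int(np.sum(np.isclose(eigvals, top, rtol=1e-3)))
print("largest eigenvalue:", top)
print("eigenvalues within 0.1% of the maximum (plateau size):", plateau)
```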
This is another possible explanation for why GlobalGCN and all other models perform relatively poorly in Citeseer. The number of eigenvectors for the maximum eigenvalue of the GAM can be related to the number of potential underlying clusters in the graph (Chung (1997)). The GAM of Citeseer has many eigenvectors for the maximum eigenvalue, implying that there is a very rich underlying structure behind the graph. This makes learning more complicated, since GCN or GlobalGCN may focus on learning that substructure instead of capturing the desired global characteristics of the graph.
Figure 3 shows a histogram of the eigenvalues of each dataset's GAM. The GAM of Pubmed in Figure 3c has very dense eigenvalue concentrations overall, since the graph is fully connected with many more nodes than the others. Figure 3b shows that for the GAM of Citeseer, eigenvectors with large eigenvalues dominate the spectrum; this concentrates the training of GlobalGCN (or any GNN) on the clusters associated with those eigenvectors and restricts the learning space.
Table 2 supports this explanation: the GAM of Citeseer has the fewest nonzero entries and a median equal to 0, indicating that most potential connections are absent. This implies that many nodes sit inside their own clusters without any contact with nodes in other clusters. It complicates the graph structure, since learning is carried out not over the entire graph but only within each small cluster.
6 DISCUSSION
6.1 CORE INSIGHTS BEHIND GAM
The GAM can be viewed as a weighted sum of fast-decaying filters applied to the input feature matrix. It captures the similarity between two nodes in the graph without introducing any extra parameters and is able to capture the global characteristics of any graph. In this light, GlobalGCN essentially performs supervised and unsupervised learning simultaneously: it clusters the graph based on the adjacency matrix via the GAM and performs supervised learning based on the relationships learned from that clustering.
6.2 LIMITATIONS AND FUTURE WORK
We describe several limitations of GlobalGCN and potential future directions for improvements.
6.2.1 LARGE-SCALE TRAINING
In our implementation, the GAM is a dense matrix whose size grows quadratically with the number of nodes. In contrast, adjacency matrices of large-scale data are usually sparse. This sparsity must therefore be exploited to overcome the memory bottleneck of computing the GAM.
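One possible direction, sketched below under the assumption that a low-order truncation of the series is acceptable, is to never materialize the dense GAM and instead apply a truncated Taylor series of exp(A) directly to the feature matrix using only sparse-times-dense products:

```python
import numpy as np
import scipy.sparse as sp

def gam_times_features(A, X, order=6):
    """Approximate exp(A) @ X with a truncated Taylor series.

    A: sparse (N, N) adjacency (or normalized adjacency), X: dense (N, F) features.
    Only sparse-matrix-times-dense-matrix products are used, so the dense
    (N, N) GAM is never formed. `order` controls the truncation (an assumption).
    """
    A = sp.csr_matrix(A)
    out = X.copy()               # k = 0 term: I @ X
    term = X.copy()
    fact = 1.0
    for k in range(1, order + 1):
        term = A @ term           # now equals A^k @ X
        fact *= k
        out = out + term / fact
    return out
```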
6.2.2 SPECTRAL PRECONDITIONING
According to the experimental results, the eigenvalues of the adjacency matrix play a key role in controlling how quickly its powers converge. We can precondition the eigenvalues by using the relationship (cA)x = (cλ)x for an eigenpair (λ, x) of A: scaling the adjacency matrix rescales its eigenvalues and thus enables a wider coverage of neighborhoods. Refer to the Appendix for a detailed explanation.
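A minimal sketch of this preconditioning idea, with the scaling constant c chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.linalg import expm

def preconditioned_gam(A_norm, c=2.0):
    """Return exp(c * A_norm).

    The eigenvalues lambda of A_norm become c * lambda, so a larger c keeps
    higher-order (longer-path) terms of the series relevant for longer and
    effectively widens the receptive field.
    """
    return expm(c * np.asarray(A_norm, dtype=float))
```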
7 CONCLUSION
We have proposed a novel GCN architecture, GlobalGCN, which uses the global attention matrix (GAM) obtained from the matrix exponential to learn the global topology and structure of the graph. It learns over the graph at the maximum possible receptive field while taking the similarity between nodes into account. GlobalGCN shows significant improvement in prediction accuracy on semi-supervised learning tasks and is easy to implement. It outperforms SOTA by notable margins while maintaining shallower network architectures (as few as 4 layers) with fewer parameters than most existing GCN architectures.
A APPENDIX
A.1 DISTRIBUTION OF BOUNDS OVER MATRIX NORM
In Lemma 2.2, we provide an upper bound on the Frobenius norm of each term in the infinite sum that defines the GAM (see equation (1)). Here, we examine the behavior of the upper bound $C^k/k!$ for a positive constant $C$ and a nonnegative integer $k$. Figure 4a shows the shape of the bound for different values of the constant $C$, giving an overview of the impact of $C$ on the asymptotic behavior of the bound. As $C$ increases, the bandpass region becomes larger, indicating that more neighborhoods are viewed, consistent with our interpretation of Lemma 2.2. Figure 4b shows the behavior of the bound for large $C$ on a log scale, indicating that as the constant grows, the number of covered neighborhoods increases exponentially. As a result, we could precondition the eigenvalues of the normalized adjacency matrix to flexibly adjust the receptive field. | 1. What is the focus and contribution of the paper on GlobalGCN?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and comparisons with other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the paper's methodology or conclusions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes a new GCN architecture called GlobalGCN, which extends the k-hop adjacency matrices to one generalized exponential matrix in order to learn the global topology of the graph. It claims that the new model can make GCNs perform better without going deep and avoids over-smoothing.
Strengths And Weaknesses
The paper is organized well and clearly written, and the description of the global attention matrix is clear. However, the biggest issue is that it lacks comparison with many similar works.
Utilizing the exponential matrix is not new in the graph learning community. From PPNP/APPNP, SGC, and GDC to GCNII and many other variants, there are quite a lot of similar models with different formulations. Among them, the model most similar to this paper is GDC [1], which can be seen as a generalized form of the GAM model in this paper. The paper ignores most of these works in the related work (it only compares with PPNP). Considering this, the novelty of the paper is trivial and important references are not covered.
In addition, using only the three small citation datasets is no longer sufficient nowadays. To demonstrate the effectiveness of the model, larger datasets (e.g., the OGB datasets) are generally necessary.
I like the analysis of the entry values in different datasets. However, to make the paper stronger, it would be better to provide possible solutions (e.g., making the large values have a smaller influence in the graph convolution).
[1] Klicpera et al. Diffusion improves graph learning. NeurIPS 2019.
Clarity, Quality, Novelty And Reproducibility
Clarity is good. Novelty is limited. |
ICLR | Title
Global View For GCN: Why Go Deep When You Can Be Shallow?
Abstract
Existing graph convolutional network (GCN) methods attempt to expand the receptive field of their convolutions by either stacking up more convolutional layers or accumulating multi-hop adjacency matrices. Either approach increases computational complexity while providing a limited view of the network topology. We propose to extend k-hop adjacency matrices into one generalized exponential matrix to provide GCNs with a global overview of the network topology. This technique allows GCNs to learn global topology without going deep and with far fewer parameters than most state-of-the-art GCNs, challenging the common assumption that deep GCNs are empirically better for learning global features. We show a significant improvement in semi-supervised learning performance when this technique is applied to common GCNs while maintaining much shallower network architectures (≤ 4 layers) than existing ones.
1 INTRODUCTION
Graph neural networks (GNNs) were introduced by Gori et al. (2005) and Scarselli et al. (2009) to generalize existing neural network approaches to data with graph representations. They are widely used in fields such as drug discovery (Jiang et al. (2020)), protein prediction (Jumper et al. (2021)), e-commerce, social networks, and molecular chemistry (Wu et al. (2021)), where data are naturally expressed in the form of graphs. While traditional neural networks can only make predictions from the data inputs themselves, GNNs benefit from versatile graph data structures to provide more structural and robust predictions.
Graph convolutional networks (GCNs) (Bruna et al. (2014), Kipf & Welling (2017)) extend convolutional neural networks to GNNs by enabling local-level convolution over each graph node. In particular, the main approach consists of two steps: aggregation and update. First, each node aggregates the feature vectors of the neighboring nodes, including that of the node itself, to accumulate local structural information. Second, each aggregated node feature vector is updated by fully connected layers to improve the node feature representation.
GCN uses the adjacency matrix for learning over local neighborhoods, in particular 1-hop neighborhoods. Since long-path dependencies are ignored at the local level, GCN is limited to learning local structures and misses the global characteristics of the entire graph. As a result, deeper GCNs (Li et al. (2019)) are often sought: one can expand the receptive field of a GCN by stacking graph convolutional layers. However, this causes over-smoothing (Li et al. (2018)), where every neighborhood ends up with similar and indistinguishable feature vectors, resulting in a sharp drop in prediction accuracy and graph representation quality (Zhao & Akoglu (2019)). Hence, this creates a dilemma: while a deeper GCN can achieve a wider receptive field, it can also negatively affect test performance.
A series of works from Li et al. (2021); Chen et al. (2020); Rong et al. (2019); Hasanzadeh et al. (2020) introduces various techniques, including initial residual learning, normalization, and dropout to mitigate the impact of over-smoothing while employing deep GCNs. Yet, the issue of deep GCNs is not completely resolved.
In this work, we propose GlobalGCN to fundamentally overcome this dilemma and significantly reduce the computational cost. GlobalGCN generates a topological representation of the entire graph structure via one global attention matrix. It uses the matrix exponential to summarize the global dependence between nodes, thereby providing each node with global information about its neighborhood. As a result, we can avoid over-smoothing the feature vectors by restricting our GCNs to shallow networks (as few as 4 layers).
In summary, we make four contributions. First, we introduce the concept of the global attention matrix (GAM) to enable convolution with the largest possible receptive field. Second, we provide mathematical intuition for the GAM and its impact on GNNs. Third, we show that the GAM yields a better interpretation of how a graph is structured and how well it can be learned. Lastly, we empirically validate our theoretical analysis and show that global topological information helps GNNs attain higher accuracy with fewer parameters and shallower networks in semi-supervised learning settings.
2 GLOBAL-STRUCTURE-AWARE CONVOLUTION
In this section, we provide both practical and theoretical motivation for our proposed model. Even though the adjacency matrix is able to detect the structure of a graph, it is bounded to its local view and cannot directly incorporate the global characteristic of the graph. Consequently, we define the global attention matrix (GAM) to describe the network topology.

Definition 2.1. Consider an undirected graph G = (V, E), where V is a set of vertices, or nodes, and E is a set of edges between vertices. An adjacency matrix A is given by $A_{ij} = 1$ if there exists an edge $e \in E$ connecting the i-th node $V_i \in V$ and the j-th node $V_j \in V$, and 0 otherwise. We define

$$\exp(A) = \sum_{i \ge 0} \frac{A^i}{i!} \qquad (1)$$

as the global attention matrix (GAM) that describes the global topology of the network.
The intuition behind this definition is as follows. For a given graph G, its adjacency matrix A, and a positive integer k, $(A^k)_{ij}$ counts the number of k-hop paths from node $V_i$ to node $V_j$ in G. A large value of $(A^k)_{ij}$ means more k-hop similarity between node $V_i$ and node $V_j$. This similarity value can thereby be regarded as an importance weight between node $V_i$ and node $V_j$ (at the k-hop level). This intuition is summarized in the following lemma.

Lemma 2.1. If there exists an n-hop path between node $V_i$ and node $V_j$ in an undirected graph G = (V, E), then $\exp(A)_{ij} \ne 0$.
The factorial division term in equation (1) performs two tasks. First, it factorially decays the importance weight as the hop count of the path increases; the similarity value of nodes that are closer in shortest-path distance is therefore naturally favored. Second, because of the factorial division, the GAM is mathematically stable in the sense that its terms converge to 0 rapidly enough that the infinite sum is guaranteed to exist.
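In practice, the infinite sum in equation (1) is simply the matrix exponential, which standard numerical libraries compute directly. A small sketch on a toy path graph (the graph itself is only an illustration):

```python
import numpy as np
from scipy.linalg import expm

# Toy graph: a 4-node path 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

gam = expm(A)               # exp(A) = sum_{i >= 0} A^i / i!, the GAM of equation (1)
print(np.round(gam, 3))
# Every pair of nodes joined by some path receives a nonzero entry (Lemma 2.1),
# and entries shrink as the shortest path between the two nodes grows.
```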
2.1 CONVERGENCE OF GAM
We use matrix theory to understand some properties of the GAM, especially its decay behavior. We want very long-distance relationships to be reduced as fast as possible, because the dependency between two distant nodes should be minimal even if they are connected by some underlying path.
Lemma 2.2. For a normalized adjacency matrix $\tilde{A} = \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2}$, where $\hat{A} = A + I$ and $\hat{D}$ is the degree matrix of $\hat{A}$,

$$\lim_{k \to \infty} \frac{\lVert \tilde{A}^k \rVert_F}{k!} = 0 \qquad (2)$$

with convergence rate $O(C^k / k!)$, where $C$ is a positive constant and $\lVert \cdot \rVert_F$ is the Frobenius norm.
Proof. Based on the definition of the normalized adjacency matrix, $\tilde{A}_{ij} = \hat{A}_{ij} / (\sqrt{d_i}\sqrt{d_j})$, where $d_i$ and $d_j$ are given by the corresponding diagonal entries of $\hat{D}$. Since $\tilde{A}$ is symmetric, the decomposition of $\tilde{A}$ is given by

$$\tilde{A} = Q \Lambda Q^T \qquad (3)$$

where $Q$ is an orthonormal matrix and $\Lambda$ is a diagonal matrix with the eigenvalues of $\tilde{A}$ on its diagonal entries. Then, for a nonnegative integer $k$,

$$\tilde{A}^k = Q \Lambda^k Q^T. \qquad (4)$$

Therefore, the Frobenius norm of $\tilde{A}^k / k!$ is bounded above by

$$\frac{\lVert \tilde{A}^k \rVert_F}{k!} = \frac{\lVert Q \Lambda^k Q^T \rVert_F}{k!} = \frac{\sqrt{\mathrm{Tr}(\Lambda^{2k})}}{k!} \le \frac{\sqrt{N} \cdot \sigma_{\max}(|\Lambda|^k)}{k!} \le \frac{C_1(\sqrt{N}) \cdot C_2^k}{k!} \qquad (5)$$

for positive constants $C_1, C_2$, where $\sigma_{\max}(\cdot)$ and $\mathrm{Tr}(\cdot)$ indicate the maximum eigenvalue and the trace of a given matrix, respectively, $N$ denotes the number of nodes of the graph associated with the adjacency matrix $A$, and $|\Lambda| = \mathrm{diag}(|\lambda_1|, \ldots, |\lambda_N|)$.

Remark. $C_1$ depends on $\sqrt{N}$. As the number of nodes of a graph increases, more additive power terms (i.e., $\tilde{A}^k / k!$) contribute to the GAM due to the weighting effect of $C_1$. Therefore, longer path dependencies are captured.
Both lemmas suggest that the GAM learns two important features. Because the norm of the k-th power of the adjacency matrix, divided by the factorial term, decays rapidly, the GAM captures not only whether two nodes are connected (regardless of the length of the path) but also how similar they are, with connections through long paths heavily penalized by the decay terms.
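A quick numerical check of this decay on the toy path graph from the earlier sketch (the number of terms shown is arbitrary):

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(4)                                # A + I
d = A_hat.sum(axis=1)
A_tilde = A_hat / np.sqrt(np.outer(d, d))            # D_hat^{-1/2} A_hat D_hat^{-1/2}

term = np.eye(4)
fact = 1.0
for k in range(1, 9):
    term = A_tilde @ term                            # A_tilde^k
    fact *= k
    print(k, np.linalg.norm(term, "fro") / fact)     # ||A_tilde^k||_F / k!  ->  0 factorially
```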
2.2 GLOBALGCN
Based on our previous theoretical analysis, we propose a novel GCN architecture using the GAM called GlobalGCN, where we replace the adjacency matrix in GCN with the GAM.
$$H^{(l+1)} = \sigma\big(\mathrm{Dropout}(\exp(A)) \cdot H^{(l)} \cdot \mathrm{Dropout}(W^{(l)})\big) \qquad (6)$$

where $\sigma$ is an activation function, $W^{(\cdot)}$ a weight matrix, and $H^{(\cdot)}$ a feature matrix. For the experiments, we set $\sigma$ as the ReLU activation function (Agarap (2018)).
Dropout is applied to both the GAM and the weight matrices, for different reasons. Dropout on the GAM is necessary for two reasons. First, the entry values of the GAM are dominated by a small proportion of large values, while the vast majority are centered around zero. Without dropout over the GAM, GlobalGCN would have a strong tendency to learn only from the dominant edge connections; dropout makes it possible to learn from less important edges as well, so the underlying graph structure can be taken into account. Second, with dropout, GlobalGCN is trained over a different subgraph at each iteration. The final prediction can be interpreted as an ensemble of subgraph predictions, making the neural network more robust.
In weight matrices, dropout is used for regularization; it prevents neural networks from overly relying on certain neurons.
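A minimal PyTorch sketch of a single GlobalGCN layer following Eq. (6) is shown below. This is an illustrative implementation, not the authors' code; in particular, "weight dropout" is realized here by applying functional dropout directly to the weight matrix, and the GAM is assumed to be precomputed once (e.g., with `torch.matrix_exp`).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalGCNLayer(nn.Module):
    """H^{(l+1)} = sigma(Dropout(exp(A)) @ H^{(l)} @ Dropout(W^{(l)})), Eq. (6)."""

    def __init__(self, in_dim, out_dim, p=0.6):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)          # Xavier initialization, as in Section 4.2
        self.p = p

    def forward(self, h, gam):
        gam_d = F.dropout(gam, self.p, training=self.training)          # dropout on the GAM
        w_d = F.dropout(self.weight, self.p, training=self.training)    # dropout on the weights
        return F.relu(gam_d @ h @ w_d)

# Usage sketch: precompute gam = torch.matrix_exp(adj) once, then stack 2-4 layers;
# the final layer would typically be followed by log_softmax instead of ReLU.
```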
3 RELATED WORK
GCN (Kipf & Welling (2017)) performs convolution over graphs by first aggregating node features of neighborhoods and then updating the node feature itself with the weight matrix.
$$H^{(l+1)} = \sigma\big(\tilde{A} H^{(l)} W^{(l)}\big) \qquad (7)$$

where $\tilde{A} = \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2}$. This approach suffers from over-smoothing as the number of layers becomes larger and thereby cannot incorporate the global topology of a graph.
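For reference, this symmetric normalization can be computed as follows (a small sketch for a dense adjacency matrix; sparse variants would use `scipy.sparse` instead):

```python
import numpy as np

def normalize_adjacency(A):
    """Return D_hat^{-1/2} (A + I) D_hat^{-1/2} for a dense 0/1 adjacency matrix A."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)
```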
GAT (Veličković et al. (2018)) updates each feature vector of nodes with fully connected layers and finds the self-attention matrix over each individual node. Afterward, it makes the weighted sum of the updated feature vectors over the neighborhood of each node. This method does not fully rely on the given structural information. Attention introduces extra parameters and creates computation/learning overheads. The spatial and computational complexity increases quadratically
with respect to the number of nodes. Furthermore, there is no guarantee that long-path dependency can be fully captured by the attention matrix.
PPNP (Klicpera et al. (2018)) uses personalized PageRank to generalize the graph structure and uses a multilayer perceptron (MLP) over feature vectors via
$$H = \alpha \big(I_n - (1-\alpha)\tilde{A}\big)^{-1} f_\theta(X) \qquad (8)$$

where $X$ is the input feature matrix and $f_\theta(\cdot)$ is an MLP. The personalized PageRank term can be interpreted as

$$\big(I_n - (1-\alpha)\tilde{A}\big)^{-1} = \sum_{i \ge 0} \big((1-\alpha)\tilde{A}\big)^i \qquad (9)$$

which is a geometric sum of normalized adjacency matrices. This approach attempts to incorporate graph structural information; however, its weights may not decay fast enough to penalize connections between nodes that are far from each other.
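For comparison with the GAM, the PPNP propagation of Eq. (8) can be sketched as a single linear solve (the teleport probability alpha and the MLP output are placeholders):

```python
import numpy as np

def ppnp_propagate(A_tilde, H_mlp, alpha=0.1):
    """Return alpha * (I - (1 - alpha) * A_tilde)^{-1} @ H_mlp, i.e. Eq. (8)."""
    n = A_tilde.shape[0]
    return alpha * np.linalg.solve(np.eye(n) - (1 - alpha) * A_tilde, H_mlp)
```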
Cluster-GCN (Chiang et al. (2019)) performs clustering over the nodes and forms random subgraph structures. Afterwards, it restricts neighborhoods to the subgraph structure and forces each node to learn only from neighborhoods inside its cluster. The clustering method is not guaranteed to provide correct community detection, and if the graph is clustered incorrectly, the learned subgraphs can be misleading and cannot grasp the topology of the entire graph.
N-GCN (Abu-El-Haija et al. (2020)) trains GCNs over multiple n-hop adjacency matrices. It concatenates the feature representations from each branch and uses an MLP to predict from the concatenated features. Although this method widens the perspective to n hops, it is restricted to at most n-hop neighborhoods and requires many more parameters to train the multiple branches using the 1- to n-hop adjacency matrices.
DeepGCN (Li et al. (2019)) introduces residual learning (skip connection to avoid gradient vanishing) and k-nearest neighbors clustering to find the local community such that the effect of oversmoothing is minimized. The approach is, however, limited to smaller local communities and is not able to grasp the entire graph structure. In addition, deep structure introduces more parameters.
JKNet (Xu et al. (2018)) combines all intermediate representations $[H^{(1)}, \ldots, H^{(K)}]$ to learn new representations over different hop neighborhoods. The authors prove that a k-layer GCN essentially performs random walks, and stacking the representations relieves over-smoothing by combining multiple random walks. However, this approach is limited to k-hop neighborhoods at most; it cannot keep track of wider neighborhood behaviors.
GraphSAGE (Hamilton et al. (2017)) performs long short-term memory (LSTM) aggregation over local neighborhoods, and updates each node feature vector based on aggregated feature vectors from LSTM. The method is still limited to the 1-hop neighborhood, and thus, is unable to view the global structure of the graph.
DropEdge (Rong et al. (2019)) implements random dropout over the adjacency matrix, such that

$$H^{(l+1)} = \sigma\big(\mathrm{Dropout}(\tilde{A}) H^{(l)} W^{(l)}\big). \qquad (10)$$
This method becomes problematic when dropout removes critical edges where the most connection occurs. Consider, for example, a bipartite graph structure where two communities are connected by one singular edge. If that edge is dropped, the connected community is separated and is transformed into a graph with a completely different structure.
RevGNN-Deep (Li et al. (2021)) introduces a deep GCN structure by performing residual learning and normalization over feature vectors. This method requires many more parameters and a long training time. It is restricted to the 1-hop adjacency matrix and cannot capture long-path dependencies between nodes.
The aforementioned papers focus on either going deep while reducing the over-smoothing effect or stacking feature vectors to increase the receptive field of the convolution. Yet, none of them focuses on ensembling multiple adjacency matrices to form a global-level adjacency matrix.
4 EXPERIMENTS
We validate GlobalGCN in semi-supervised document classification in citation networks and conduct multiple experiments over these datasets, as explained below.
| 1. What is the focus and contribution of the paper on graph convolutional networks?
2. What are the strengths of the proposed approach, particularly in its novelty and performance?
3. What are the weaknesses of the paper regarding experimental design and efficiency?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns regarding the computational cost and scalability of the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes to replace the adjacency matrix used in the feature propagation step of GCN with a newly proposed "global attention matrix". The global attention matrix is defined as the matrix exponential of the adjacency matrix. Additionally, the submission proposes to apply dropout to both the global attention matrix and the weight matrix in each layer of their graph convolutional network.
The submission carried out an empirical evaluation on the Cora, Citeseer, and Pubmed datasets and compares favorably to previous methods. However, I have several concerns about the experimental design and efficiency of the proposed method, which are detailed below. For now, I am recommending a "weak rejection". If the authors can address my concerns in the author response period, I am happy to raise the rating; otherwise, I would recommend rejection.
Strengths And Weaknesses
Strength
The proposed method is novel
The proposed method performs well, better than previous methods, on cora, citeseer, and pubmed
Concerns re experimental setup
Have the authors performed the same Bayesian optimization for the baseline methods? Currently the reported numbers are obtained with Bayesian optimization iterated 1000 times, which takes a lot of computation; the authors should apply the same procedure to the baseline methods. 1.1. The authors should also report the hyperparameters found by the Bayesian optimization.
The experiments are only done on the three small graph benchmarks. I expect to see results on other/larger datasets (e.g. Reddit, Amazon) because I am unconvinced that this method can scale well.
Table 3 needs to report error bars to show the variance. The proposed method performs very similarly to HGCN and GraphAir.
Concerns re efficiency
How do the authors compute the matrix exponential? What is the theoretical efficiency of computing the matrix exponential for both run time and memory in terms of big O? How much actual wall clock time does the method spend on this step for the datasets that the authors evaluated on? The authors should also report this on Reddit/Amazon for the additional experiments that I requested.
In second to last paragraph in the introduction, the submission claims that their method can significantly reduce the computational cost. This claim is not backed up by any empirical result.
Concerns re ablation
For the equation defining the GAM, can the authors compute only up to the i-th term and compare the performance? As the authors admit, the factorial terms decay very quickly, and I suspect that for i > 3 or 4 the terms won't actually change the model's downstream performance. In other words, the proposed GlobalGCN can probably be well approximated by a 3- or 4-hop model.
How important is the dropout? This technique is generally applicable to the baselines such as GCN, N-GCN, etc. I wonder if the key improvement comes from using the dropout.
To isolate the effect of the proposed GAM, can the authors test a 1-layer GCN with the GAM (basically replacing the adjacency matrix in Simple GCN with the proposed GAM)? Because training Simple GCN is convex, this experiment can get rid of the confounding effect of the dropout and other training details.
Clarity, Quality, Novelty And Reproducibility
The writing is clear and I trust the authors that the proposed method is reproducible |
ICLR | Title
Global View For GCN: Why Go Deep When You Can Be Shallow?
Abstract
Existing graph convolutional network (GCN) methods attempt to expand the receptive field of its convolution by either stacking up more convolutional layers or accumulating multi-hop adjacency matrices. Either approach increases computation complexity while providing a limited view of the network topology. We propose to extend k-hop adjacency matrices into one generalized exponential matrix to provide GCNs with a global overview of the network topology. This technique allows the GCNs to learn global topology without going deep and with much fewer parameters than most state-of-the-art GCNs, challenging the common assumption that deep GCNs are empirically better for learning global features. We show a significant improvement in performance in semi-supervised learning when this technique is used for common GCNs while maintaining much shallower network architectures (≤ 4 layers) than the existing ones.
1 INTRODUCTION
Graph neural network (GNN) was introduced by Gori et al. (2005) and Scarselli et al. (2009) to generalize the existing neural network approaches to process data with graph representations. It is widely used in fields such as drug discovery (Jiang et al. (2020)), protein prediction (Jumper et al. (2021)), e-commerce, social networks, and molecular chemistry (Wu et al. (2021)) where data are naturally expressed in forms of graphs. While traditional neural networks are only able to perform predictions based on data inputs, GNNs benefit from using versatile graph data structures to provide a more structural and robust prediction.
Graph convolutional networks (GCNs) (Bruna et al. (2014), Kipf & Welling (2017)) extend convolutional neural networks to GNNs by enabling local-level convolution over each graph node. In particular, the main approach consists of two steps: aggregation and update. First, each node aggregates the feature vectors of the neighboring nodes, including that of the node itself, to accumulate local structural information. Second, each aggregated node feature vector is updated by fully connected layers to improve the node feature representation.
GCN uses the adjacency matrix for learning over local neighborhoods, in particular 1-hop neighborhoods. Since long-path dependency is ignored at local levels, GCN is limited to learning only the local structures while missing the global characteristics of the entire graph. As a result, a deeper GCN (Li et al. (2019)) is often sought after; one can expand the receptive field of GCN with the concatenation of each graph convolutional layer. However, this causes over-smoothing (Li et al. (2018)) where each neighborhood has a similar and indistinguishable feature vector, resulting in a sharp drop in prediction accuracy and graph representation skills (Zhao & Akoglu (2019)). Hence, this creates a dilemma: while a deeper GCN can achieve a wider receptive field, it can also negatively affect test performance.
A series of works from Li et al. (2021); Chen et al. (2020); Rong et al. (2019); Hasanzadeh et al. (2020) introduces various techniques, including initial residual learning, normalization, and dropout to mitigate the impact of over-smoothing while employing deep GCNs. Yet, the issue of deep GCNs is not completely resolved.
In this work, we propose GlobalGCN to fundamentally overcome the dilemma and significantly reduce the computational cost. GlobalGCN generates a topological representation of the entire graph structure via one global attention matrix. It uses matrix exponential to summarize the global depen-
dence between each node, thereby providing each node with global information about its neighborhood nodes. As a result, we can avoid over-smoothing feature vectors by restricting our GCNs over shallow networks (as low as 4 layers).
In summary, we make four contributions. First, we introduce the concept of global attention matrix (GAM) to enable convolution with the largest possible receptive field. Second, we provide mathematical intuitions behind the GAM with respect to its impacts on GNNs. Third, we are able to use the GAM to have a better interpretation of how a graph is structured and how well a graph can be learned. Lastly, we empirically validate our theoretical analysis and show that global topological information helps GNNs to gain higher accuracy with fewer parameters and shallower networks in semi-supervised learning settings.
2 GLOBAL-STRUCTURE-AWARE CONVOLUTION
In this section, we provide both practical and theoretical motivation for our proposed model. Even though the adjacency matrix is able to detect the structure of a graph, it is bounded to its local view and cannot directly incorporate the global characteristic of the graph. Consequently, we define the global attention matrix (GAM) to describe the network topology. Definition 2.1. Consider an undirected graph G = (V,E) where V is a set of vertices, or nodes, and E is a set of edges between vertices. An adjacency matrix A is given by Aij = 1 if there exists an edge e ∈ E connecting the ith node Vi ∈ V and the j th node Vj ∈ V , and 0 otherwise. We define
exp(A) = Σi≥0 Ai
i! (1)
as the global attention matrix (GAM) that describes the global topology of the network.
The intuition behind this definition is as follows. For a given graph G, its adjacency matrix A, and a positive integer k, (Ak)ij describes the number of k-hop paths from node Vi to node Vj in G. A large value of (Ak)ij means more k-hop similarities between node Vi and node Vj . This similarity value can be thereby regarded as an importance weight between node Vi and node Vj (at k-hop level). This intuition is summarized in the following lemma. Lemma 2.1. If there exists a n-hop path between node Vi and node Vj in an undirected graph G = (V,E), exp(A)ij ̸= 0.
The factorial division term in equation (1) performs two tasks. First, it factorially decays the importance weight as the number of hop of paths increases. Therefore, the similarity value for closer nodes in terms of shortest-path distance is naturally favored. Second, due to the factorial division term, the GAM is mathematically stable in the sense that its term converges to 0 rapidly enough so that the total infinite sum is guaranteed to exist.
2.1 CONVERGENCE OF GAM
We use the matrix theory to understand some properties of the GAM, especially its decaying characteristic. We aim to have a very long-distance relationship be reduced as fast as possible because graph dependencies between two majorly distant nodes should be minimum, even if they are connected by certain underlying paths.
Lemma 2.2. For a normalized adjacency matrix à = D̂−1/2ÂD̂−1/2 where  = A + I and D̂ is the degree matrix of Â,
lim k→∞ ∥Ãk∥F k! = 0 (2)
with convergence rate O(C k
k! ) where C is a positive constant and ∥ · ∥F the Frobenius norm.
Proof. Based on the definition of the normalized adjacency matrix, Ãij = Âij√ di √ dj , where di and
dj are given by corresponding diagonal entries of D̂. Since à is symmetric, the decomposition of à is given by
à = QΛQT (3)
where Q is an orthonormal matrix and Λ is a diagonal matrix with the eigenvalues of à on the diagonal entries. Then for a nonnegative integer k,
Ãk = QΛkQT . (4)
Therefore, the Frobenius norm of Ãk/k! is bounded above by
∥Ãk∥F k! = ∥QΛkQT ∥F k! =
√ Tr(Λ2k)
k! ≤
√ N · σmax(|Λ| k)
k! ≤ C1( √ N) · Ck2 k!
(5)
for positive constants C1, C2, where σmax(·), Tr(·) indicate the maximum eigenvalue and the trace of a given matrix respectively. N denotes the number of nodes of the graph associated with the adjacency matrix A and |Λ| = diag(|λ1|, ..., |λN |). Remark. C1 depends on √ N . As the number of nodes of a graph increases, more additive power terms (i.e. Ãk/k!) contribute to the GAM due to the weighting effect of C1. Therefore, longer path dependencies are captured.
Both lemmas suggest that the GAM is able to learn two important features. As the norm of kth power of the adjacency matrix divided by factorial terms is rapidly decaying, the GAM captures not only the connectedness of two nodes regardless of the length of the path but also the similarity between two nodes where the connection by long-distance path is heavily penalized by decay terms.
2.2 GLOBALGCN
Based on our previous theoretical analysis, we propose a novel GCN architecture using the GAM called GlobalGCN, where we replace the adjacency matrix in GCN with the GAM.
H(l+1) = σ(Dropout(exp(A)) ·H(l) · Dropout(W (l))) (6)
where σ is an activation function, W (·) a weight matrix and H(·) a feature matrix. For the experiments, we set σ as ReLU activation function (Agarap (2018)).
Dropout is used over both the GAM and weight matrices for particular purposes. Dropout in the GAM is necessary for two reasons. First, the entry values of the GAM are dominated by a small proportion of large values while the vast majority center around zero. This leads GlobalGCN to have a strong tendency towards learning by dominant edge connections unless dropout is performed over the GAM. Dropout makes it possible to learn less important edges, and thus, the underlying graph structures can be considered. Second, by using dropout, GlobalGCN is trained over different subgraphs at each iteration. The final prediction can be interpreted as an ensemble of subgraph predictions, making the neural network more robust.
In weight matrices, dropout is used for regularization; it prevents neural networks from overly relying on certain neurons.
3 RELATED WORK
GCN (Kipf & Welling (2017)) performs convolution over graphs by first aggregating node features of neighborhoods and then updating the node feature itself with the weight matrix.
H(l+1) = σ(ÃH(l)W (l)) (7)
where à = D̂−1/2ÂD̂−1/2. This approach suffers from over-smoothing as the number of layers becomes larger and thereby cannot incorporate the global topology of a graph.
GAT (Veličković et al. (2018)) updates each feature vector of nodes with fully connected layers and finds the self-attention matrix over each individual node. Afterward, it makes the weighted sum of the updated feature vectors over the neighborhood of each node. This method does not fully rely on the given structural information. Attention introduces extra parameters and creates computation/learning overheads. The spatial and computational complexity increases quadratically
with respect to the number of nodes. Furthermore, there is no guarantee that long-path dependency can be fully captured by the attention matrix.
PPNP (Klicpera et al. (2018)) uses personalized PageRank to generalize the graph structure and uses a multilayer perceptron (MLP) over feature vectors via
H = α(In − (1− α)Ã)−1fθ(X) (8) where X is the input feature matrix and fθ(·) is an MLP. The personalized PageRank term can be interpreted as (In − (1− α)Ã)−1 = Σi≥0((1− α)Ã)i (9) which is a geometric sum of normalized weight matrices. This approach attempts to incorporate graph structural information; however, its weight may not decay fast enough to penalize connections of nodes far from each other.
Clutser-GCN (Chiang et al. (2019)) performs clustering over nodes and formulates random subgraph structures. Afterwards, it restricts neighborhoods over the subgraph structure and forces each node to only learn from neighborhoods inside each cluster. The clustering method is not guaranteed to provide a correct community detection, and if it is incorrectly clustered, the learned sub-graph can be misleading and cannot grasp the topology of the entire graph.
N-GCN (Abu-El-Haija et al. (2020)) trains GCN over multiple n-hop adjacency matrices. It concatenates the feature representation from each branch and uses MLP to predict the long features. Although this method focuses on widening the perspective over n-hop, it is restricted to maximum n-hop neighborhoods and requires much more parameters to train multiple branches using 1 to n-hop adjacency matrices.
DeepGCN (Li et al. (2019)) introduces residual learning (skip connection to avoid gradient vanishing) and k-nearest neighbors clustering to find the local community such that the effect of oversmoothing is minimized. The approach is, however, limited to smaller local communities and is not able to grasp the entire graph structure. In addition, deep structure introduces more parameters.
JKNet (Xu et al. (2018)) combines all intermediate representations as [H(1), ...,H(K)] to learn the new representations over different hop neighborhoods. The authors prove that k-layer GCN is essentially performing random walks, and stacking them relieves the over-smoothing by having multiple random walks. However, this approach is limited to k-hop neighborhoods at most; it cannot keep track of wider neighborhood behaviors.
GraphSAGE (Hamilton et al. (2017)) performs long short-term memory (LSTM) aggregation over local neighborhoods, and updates each node feature vector based on aggregated feature vectors from LSTM. The method is still limited to the 1-hop neighborhood, and thus, is unable to view the global structure of the graph.
DropEdge (Rong et al. (2019)) implements random dropout over adjacency matrix such that H(l+1) = σ(Dropout(Ã)H(l)W (l)). (10)
This method becomes problematic when dropout removes critical edges where the most connection occurs. Consider, for example, a bipartite graph structure where two communities are connected by one singular edge. If that edge is dropped, the connected community is separated and is transformed into a graph with a completely different structure.
RevGNN-Deep (Li et al. (2021)) introduces a deep GCN structure by performing residual learning and normalization over feature vectors. This method requires much more parameters and a long time for training. It is restricted to the 1-hop adjacency matrix, and cannot capture the long-path dependency between two nodes.
The aforementioned papers focus on either going deep while reducing the over-smoothing effect or stacking feature vectors to increase the receptive field of the convolution. Yet, none of them focuses on ensembling multiple adjacency matrices to form a global-level adjacency matrix.
4 EXPERIMENTS
We validate GlobalGCN in semi-supervised document classification in citation networks and conduct multiple experiments over these datasets, as explained below.
4.1 DATASET
We use three citation networks for the semi-supervised learning experiments of GlobalGCN. The statistics of each dataset are summarized in Table 1. The citation network datasets (Cora, Citeseer and Pubmed) (Sen et al. (2008)) contain sparse feature vectors for each document and a list of citation links between documents. We consider all citation links as undirected edges and each document as a node, creating an undirected and unweighted graph G = (V,E) with A as the adjacency matrix of the graph. Each document has a class label, and we use public splits of 20 labels per class and all feature vectors for training.
4.2 EXPERIMENTAL SETUP
We perform Bayesian optimization (Nogueira (2014–)) over GlobalGCN to maximize the validation accuracy by optimizing hyperparameters, e.g. the learning rate, the number of layers, the dimension of hidden layers, and the L2-regularization weight.
We iterate the Bayesian optimization 1000 times, and for each iteration, we train GlobalGCN maximum 400 epochs using Adam. Under the same setting, we train GlobalGCN maximum 200 epochs for Pubmed, as the network is much denser and slower for computation. We use StepLR with a step size of 500 and a gamma of 0.3 for learning rate decay. We keep the best model for each iteration and stop training once the best validation loss is (≥ 10%) larger than the validation loss. We fix the dropout rate for both weight dropout and GAM dropout as 0.6 and initialize all weights with Xavier initialization (Glorot & Bengio (2010)).
We use negative log-likelihood loss with L2 regularization multiplied by the lambda weight.
4.3 BASELINES
We use GCN (Kipf & Welling (2017)), GAT (Veličković et al. (2018)), APPNP (Klicpera et al. (2018)), JKNet (Xu et al. (2018)), N-GCN (Abu-El-Haija et al. (2020)), HGCN (Hu et al. (2019)), and GraphAir (Hu et al. (2020)) as our baseline models. The results are reported in Abu-El-Haija et al. (2020) or Chen et al. (2020), or their respective papers. Our experiment setup is different from theirs as they are tailored to maximize their own accuracy performance. We compare our model with their best possible performance to fairly assess the performance differences.
5 RESULTS
In this section, we evaluate the performance of GlobalGCN against the state-of-the-art (SOTA) GNN models on three semi-supervised learning tasks. We also analyze the properties of the GAM of each dataset for better interpretation of the prediction result.
5.1 COMPARISON WITH SOTA
Table 3 includes experiment results of our GlobalGCN model and the other baseline models. We can see that the prediction accuracy of the GlobalGCN model clearly surpasses the others over all three datasets with much fewer layers (2-4 layers in GlobalGCN). We are able to achieve 85.1% over Cora dataset, 73.0% over Citeseer and 80.1% over Pubmed, outperforming all other models with margins while maintaining lower hidden layer dimensions (43 for Cora, 109 for Citeseer, and 92 for Pubmed).
5.2 EVALUATION OF PROPERTIES OF GAM
In order to grasp a better understanding of the property of the GAM, we analyze the GAMs of 3 different datasets, Cora, Pubmed and Citeseer, and characterize their behavior.
5.2.1 DISTRIBUTION OF ENTRIES OF GAM
Figure 1 illustrates the distribution of entry values of the GAM of each dataset. The similarity among the three datasets is that a majority of entries are located near zeros while the counts decrease exponentially as the entry value increases. This behavior corresponds to the common perception that most graph structures follow the power distribution (Aiello et al. (2001)) where the number of highvalue entries exponentially decays.
Comparing Citeseer with other datasets, Figure 1b shows that the GAM of Citeseer has a nonnegligible amount of big entries around 2.7 separate from the intermediate entry values, while the GAM of Cora or Pubmed has large entry values concentrated at around 1.5.
This can explain why GlobalGCN, or any other model, does not perform well in the Citeseer dataset in comparison with other datasets. There are a few entries of the GAM with distinctively large values which heavily affect the training of GCN on Citeseer. Table 2 shows that the entry values of the GAM of Citeseer has the largest maximum, nonzero mean, and standard deviation among the three datasets, thus confirming that Citeseer has minority edges dominating the learning.
5.2.2 DISTRIBUTION OF EIGENVALUES OF GAM
We perform the singular value decomposition (SVD) over the GAM of each dataset. Figure 2 shows the eigenvalues of the GAM generated by each dataset in descending order. While both Figure 2a and 2b show that there are multiple eigenvectors for the maximum eigenvalue, the GAM of Citeseer in 2b presents a clear salient plateau region of maximum eigenvalues.
This is another possible explanation for why GlobalGCN and all other models perform relatively poorly in Citeseer. The number of eigenvectors for the maximum eigenvalue of the GAM can be related to the number of potential underlying clusters in the graph (Chung (1997)). The GAM of Citeseer has many eigenvectors for the maximum eigenvalue, implying that there is a very rich underlying structure behind the graph. This makes learning more complicated, since GCN or GlobalGCN may focus on learning that substructure instead of capturing the desired global characteristics of the graph.
Figure 3 is a histogram of eigenvalues of the GAM of each dataset. The GAM of Pubmed in Figure 3c has very dense eigenvalue concentrations overall since the graph is fully-connected with much more nodes than the others. Figure 3b shows that for the GAM of Citeseer, eigenvectors with large eigenvalues dominate the distribution of eigenvalues, and this makes the training of GlobalGCN or GNN mainly over clusters associated with those eigenvectors and restricts the field of learning space.
Table 2 supports the explanation since the GAM of Citeseer has the least number of nonzero entries and has a median equal to 0, indicating that most connections are not there. This implies that we have many nodes inside their own clusters without any contact with nodes belonging to other clusters. This complicates the graph structure as learning is not over the entire graph but only over each small cluster.
6 DISCUSSION
6.1 CORE INSIGHTS BEHIND GAM
The GAM can be considered as a weighted sum of fast-decaying filters for the input feature matrix. It captures the similarity level between two nodes in the graph without introducing any extra parameters and is able to capture the global characteristics of any graph. Based on this explanation, we can conclude that GlobalGCN essentially performs both supervised learning and unsupervised learning simultaneously. It clusters the graph based on the adjacency matrix via the GAM and performs supervised learning based on the relationship learned from clustering.
6.2 LIMITATIONS AND FUTURE WORK
We describe several limitations of GlobalGCN and potential future directions for improvements.
6.2.1 LARGE-SCALE TRAINING
The GAM is a dense matrix for our implementation. The size of the matrix increases quadratically with respect to the number of nodes. In contrast, adjacency matrices over large-scale data are usually sparse. Therefore, it is required to utilize the sparsity in order to overcome the memory bottleneck of the GAM computation.
6.2.2 SPECTRAL PRECONDITIONING
According to the experimental results, the eigenvalues of the adjacency matrix play a key role in controlling how quickly the powers of the adjacency matrix decay. We can precondition the eigenvalues by using the relationship (cA)x = cλx, i.e., rescaling the adjacency matrix rescales its eigenvalues, so as to enable a wider coverage of neighborhoods. Refer to the Appendix for detailed explanations.
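As a rough illustration, the sketch below rescales the normalized adjacency matrix (assumed here to be a dense NumPy array) by a constant c before taking the matrix exponential; the helper counting how many series terms remain non-negligible is our own illustrative addition and not part of the paper.

```python
import numpy as np
from scipy.linalg import expm

def preconditioned_gam(A_tilde, c=2.0):
    """GAM of the rescaled adjacency, exp(c * A_tilde).
    Scaling A_tilde by c rescales every eigenvalue lambda to c * lambda,
    so a larger c keeps higher powers (longer paths) non-negligible."""
    return expm(c * A_tilde)

def covered_hops(A_tilde, c=2.0, tol=1e-6, max_k=100):
    """Number of series terms before (c * A_tilde)^k / k! drops below tol in Frobenius norm."""
    term, k = np.eye(A_tilde.shape[0]), 0
    while np.linalg.norm(term) > tol and k < max_k:
        k += 1
        term = term @ (c * A_tilde) / k
    return k
```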
7 CONCLUSION
We have proposed a novel GCN architecture called GlobalGCN which uses the global attention matrix (GAM) from matrix exponential to learn the global topology/structure of the graph. It is able to learn over the graph at the maximum receptive field while taking the similarity between each node into consideration. GlobalGCN shows significant improvement in prediction accuracy over semisupervised learning tasks and is easy to implement. It outperforms SOTA with notable margins while maintaining shallower network architectures (as few as 4 layers) with fewer parameters than most existing GCN architectures.
A APPENDIX
A.1 DISTRIBUTION OF BOUNDS OVER MATRIX NORM
In Lemma 2.2, we provide the upper bound of the Frobenius norm of each term appearing in the infinite sum which defines the GAM (see equation (1)). Here, we test the behavior of the upper bound $\frac{C^k}{k!}$ for a positive constant C and a nonnegative integer k. Figure 4a shows the shape of the bound for different values of the constant C. This gives an overview of the impact of C on the asymptotic behavior of the bound. As C increases, the bandpass regions become larger, indicating that more neighborhoods are viewed, corresponding to our interpretation of Lemma 2.2. Figure 4b gives a further overview of the behavior of the bound for large C on a log scale. This graph indicates that as the constant term increases, the number of covered neighborhoods grows exponentially. As a result, we could perform matrix preconditioning over the eigenvalues of the normalized adjacency matrix to flexibly adjust the receptive field. | 1. What is the focus of the paper, particularly in terms of its contributions to graph neural networks (GNNs)?
2. What are the strengths and weaknesses of the proposed approach, especially regarding its ability to perform infinite steps of graph propagation in one layer?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the experimental evaluation of the proposed model, such as using up-to-date datasets or correcting the dropout design? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work tries to design a GNN with infinite steps of graph propagation in one layer via computing a global attention matrix. The proposed model is evaluated on three classic node-classification tasks.
Strengths And Weaknesses
Weakness:
Actually the largest eigenvalue of $\tilde{A}$ is upper-bounded by 1. So $C_2$ in Eq(5) can be ignored, as well as the discussion in Appendix A.
The chosen datasets are a little outdated. It would be better to include some larger datasets, such as ogb-products and ogb-arxiv.
The dropout design of $W^{(l)}$ in Eq(6) shall be corrected.
Clarity, Quality, Novelty And Reproducibility
Its clarity, quality and novelty are poor. |
ICLR | Title
Global View For GCN: Why Go Deep When You Can Be Shallow?
Abstract
Existing graph convolutional network (GCN) methods attempt to expand the receptive field of its convolution by either stacking up more convolutional layers or accumulating multi-hop adjacency matrices. Either approach increases computation complexity while providing a limited view of the network topology. We propose to extend k-hop adjacency matrices into one generalized exponential matrix to provide GCNs with a global overview of the network topology. This technique allows the GCNs to learn global topology without going deep and with much fewer parameters than most state-of-the-art GCNs, challenging the common assumption that deep GCNs are empirically better for learning global features. We show a significant improvement in performance in semi-supervised learning when this technique is used for common GCNs while maintaining much shallower network architectures (≤ 4 layers) than the existing ones.
1 INTRODUCTION
Graph neural network (GNN) was introduced by Gori et al. (2005) and Scarselli et al. (2009) to generalize the existing neural network approaches to process data with graph representations. It is widely used in fields such as drug discovery (Jiang et al. (2020)), protein prediction (Jumper et al. (2021)), e-commerce, social networks, and molecular chemistry (Wu et al. (2021)) where data are naturally expressed in forms of graphs. While traditional neural networks are only able to perform predictions based on data inputs, GNNs benefit from using versatile graph data structures to provide a more structural and robust prediction.
Graph convolutional networks (GCNs) (Bruna et al. (2014), Kipf & Welling (2017)) extend convolutional neural networks to GNNs by enabling local-level convolution over each graph node. In particular, the main approach consists of two steps: aggregation and update. First, each node aggregates the feature vectors of the neighboring nodes, including that of the node itself, to accumulate local structural information. Second, each aggregated node feature vector is updated by fully connected layers to improve the node feature representation.
GCN uses the adjacency matrix for learning over local neighborhoods, in particular 1-hop neighborhoods. Since long-path dependencies are ignored at the local level, GCN is limited to learning only the local structures while missing the global characteristics of the entire graph. As a result, a deeper GCN (Li et al. (2019)) is often sought after; one can expand the receptive field of GCN by stacking graph convolutional layers. However, this causes over-smoothing (Li et al. (2018)), where each neighborhood has a similar and indistinguishable feature vector, resulting in a sharp drop in prediction accuracy and graph representation ability (Zhao & Akoglu (2019)). Hence, this creates a dilemma: while a deeper GCN can achieve a wider receptive field, it can also negatively affect test performance.
A series of works from Li et al. (2021); Chen et al. (2020); Rong et al. (2019); Hasanzadeh et al. (2020) introduces various techniques, including initial residual learning, normalization, and dropout to mitigate the impact of over-smoothing while employing deep GCNs. Yet, the issue of deep GCNs is not completely resolved.
In this work, we propose GlobalGCN to fundamentally overcome this dilemma and significantly reduce the computational cost. GlobalGCN generates a topological representation of the entire graph structure via one global attention matrix. It uses the matrix exponential to summarize the global dependence between each pair of nodes, thereby providing each node with global information about its neighborhood. As a result, we can avoid over-smoothing the feature vectors by restricting our GCNs to shallow networks (as few as 4 layers).
In summary, we make four contributions. First, we introduce the concept of global attention matrix (GAM) to enable convolution with the largest possible receptive field. Second, we provide mathematical intuitions behind the GAM with respect to its impacts on GNNs. Third, we are able to use the GAM to have a better interpretation of how a graph is structured and how well a graph can be learned. Lastly, we empirically validate our theoretical analysis and show that global topological information helps GNNs to gain higher accuracy with fewer parameters and shallower networks in semi-supervised learning settings.
2 GLOBAL-STRUCTURE-AWARE CONVOLUTION
In this section, we provide both practical and theoretical motivation for our proposed model. Even though the adjacency matrix is able to detect the structure of a graph, it is limited to a local view and cannot directly incorporate the global characteristics of the graph. Consequently, we define the global attention matrix (GAM) to describe the network topology. Definition 2.1. Consider an undirected graph G = (V,E) where V is a set of vertices, or nodes, and E is a set of edges between vertices. An adjacency matrix A is given by $A_{ij} = 1$ if there exists an edge e ∈ E connecting the ith node $V_i \in V$ and the jth node $V_j \in V$, and 0 otherwise. We define
$$\exp(A) = \sum_{i \geq 0} \frac{A^i}{i!} \quad (1)$$
as the global attention matrix (GAM) that describes the global topology of the network.
The intuition behind this definition is as follows. For a given graph G, its adjacency matrix A, and a positive integer k, $(A^k)_{ij}$ describes the number of k-hop paths from node $V_i$ to node $V_j$ in G. A large value of $(A^k)_{ij}$ means more k-hop similarity between node $V_i$ and node $V_j$. This similarity value can thereby be regarded as an importance weight between node $V_i$ and node $V_j$ (at the k-hop level). This intuition is summarized in the following lemma. Lemma 2.1. If there exists an n-hop path between node $V_i$ and node $V_j$ in an undirected graph G = (V,E), then $\exp(A)_{ij} \neq 0$.
The factorial division term in equation (1) performs two tasks. First, it factorially decays the importance weight as the hop count of the path increases. Therefore, the similarity value for nodes that are closer in terms of shortest-path distance is naturally favored. Second, due to the factorial division term, the GAM is mathematically stable in the sense that its terms converge to 0 rapidly enough that the total infinite sum is guaranteed to exist.
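A minimal numerical sketch of equation (1) is given below, approximating the matrix exponential by truncating the series; the small example graph and the truncation length are illustrative choices, and in practice a library routine such as scipy.linalg.expm can be used directly.

```python
import numpy as np
from scipy.linalg import expm

def gam_truncated(A, n_terms=30):
    """Approximate exp(A) = sum_{k>=0} A^k / k! with a truncated series.
    (A^k)_{ij} counts the k-hop paths between nodes i and j, and dividing
    by k! factorially decays the weight of long paths."""
    gam = np.eye(A.shape[0])         # k = 0 term
    term = np.eye(A.shape[0])
    for k in range(1, n_terms + 1):
        term = term @ A / k          # builds A^k / k! from the previous term
        gam = gam + term
    return gam

# Sanity check on a 3-node path graph: connected pairs get nonzero entries (Lemma 2.1).
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
assert np.allclose(gam_truncated(A), expm(A), atol=1e-8)
```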
2.1 CONVERGENCE OF GAM
We use matrix theory to understand some properties of the GAM, especially its decaying characteristic. We aim for very long-distance relationships to be reduced as fast as possible, because the graph dependency between two distant nodes should be minimal, even if they are connected by some underlying path.
Lemma 2.2. For a normalized adjacency matrix $\tilde{A} = \hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}$, where $\hat{A} = A + I$ and $\hat{D}$ is the degree matrix of $\hat{A}$,
$$\lim_{k \to \infty} \frac{\|\tilde{A}^k\|_F}{k!} = 0 \quad (2)$$
with convergence rate $O\!\left(\frac{C^k}{k!}\right)$, where $C$ is a positive constant and $\|\cdot\|_F$ the Frobenius norm.
Proof. Based on the definition of the normalized adjacency matrix, $\tilde{A}_{ij} = \frac{\hat{A}_{ij}}{\sqrt{d_i}\sqrt{d_j}}$, where $d_i$ and $d_j$ are given by the corresponding diagonal entries of $\hat{D}$. Since $\tilde{A}$ is symmetric, the decomposition of $\tilde{A}$ is given by
$$\tilde{A} = Q \Lambda Q^T \quad (3)$$
where $Q$ is an orthonormal matrix and $\Lambda$ is a diagonal matrix with the eigenvalues of $\tilde{A}$ on the diagonal entries. Then for a nonnegative integer $k$,
$$\tilde{A}^k = Q \Lambda^k Q^T. \quad (4)$$
Therefore, the Frobenius norm of $\tilde{A}^k/k!$ is bounded above by
$$\frac{\|\tilde{A}^k\|_F}{k!} = \frac{\|Q \Lambda^k Q^T\|_F}{k!} = \frac{\sqrt{\mathrm{Tr}(\Lambda^{2k})}}{k!} \leq \frac{\sqrt{N} \cdot \sigma_{\max}(|\Lambda|^k)}{k!} \leq \frac{C_1(\sqrt{N}) \cdot C_2^k}{k!} \quad (5)$$
for positive constants $C_1, C_2$, where $\sigma_{\max}(\cdot)$ and $\mathrm{Tr}(\cdot)$ indicate the maximum eigenvalue and the trace of a given matrix, respectively. $N$ denotes the number of nodes of the graph associated with the adjacency matrix $A$, and $|\Lambda| = \mathrm{diag}(|\lambda_1|, \ldots, |\lambda_N|)$. Remark. $C_1$ depends on $\sqrt{N}$. As the number of nodes of a graph increases, more additive power terms (i.e., $\tilde{A}^k/k!$) contribute to the GAM due to the weighting effect of $C_1$. Therefore, longer path dependencies are captured.
Both lemmas suggest that the GAM is able to capture two important features. Since the norm of the kth power of the adjacency matrix divided by the factorial term decays rapidly, the GAM captures not only the connectedness of two nodes, regardless of the path length, but also the similarity between two nodes, where connections via long paths are heavily penalized by the decay terms.
2.2 GLOBALGCN
Based on our previous theoretical analysis, we propose a novel GCN architecture using the GAM called GlobalGCN, where we replace the adjacency matrix in GCN with the GAM.
$$H^{(l+1)} = \sigma\left(\mathrm{Dropout}(\exp(A)) \cdot H^{(l)} \cdot \mathrm{Dropout}(W^{(l)})\right) \quad (6)$$
where σ is an activation function, W (·) a weight matrix and H(·) a feature matrix. For the experiments, we set σ as ReLU activation function (Agarap (2018)).
Dropout is used over both the GAM and weight matrices for particular purposes. Dropout in the GAM is necessary for two reasons. First, the entry values of the GAM are dominated by a small proportion of large values while the vast majority center around zero. This leads GlobalGCN to have a strong tendency towards learning by dominant edge connections unless dropout is performed over the GAM. Dropout makes it possible to learn less important edges, and thus, the underlying graph structures can be considered. Second, by using dropout, GlobalGCN is trained over different subgraphs at each iteration. The final prediction can be interpreted as an ensemble of subgraph predictions, making the neural network more robust.
In weight matrices, dropout is used for regularization; it prevents neural networks from overly relying on certain neurons.
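The layer in equation (6) can be sketched in PyTorch as follows. The class name, the precomputed dense GAM, and the use of standard dropout modules are our own illustrative choices and do not reproduce any released implementation.

```python
import torch
import torch.nn as nn

class GlobalGCNLayer(nn.Module):
    """One layer of equation (6): H^(l+1) = sigma(Dropout(exp(A)) H^(l) Dropout(W^(l)))."""

    def __init__(self, in_dim, out_dim, dropout=0.6):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)       # Xavier initialization, as in Section 4.2
        self.gam_dropout = nn.Dropout(dropout)     # dropout over the GAM entries
        self.weight_dropout = nn.Dropout(dropout)  # dropout over the weight matrix
        self.act = nn.ReLU()

    def forward(self, gam, h):
        # gam: precomputed dense GAM of shape (N, N); h: node features of shape (N, in_dim)
        return self.act(self.gam_dropout(gam) @ h @ self.weight_dropout(self.weight))
```

The GAM itself can be precomputed once, for instance with torch.matrix_exp applied to the normalized adjacency matrix, and shared across all layers.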
3 RELATED WORK
GCN (Kipf & Welling (2017)) performs convolution over graphs by first aggregating node features of neighborhoods and then updating the node feature itself with the weight matrix.
$$H^{(l+1)} = \sigma(\tilde{A} H^{(l)} W^{(l)}) \quad (7)$$
where $\tilde{A} = \hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}$. This approach suffers from over-smoothing as the number of layers becomes larger and thereby cannot incorporate the global topology of a graph.
GAT (Veličković et al. (2018)) updates each feature vector of nodes with fully connected layers and finds the self-attention matrix over each individual node. Afterward, it makes the weighted sum of the updated feature vectors over the neighborhood of each node. This method does not fully rely on the given structural information. Attention introduces extra parameters and creates computation/learning overheads. The spatial and computational complexity increases quadratically
with respect to the number of nodes. Furthermore, there is no guarantee that long-path dependency can be fully captured by the attention matrix.
PPNP (Klicpera et al. (2018)) uses personalized PageRank to generalize the graph structure and uses a multilayer perceptron (MLP) over feature vectors via
$$H = \alpha (I_n - (1-\alpha)\tilde{A})^{-1} f_\theta(X) \quad (8)$$
where $X$ is the input feature matrix and $f_\theta(\cdot)$ is an MLP. The personalized PageRank term can be interpreted as
$$(I_n - (1-\alpha)\tilde{A})^{-1} = \sum_{i \geq 0} ((1-\alpha)\tilde{A})^i \quad (9)$$
which is a geometric sum of normalized weight matrices. This approach attempts to incorporate graph structural information; however, its weight may not decay fast enough to penalize connections of nodes far from each other.
Cluster-GCN (Chiang et al. (2019)) performs clustering over nodes and formulates random subgraph structures. Afterwards, it restricts neighborhoods to the subgraph structure and forces each node to learn only from neighborhoods inside its cluster. The clustering method is not guaranteed to provide a correct community detection, and if the graph is incorrectly clustered, the learned sub-graph can be misleading and cannot grasp the topology of the entire graph.
N-GCN (Abu-El-Haija et al. (2020)) trains GCN over multiple n-hop adjacency matrices. It concatenates the feature representation from each branch and uses MLP to predict the long features. Although this method focuses on widening the perspective over n-hop, it is restricted to maximum n-hop neighborhoods and requires much more parameters to train multiple branches using 1 to n-hop adjacency matrices.
DeepGCN (Li et al. (2019)) introduces residual learning (skip connection to avoid gradient vanishing) and k-nearest neighbors clustering to find the local community such that the effect of oversmoothing is minimized. The approach is, however, limited to smaller local communities and is not able to grasp the entire graph structure. In addition, deep structure introduces more parameters.
JKNet (Xu et al. (2018)) combines all intermediate representations as [H(1), ...,H(K)] to learn the new representations over different hop neighborhoods. The authors prove that k-layer GCN is essentially performing random walks, and stacking them relieves the over-smoothing by having multiple random walks. However, this approach is limited to k-hop neighborhoods at most; it cannot keep track of wider neighborhood behaviors.
GraphSAGE (Hamilton et al. (2017)) performs long short-term memory (LSTM) aggregation over local neighborhoods, and updates each node feature vector based on aggregated feature vectors from LSTM. The method is still limited to the 1-hop neighborhood, and thus, is unable to view the global structure of the graph.
DropEdge (Rong et al. (2019)) implements random dropout over the adjacency matrix such that
$$H^{(l+1)} = \sigma(\mathrm{Dropout}(\tilde{A}) H^{(l)} W^{(l)}). \quad (10)$$
This method becomes problematic when dropout removes critical edges through which most of the connectivity flows. Consider, for example, a bipartite graph structure where two communities are connected by a single edge. If that edge is dropped, the connected communities become separated and the graph is transformed into one with a completely different structure.
RevGNN-Deep (Li et al. (2021)) introduces a deep GCN structure by performing residual learning and normalization over feature vectors. This method requires many more parameters and a long training time. It is restricted to the 1-hop adjacency matrix, and cannot capture the long-path dependency between two nodes.
The aforementioned papers focus on either going deep while reducing the over-smoothing effect or stacking feature vectors to increase the receptive field of the convolution. Yet, none of them focuses on ensembling multiple adjacency matrices to form a global-level adjacency matrix.
4 EXPERIMENTS
We validate GlobalGCN in semi-supervised document classification in citation networks and conduct multiple experiments over these datasets, as explained below.
4.1 DATASET
We use three citation networks for the semi-supervised learning experiments of GlobalGCN. The statistics of each dataset are summarized in Table 1. The citation network datasets (Cora, Citeseer and Pubmed) (Sen et al. (2008)) contain sparse feature vectors for each document and a list of citation links between documents. We consider all citation links as undirected edges and each document as a node, creating an undirected and unweighted graph G = (V,E) with A as the adjacency matrix of the graph. Each document has a class label, and we use public splits of 20 labels per class and all feature vectors for training.
4.2 EXPERIMENTAL SETUP
We perform Bayesian optimization (Nogueira (2014–)) over GlobalGCN to maximize the validation accuracy by optimizing hyperparameters, e.g. the learning rate, the number of layers, the dimension of hidden layers, and the L2-regularization weight.
We iterate the Bayesian optimization 1000 times, and for each iteration, we train GlobalGCN for at most 400 epochs using Adam. Under the same setting, we train GlobalGCN for at most 200 epochs on Pubmed, as the network is much denser and slower to compute. We use StepLR with a step size of 500 and a gamma of 0.3 for learning rate decay. We keep the best model for each iteration and stop training once the validation loss is at least 10% larger than the best validation loss. We fix the dropout rate for both weight dropout and GAM dropout at 0.6 and initialize all weights with Xavier initialization (Glorot & Bengio (2010)).
We use negative log-likelihood loss with L2 regularization multiplied by the lambda weight.
4.3 BASELINES
We use GCN (Kipf & Welling (2017)), GAT (Veličković et al. (2018)), APPNP (Klicpera et al. (2018)), JKNet (Xu et al. (2018)), N-GCN (Abu-El-Haija et al. (2020)), HGCN (Hu et al. (2019)), and GraphAir (Hu et al. (2020)) as our baseline models. The results are reported in Abu-El-Haija et al. (2020) or Chen et al. (2020), or their respective papers. Our experiment setup is different from theirs as they are tailored to maximize their own accuracy performance. We compare our model with their best possible performance to fairly assess the performance differences.
5 RESULTS
In this section, we evaluate the performance of GlobalGCN against the state-of-the-art (SOTA) GNN models on three semi-supervised learning tasks. We also analyze the properties of the GAM of each dataset for better interpretation of the prediction result.
5.1 COMPARISON WITH SOTA
Table 3 includes experiment results of our GlobalGCN model and the other baseline models. We can see that the prediction accuracy of the GlobalGCN model clearly surpasses the others over all three datasets with much fewer layers (2-4 layers in GlobalGCN). We are able to achieve 85.1% over Cora dataset, 73.0% over Citeseer and 80.1% over Pubmed, outperforming all other models with margins while maintaining lower hidden layer dimensions (43 for Cora, 109 for Citeseer, and 92 for Pubmed).
5.2 EVALUATION OF PROPERTIES OF GAM
In order to grasp a better understanding of the property of the GAM, we analyze the GAMs of 3 different datasets, Cora, Pubmed and Citeseer, and characterize their behavior.
5.2.1 DISTRIBUTION OF ENTRIES OF GAM
Figure 1 illustrates the distribution of entry values of the GAM of each dataset. The similarity among the three datasets is that a majority of entries are located near zero while the counts decrease exponentially as the entry value increases. This behavior corresponds to the common perception that most graph structures follow a power-law distribution (Aiello et al. (2001)), where the number of high-value entries decays exponentially.
Comparing Citeseer with the other datasets, Figure 1b shows that the GAM of Citeseer has a non-negligible number of large entries around 2.7, well separated from the intermediate entry values, whereas the large entries of the GAMs of Cora and Pubmed are concentrated around 1.5.
This can explain why GlobalGCN, or any other model, does not perform as well on the Citeseer dataset as on the other datasets. A few entries of the GAM have distinctively large values which heavily affect the training of GCN on Citeseer. Table 2 shows that the entry values of the GAM of Citeseer have the largest maximum, nonzero mean, and standard deviation among the three datasets, confirming that a minority of edges dominates the learning on Citeseer.
5.2.2 DISTRIBUTION OF EIGENVALUES OF GAM
We perform a singular value decomposition (SVD) of the GAM of each dataset. Figure 2 shows the eigenvalues of each GAM in descending order. While both Figures 2a and 2b show multiple eigenvectors associated with the maximum eigenvalue, the GAM of Citeseer in Figure 2b presents a clearly salient plateau region of maximum eigenvalues.
This is another possible explanation for why GlobalGCN and all other models perform relatively poorly in Citeseer. The number of eigenvectors for the maximum eigenvalue of the GAM can be related to the number of potential underlying clusters in the graph (Chung (1997)). The GAM of Citeseer has many eigenvectors for the maximum eigenvalue, implying that there is a very rich underlying structure behind the graph. This makes learning more complicated, since GCN or GlobalGCN may focus on learning that substructure instead of capturing the desired global characteristics of the graph.
Figure 3 is a histogram of the eigenvalues of the GAM of each dataset. The GAM of Pubmed in Figure 3c has a very dense eigenvalue concentration overall since the graph is fully connected with many more nodes than the others. Figure 3b shows that for the GAM of Citeseer, eigenvectors with large eigenvalues dominate the distribution of eigenvalues, which concentrates the training of GlobalGCN (or any GNN) on the clusters associated with those eigenvectors and restricts the learning space.
Table 2 supports this explanation, since the GAM of Citeseer has the fewest nonzero entries and has a median equal to 0, indicating that most connections are absent. This implies that many nodes sit inside their own clusters without any contact with nodes belonging to other clusters. This complicates learning, which is no longer performed over the entire graph but only within each small cluster.
6 DISCUSSION
6.1 CORE INSIGHTS BEHIND GAM
The GAM can be considered as a weighted sum of fast-decaying filters for the input feature matrix. It captures the similarity level between two nodes in the graph without introducing any extra parameters and is able to capture the global characteristics of any graph. Based on this explanation, we can conclude that GlobalGCN essentially performs both supervised learning and unsupervised learning simultaneously. It clusters the graph based on the adjacency matrix via the GAM and performs supervised learning based on the relationship learned from clustering.
6.2 LIMITATIONS AND FUTURE WORK
We describe several limitations of GlobalGCN and potential future directions for improvements.
6.2.1 LARGE-SCALE TRAINING
In our implementation, the GAM is a dense matrix whose size grows quadratically with the number of nodes. In contrast, adjacency matrices of large-scale data are usually sparse. Therefore, this sparsity must be exploited to overcome the memory bottleneck of the GAM computation.
6.2.2 SPECTRAL PRECONDITIONING
According to the experimental results, the eigenvalues of the adjacency matrix play a key role in controlling how quickly the powers of the adjacency matrix decay. We can precondition the eigenvalues by using the relationship (cA)x = cλx, i.e., rescaling the adjacency matrix rescales its eigenvalues, so as to enable a wider coverage of neighborhoods. Refer to the Appendix for detailed explanations.
7 CONCLUSION
We have proposed a novel GCN architecture called GlobalGCN which uses the global attention matrix (GAM) from matrix exponential to learn the global topology/structure of the graph. It is able to learn over the graph at the maximum receptive field while taking the similarity between each node into consideration. GlobalGCN shows significant improvement in prediction accuracy over semisupervised learning tasks and is easy to implement. It outperforms SOTA with notable margins while maintaining shallower network architectures (as few as 4 layers) with fewer parameters than most existing GCN architectures.
A APPENDIX
A.1 DISTRIBUTION OF BOUNDS OVER MATRIX NORM
In Lemma 2.2, we provide the upper bound of the Frobenius norm of each term appearing in the infinite sum which defines the GAM (see equation (1)). Here, we test the behavior of the upper bound $\frac{C^k}{k!}$ for a positive constant C and a nonnegative integer k. Figure 4a shows the shape of the bound for different values of the constant C. This gives an overview of the impact of C on the asymptotic behavior of the bound. As C increases, the bandpass regions become larger, indicating that more neighborhoods are viewed, corresponding to our interpretation of Lemma 2.2. Figure 4b gives a further overview of the behavior of the bound for large C on a log scale. This graph indicates that as the constant term increases, the number of covered neighborhoods grows exponentially. As a result, we could perform matrix preconditioning over the eigenvalues of the normalized adjacency matrix to flexibly adjust the receptive field. | 1. What is the focus of the paper regarding graph convolutional networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of experimental results and theoretical analysis?
3. Do you have any concerns about the claims made in the paper, especially regarding the comparison with deep GCNs?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper leverages the GAM, or mathematically the exponential map of the adjacency matrix, to replace the original matrix A and thus devises a new convolutional expression. The authors argue that this network can achieve promising results while remaining shallow and parameter-efficient. Experiments are conducted on three small datasets.
Strengths And Weaknesses
Strengths:
The method is simple and easy to follow.
The presentation of the paper is clear.
Weaknesses:
The major concern of this manuscript lies in unconvincing experimental results. The method is only evaluated on three small-scale node classification benchmarks, while the method is clearly limited in its ability to scale to large graphs. While the authors acknowledge this point in the discussion section, I believe this remains a clear weakness of the proposed method, largely limiting its application. The results are obtained with extensive Bayesian optimization on the hyper-parameters, and are also not presented with error bars. It is hard to tell whether the performance increment comes from the hyperparameter search or the proposed method. Overall, the experimental evaluation does not meet the bar of ICLR.
Given the limited scope of the experiments, the core claim of the paper, which is "shallow GCNs perform better than deep GCNs, so deep GCNs are probably not needed", appears to be unconvincing and exaggerated. Deep GCNs have been verified to achieve strong performance on large-scale and challenging graph datasets (e.g., the OGB datasets). Even on small datasets like the Planetoid datasets used in this paper, deep GCNs like GCNII [1] already outperform the proposed method, which is also missing in Table 3. In this way, the claims made in the paper might be misleading.
Theoretical analyses are not sufficient. The theory part is not sufficiently developed, only conveying limited information on the intuition of the proposed method, including convergence rate of exp(A). A much more interesting theoretical contribution would be how the proposed model is related to over-smoothing or whether the proposed model, although shallow, has greater expressivity than the deep GCNs (c.f. Theorem 2 in [1]).
[1] Chen et al. Simple and Deep Graph Convolutional Networks. In ICML 2021.
Clarity, Quality, Novelty And Reproducibility
The clarity of the paper is fine. The overall quality is limited especially on the experiment part. The novelty is also limited since there are already methods leveraging the ensembling of the exponentials of the adjacency matrix in GCNs (e.g., MixHop [2]), which are not discussed in the paper. The theoretical analyses are also insufficient. The reproducibility is okay given the authors have provided the detailed hyper-parameters.
[2] Sami Abu-El-Haija et al. MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing. In ICML 2019. |
ICLR | Title
Learning Perceptual Compression of Facial Video
Abstract
We propose in this paper a new paradigm for facial video compression. We leverage the generative capacity of GANs such as StyleGAN to represent and compress each video frame (intra compression), as well as the successive differences between frames (inter compression). Each frame is inverted in the latent space of StyleGAN, where the optimal compression is learned. To do so, a diffeomorphic latent representation is learned using a normalizing flows model, where an entropy model can be optimized for image coding. In addition, we propose a new perceptual loss that is more efficient than other counterparts (LPIPS, VGG16). Finally, an entropy model for video inter coding with residual is also learned in the previously constructed latent space. Our method (SGANC) is simple, faster to train, and achieves better results for image and video coding compared to state-of-the-art codecs such as VTM, AV1, and recent deep learning techniques.
1 INTRODUCTION
With the explosion of videoconferencing, the efficient transmission of facial video is a key industrial problem. Image compression can be formulated as an optimization problem with the objective of finding a codec which reduces the bit-stream size for a given distortion level between the reconstructed image at the receiver side and the original one. The distortion mainly occurs due to the quantization in the codec, because the entropy coding method (Rissanen & Langdon, 1979; Gray, 2011) requires the discrete data to create the bit-stream. The compression quality of the codec depends on the modeling of the data distribution close to the real data distribution since the expected optimal code length is lower bounded by the entropy (Shannon, 1948). Existing compression methods suffer from various artefacts. Especially at very low bits-per-pixel (BPP), blocks and blur degrade the image quality, leading to a poorly photorealistic image.
Motivated by the appealing properties of the StyleGAN architecture for high quality image generation, we propose a new compression method for facial images and videos. The intuition is that the GAN latent representation associated to any face image is somehow disentangled, and a perceptual compression method should be easier to train using the latent, especially for inter-coding. In addition, at extremely low BPP, a compressed latent code should always lead to a photorealistic image, hence leading to a compression technique that is perceptually more pleasant. However, we acknowledge that the method relies on a strong hypothesis that a GAN inversion technique can approximate accurately any given image. While this hypothesis is currently strong, we note that the performances of GAN inversion and generation methods have significantly improved recently. We take the leap of faith that the hypothesis will be valid in the near future.
For real natural images, StyleGAN encoders (GAN inversion) (Richardson et al., 2021; Wei et al., 2021) have been proposed to project any image onto the latent space of StyleGAN. Our objective is to compress the latent representations. However, retraining the encoder and the generator for each target BPP is computationally heavy and costly, especially when using heavy perceptual distortion losses in the image space (e.g., VGG16, LPIPS). In order to overcome the above challenges, we propose to learn a new proxy latent space representation using a diffeomorphic mapping function. This new latent representation is precisely learned to optimize the compression efficiency. The approach is illustrated in Figure 1. The advantages of such a space are two-fold: (a) it makes the approach very efficient, as we can use an off-the-shelf pretrained StyleGAN encoder and generator without the need to retrain them for each quality level; (b) it allows learning a space dedicated for
compression in which an optimal entropy model can be learned. Finally, we extend this approach to videos and propose a method to optimize for the inter-coding with residuals in the new latent representation, as illustrated in Figure 2.
Our contributions can be summarized as follows:
• We propose a new paradigm for facial video compression, leveraging the generative power of StyleGAN for artifact-free and high quality image compression.
• Without any GAN retraining, we propose to learn a proxy latent representation that effectively fits the entropy model, while using off the shelf pretrained StyleGAN encoder/decoder models. We propose to learn a model for the intra image coding and the inter video coding.
• We propose a new perceptual distortion loss that is more efficient to compute and leverages the multiscale and semantic representation in the latent space of StyleGAN.
• We show high quality and lower perceptually-distorted reconstructed images for low BPP compared to traditional and deep learning based methods for image compression. We show better qualitative and quantitative results compared to the most recent state-of-the-art methods H.265 and VTM for video compression.
The rest of the paper is organised as follows: section 2 presents related works on image and video compression. Our proposed method is detailed in section 3 and section 4 presents the experimental results.
2 RELATED WORK
Image Compression: Traditional image codecs such as JPEG (Wallace, 1992), JPEG2000 (Rabbani, 2002) are carefully human-engineered to achieve very effective performance. However, they are not data-dependent and limited to linear transformations. Recently, deep learning based codecs (Ballé et al., 2017; 2016; Agustsson et al., 2019; Mentzer et al., 2020) have gained significant attraction in the community due to their superior performance and ability to learn the data dependent non-linear transformations. These approaches learn nonlinear transformations of the input data and an entropy model jointly with the objective of optimal trade-off between efficient compression and reconstruction quality. Specifically, they minimize the following rate-distortion loss:
$$L = -\mathbb{E}[\log_2 P_z] + \lambda \, \mathbb{E}[d(x, \hat{x})], \quad (1)$$
where $x, \hat{x}$ are the original and reconstructed images, $z$ is the corresponding latent representation, $P_z(\cdot)$ is the entropy model of the latent distribution, and $d(\cdot, \cdot)$ is a distortion loss. To achieve an optimal rate-distortion trade-off, several methods use a variational autoencoder (VAE) type architecture (Ballé et al., 2016; Minnen et al., 2018; Cheng et al., 2020) and achieve impressive performance at higher BPP; however, they are sub-optimal at low BPP. Some papers have targeted the entropy model: Ballé et al. (2018) propose to use a hyper prior as a source of side information to capture the spatial dependencies. Minnen et al. (2018) follow a similar approach and augment the hierarchical model with an autoregressive one. Cheng et al. (2020) build on the previous approach with a discretized Gaussian mixture entropy model with attention modules.
Other papers have improved the model structure and the architecture (Li et al., 2018; Cheng et al., 2018; 2019), and some used recurrent neural networks (Toderici et al., 2017; Johnston et al., 2018). Others have leveraged the power of generative adversarial networks (GAN) (Goodfellow et al., 2014) and used adversarial losses, especially for low bitrates (Rippel & Bourdev, 2017; Agustsson et al., 2019). These approaches produce high-quality and less perceptually-distorted image reconstructions (Santurkar et al., 2018; Mentzer et al., 2020), even for high bitrates (Tschannen et al., 2018). These methods are computationally heavy as they require adversarial training for each quality level. We argue that this is not practical, especially for data compression, where each quality level requires training a new model from scratch.
The choice of distortion loss is crucial for better reconstruction, traditionally PSNR or MS-SSIM (Wang et al., 2003) are used. These metrics capture the pixel wise distortion and poorly capture the perceptual distortion. Moreover, Blau & Michaeli (2018) shows that there is a trade-off between pixel wise distortion and perceptual quality. This observation is seen clearly for low BPP, where traditional codecs favor blocking artifacts and deep compression systems show blurred and other types of artifacts.
Several works have tried to remedy these limitations; motivated by the success of using perceptual losses such as LPIPS (Zhang et al., 2018) or VGG16 (Johnson et al., 2016) in other applications (Ledig et al., 2017; Dosovitskiy & Brox, 2016; Gatys et al., 2016), some papers (Santurkar et al., 2018; Chen et al., 2020) propose to include a perceptual distortion loss in addition to the pixel-wise ones. These perceptual distortion losses are based on computationally expensive networks such as VGG16 or learned perceptual metrics such as LPIPS. Moreover, the backbone networks are pretrained for an unrelated and discriminative task, and on different datasets, such as image classification on ImageNet. To remedy this, we propose an alternative and more efficient perceptual distortion loss in the latent space.
Video Compression: Deep Video Compression systems also minimize the rate-distortion loss equation 1. In addition to the spatial redundancy (SR), temporal redundancy (TR) is reduced by incorporating motion estimation modules. Traditional methods (Wiegand et al., 2003; Sullivan et al., 2012b) rely on handcrafted and block based modules. Recently, deep learning based video compression systems proposed to replace the traditional modules by learned ones (Lu et al., 2019). Motion estimation (Dosovitskiy et al., 2015) and compensation are often used to address TR. Several improvements have been made to reduce TR; Lin et al. (2020) leverage multiple frames to improve the motion compensation, and Hu et al. (2020) use multi resolution flow maps to effectively compress locally and globally. However, training motion estimation modules is difficult since they require large annotated data. Some approaches perform TR reduction in latent space (Feng et al., 2020; Djelouah et al., 2019; Hu et al., 2021). Others propose to do frames interpolation (Djelouah et al., 2019), or an interpolation in the latent space of GANs (Santurkar et al., 2018).
3 LEARNING PROXY REPRESENTATION FOR COMPRESSION
In this section, we describe our proposed method to learn the latent space dedicated for compression. The schematic diagram is illustrated in Figures 1 and 2. The method can be summarized as follows: an input facial video frame is first projected in the latent space of StyleGAN, W+. Our method consists in learning a transformation T from the original latent space W+ to a new proxy representation denoted as W*_c. The transformation T is defined as a diffeomorphic mapping (i.e., normalizing flow) and is learnt so that the intra image compression and the inter video compression are optimal in W*_c. It is noted that learning a proxy representation in W*_c for compression is also motivated by the fact that the rate-distortion loss cannot be optimized in the original space W+ without training the encoder/generator. Below, we first briefly describe some background material (section 3.1), and then present our methods for images or intra compression (section 3.2) and for videos or inter compression (section 3.3).
3.1 BACKGROUND
StyleGAN Generator StyleGAN (Karras et al., 2019) is the state of the art unconditional GAN for high quality realistic image generation. It consists of a mapping function that takes a vector sampled from a normal distribution (z ∼ N (0, I)) and maps it to an intermediate latent space W using a fully connected network (M ) before feeding it to multiple stages (i.e., W+ space) of the generator (G) to generate an image (x) from the distribution of the real images (x ∼ px):
$$x = G(w), \quad w = M(z), \quad z \sim \mathcal{N}(0, I) \quad (2)$$
It is shown that the latent space of GAN is semantically rich and enjoys several properties such as semantic interpolation (Radford et al., 2016). In addition, the latent vector encoded in W+ space of StyleGAN captures a hierarchical representation of the projected image. In our case, we use StyleGAN2 (Karras et al., 2020) which is an improved version.
StyleGAN Encoder The StyleGAN encoder (Richardson et al., 2021) is a deterministic function denoted as E. Its role is to project a real image into the latent space of StyleGAN (e.g., W or W+), in such a way that the reconstructed image by the StyleGAN generator is minimally distorted (x̂ = G(E(x)) ≈ x). In our case, the image is projected in W+ with dimension of (18 × 512) where each dimension controls a different convolution layer of the StyleGAN generator. Currently the encoding based GAN inversion approaches are not ideal, which explains the difference between the projected and the original image.
3.2 IMAGE COMPRESSION (SGANC)
We assume that we have a pretrained and fixed StyleGAN2 generator G that takes a latent code w ∈ W+ and generates a high resolution image of size 1024 × 1024, and an encoder E (pretrained and fixed) that embeds any given image x to the latent code w in W+ such that G(E(x)) ≈ x. Our objective is to learn a new latent space W*_c optimal for image compression. The space W*_c is obtained using a bijective transformation T : W+ → W*_c, which is parametrized as a normalizing flows model (note that our work only requires the bijectivity; as such, no maximum-likelihood term is included in the training objective). T maps a latent code w ∈ W+ into w*_c ∈ W*_c such that the latent vectors w*_c have the minimum entropy and sufficient information to generate the original image with minimal distortion from the inverted latent code T^{-1}(w*_c) using G. The entropy model is trained on the latent codes w*_c ∈ W*_c by minimizing the following rate loss after the quantization:
$$R = -\mathbb{E}\left[\sum_{i=1}^{D} \log_2 p_i\left(Q(T(w))\right)\right] = -\mathbb{E}\left[\sum_{i=1}^{D} \log_2 p_i\left(T(w) + \epsilon\right)\right] \quad (3)$$
For equation 3 to be differentiable, following (Ballé et al., 2017), we relax the hard quantization Q by adding uniform noise to the latent vectors T(w). Here, $p_i$ is the ith dimension of the probability density function in W*_c, D is the latent vector dimension, w = E(x) where x is the input image, and $\epsilon$ is sampled from a uniform distribution $\mathcal{U}_{[-0.5, 0.5]}$. The entropy model p is modeled as the fully factorized entropy model, and it is also parameterized by a neural network as in (Ballé et al., 2018). The transformed/quantized latent codes should also have sufficient information to reconstruct the image, and this is achieved by minimizing a distortion loss. In general, the distortion loss is computed in the image space; here, we propose a new efficient distortion loss directly in the latent space:
$$D = d\left(w, T^{-1}\left(T(w) + \epsilon\right)\right), \quad (4)$$
where d is any distortion loss; in this paper we use the mean squared error (MSE) loss. Equation 4 can be seen as a perceptual distortion loss, and it is motivated by the fact that the latent space of GANs is semantically rich. We argue that this is true especially for StyleGAN, where the latent code is extracted from several layers of the StyleGAN encoder, which allows capturing multiscale and semantic features/representations. Compared to existing perceptual losses, our loss does not require generating the images during training or computing heavy losses such as VGG16 or LPIPS in the image space. The total loss used to learn our proposed latent space W*_c is a trade-off between the rate (equation 3) and distortion (equation 4), as shown below
$$L = R + \lambda D \quad (5)$$
where λ is the trade-off parameter, and the transformation T and the entropy model p are jointly optimized. Once the optimization is completed, the latent codes in W*_c are quantized with Q using the rounding operator to create the bit-stream.
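A minimal sketch of one training step for equation (5) is given below; the invertible mapping `flow` (with forward/inverse methods) and the `entropy_model` returning per-dimension likelihoods are assumed interfaces for illustration only, and do not correspond to any released code.

```python
import torch

def sganc_training_step(w, flow, entropy_model, lam):
    """Rate-distortion loss of equation (5) on a batch of latent codes w of shape (B, 18, 512)."""
    w_c = flow.forward(w)                                # map W+ -> W*_c
    noise = torch.empty_like(w_c).uniform_(-0.5, 0.5)    # relaxed quantization (equation 3)
    w_c_noisy = w_c + noise
    likelihoods = entropy_model(w_c_noisy)               # p_i(T(w) + eps), same shape as w_c
    rate = -torch.log2(likelihoods).sum() / w.shape[0]   # rate term R, in bits per sample
    w_rec = flow.inverse(w_c_noisy)                      # back to W+
    distortion = torch.mean((w - w_rec) ** 2)            # latent-space MSE (equation 4)
    return rate + lam * distortion                       # total loss (equation 5)
```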
3.3 VIDEO COMPRESSION (SGANC IC)
Here, we propose our approach for video compression, denoted as SGANC IC, using learned inter coding with residuals. Video compression methods and standards rely on motion estimation and compensation modules to leverage the temporal dependencies between frames. As our approach is formulated in the latent space, our inter coding scheme is based on the successive latent differences, since these differences reflect the temporal changes. We argue that this leads to efficient compression due to the good properties of the latent space of GANs (e.g., linear interpolation). Specifically, inter coding with the residuals is performed in the latent space (Figure 2). Given a sequence of frames {x1, x2, ..., xt−1, xt, ...}, a pretrained encoder E is first used to obtain the latent representation. Then, similarly to intra coding, a transformation T is learned to map these frames to W*_IC, leading to a sequence of latent codes {w*_1, w*_2, ..., w*_{t−1}, w*_t, ...}. The mapping T is learnt such that the sequence of latent codes in W*_IC is optimal for inter-coding by taking the temporal dependencies into account. Our approach can be summarized as follows (the complete description of the algorithm can be retrieved in appendix A.2, and a short code sketch is given after the list):
• The first latent code is coded using the method described in section 3.2: ŵ∗0 (using the same entropy model or preferably another one trained for image compression). The following steps are repeated until the end of the video;
• The difference between two consecutive latent codes is computed and quantized to obtain $\hat{v}_t = Q(w^*_t - w^*_{t-1})$.
• From the previously reconstructed code, an estimate of the latent code at frame t is obtained as $\bar{w}^*_t = \hat{w}^*_{t-1} + \hat{v}_t$.
• The residual between the estimated and the actual latent code is computed and quantized as $\hat{r}_t = Q(w^*_t - \bar{w}^*_t)$ (either for all the frames or once every gap of g frames).
• The quantized difference v̂t and the residual r̂t are compressed using an entropy coding and sent to the receiver.
• On the receiver side, the current latent code is reconstructed from the residual and the estimated latent code (or, for frames without a residual, from the estimated latent code only): $\hat{w}^*_t = \bar{w}^*_t + \hat{r}_t$. • The latent codes in W*_IC are remapped to W+ to generate the images using the pretrained StyleGAN2 G.
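The sketch below illustrates the reconstruction logic of the steps above on a list of latent codes already mapped to W*_IC; hard rounding is used as the quantizer Q, a residual is sent every `gap` frames, and the actual entropy coding of the transmitted symbols is omitted (all of these are simplifying, illustrative choices).

```python
import numpy as np

def inter_code(latents, gap=4):
    """Decoder-side reconstruction from quantized latent differences and periodic residuals."""
    Q = np.round                                # hard quantization at test time
    rec = [Q(latents[0])]                       # intra-coded first latent code
    for t in range(1, len(latents)):
        v_t = Q(latents[t] - latents[t - 1])    # quantized difference of consecutive codes
        w_bar = rec[-1] + v_t                   # estimate from the previous reconstruction
        if t % gap == 0:                        # a residual is transmitted every `gap` frames
            w_bar = w_bar + Q(latents[t] - w_bar)
        rec.append(w_bar)
    return rec
```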
To learn the new latent space W*_IC for inter-coding, the transformation T and the entropy model (p) are learned to optimize the rate-distortion loss:
$$L_{IC} = d\left(w_t, T^{-1}(\hat{w}^*_t)\right) - \lambda\left(\mathbb{E}\left[\sum_{i=1}^{D} \log_2 p_i(\hat{v}_t)\right] + \mathbb{E}\left[\sum_{i=1}^{D} \log_2 q_i(\hat{r}_t)\right]\right) \quad (6)$$
Similarly to section 3.2, we replace the quantization Q by adding uniform noise, and the entropy model is modeled as in (Ballé et al., 2018). While equation 6 has two entropy models, we show below that it is sufficient to learn only one entropy model on the differences $\hat{v}_t$, as $\hat{r}_t$ admits an explicit probability distribution known as the Irwin-Hall distribution (the proof of the following Lemma 1 can be found in appendix A.1).
Lemma 1 Let $Q(x) = x + \epsilon$ be the continuous relaxation of the quantization, where $\epsilon$ follows the uniform distribution $\mathcal{U}_{[-0.5, 0.5]}$. Let $w^*_0 \in \mathbb{R}^n$, $\hat{w}^*_0 = Q(w^*_0)$, and $\bar{w}^*_t = \hat{w}^*_{t-1} + Q(w^*_t - w^*_{t-1})$, $t = 1, \ldots, n$. If $\hat{r}_t = Q(w^*_t - \bar{w}^*_t)$ is the quantized residual defined for every $t$ such that $t \equiv 0 \pmod{g}$, then $\hat{r}_t$ follows the Irwin-Hall distribution with parameter $3 + (g-1)$ and with its support shifted by $-0.5 \cdot (3 + (g-1))$.
This leads to optimizing our latent space W*_IC for the rate-distortion loss with only one entropy model for $\hat{v}_t$, by discarding the last term in equation (6). At test time, the explicit distribution of the residuals can be used for entropy coding.
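Lemma 1 can be checked numerically with a short Monte-Carlo simulation of the accumulated quantization noise at the first residual frame (the latent codes are set to zero so only the noise remains); the sample size and gap value are arbitrary illustrative choices.

```python
import numpy as np

def simulate_residual(gap=4, n_trials=200_000, seed=0):
    """Simulate the quantization-noise residual r_hat at the first residual frame t = gap."""
    rng = np.random.default_rng(seed)
    eps = lambda: rng.uniform(-0.5, 0.5, n_trials)
    w_hat = eps()                           # hat{w}*_0 = Q(w*_0), latent codes set to zero
    for _ in range(gap):                    # accumulate quantized differences up to t = gap
        w_hat = w_hat + eps()               # bar{w}*_t = hat{w}*_{t-1} + Q(w*_t - w*_{t-1})
    return -w_hat + eps()                   # r_hat_t = Q(w*_t - bar{w}*_t)

gap = 4
residual = simulate_residual(gap)
m = 3 + (gap - 1)                           # number of summed uniforms claimed by Lemma 1
print(residual.var(), m / 12.0)             # empirical variance vs Irwin-Hall variance m/12
```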
Stage-Specific entropy models: In Karras et al. (2019), it is shown that each stage/layer of the StyleGAN generator corresponds to a specific scale of details. To leverage this hierarchical structure, we propose stage-specific entropy models. Specifically, the first layers, which correspond to coarse resolutions (e.g., $4^2$–$8^2$), mainly affect high-level aspects of the image, such as the pose and face shape, while the last layers affect low-level aspects such as textures, colors, and small micro structures. Here, we propose to leverage this hierarchical structure and weight the distortion λ differently for each layer of the generator (note that the latent code in W+ or W*_IC consists of 18 latent codes of dimension 512, and each one corresponds to one layer in the generator, hence its dimension is (18, 512)). For practical reasons, we split the 18 layers into three stages (1−8, 8−13, 13−18), and learn one transformation and entropy model for each group. The distortion weight λ is chosen to be higher for the first layers and decreases subsequently.
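The per-layer weighting can be sketched as follows; the stage boundaries follow the split given above, while the specific decay values are illustrative placeholders rather than the schedule actually used (λ constant over the first stage and decreasing afterwards).

```python
import torch

# Stages of the (18, 512) latent code: coarse (1-8), middle (8-13), fine (13-18) layers.
STAGES = [slice(0, 8), slice(8, 13), slice(13, 18)]

def stage_weighted_distortion(w, w_rec, stage_lambdas=(1.0, 0.3, 0.01)):
    """Per-stage weighted MSE between original and reconstructed latents of shape (B, 18, 512)."""
    loss = 0.0
    for stage, lam in zip(STAGES, stage_lambdas):
        loss = loss + lam * torch.mean((w[:, stage] - w_rec[:, stage]) ** 2)
    return loss
```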
4 EXPERIMENTS
4.1 DATASETS
Celeba-HQ (Karras et al., 2018), a dataset consisting of 30000 high quality face images of 1024×1024 resolution.
FILMPAC, a dataset consisting of 5 high-resolution video clips whose length varies between 60 and 260 frames. These videos can be found on the filmpac website (https://filmpac.com/) by searching their names (FP006734MD02, FP006940MD02, FP009971HD03, FP010363HD03, FP010263HD03).
MEAD (Wang et al., 2020), a high resolution talking face video corpus for many actors with different emotions and poses. The training dataset for inter-coding consists of 2.5 k videos with frontal poses. For evaluation we created MEAD-inter dataset consisting of 10 videos (selected from MEAD) of different actors with frontal pose. We also created MEAD-intra dataset consisting of 200 frames selected from these videos with frontal pose for evaluating image compression methods.
Dataset preprocessing: All the frames are cropped and aligned using the same preprocessing method as that of the FFHQ dataset (Karras et al., 2019), on which StyleGAN is pretrained. As we compare the reconstructed image with the projected one for SGANC, we project all the frames (encode the original images and reconstruct them using StyleGAN2). All frames are of high resolution (1024 × 1024).
4.2 IMPLEMENTATION DETAILS
We used the StyleGAN2 generator (G) (Karras et al., 2020) pretrained on the FFHQ dataset (Karras et al., 2019). The images are encoded in W+ using a pretrained StyleGAN2 encoder (E) (Richardson et al., 2021). The parameters of the generator and the encoder remain fixed in all the experiments. The latent vector dimension in W+ and W*_c is 18 × 512. The mapping function T is modeled using the RealNVP architecture (Dinh et al., 2017) without batch normalization. It consists of 13 coupling layers, and each coupling layer consists of 3 fully connected (FC) layers for the translation function and 3 FC layers for the scale function, with LeakyReLU as the hidden activation and Tanh as the output one (total number of trainable parameters = 20.5 M). For the entropy model, we used the fully factorized entropy model (Ballé et al., 2018) based on the implementation from the CompressAI library (Bégaint et al., 2020). For the stage-specific entropy model, λ is kept constant for the first stage and then decreased linearly to be 1e−2 smaller for the last layer. For all the experiments, we used the Adam optimizer with β1 = 0.9 and β2 = 0.999, a learning rate of 1e−4, and a batch size of 8. Once the training is completed, we use a Range Asymmetric Numeral System coder (Duda, 2013) to obtain the bit-stream.
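For completeness, a minimal affine coupling layer in the spirit of RealNVP (without batch normalization, as described above) is sketched below; the hidden width, the coordinate split, and the exact placement of the output activations are illustrative and do not reproduce the exact architecture.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer on vectors of dimension `dim`.
    Half of the coordinates are transformed conditioned on the other half,
    which keeps the mapping exactly invertible."""

    def __init__(self, dim=512, hidden=512):
        super().__init__()
        half = dim // 2
        def mlp(out_act):
            return nn.Sequential(
                nn.Linear(half, hidden), nn.LeakyReLU(),
                nn.Linear(hidden, hidden), nn.LeakyReLU(),
                nn.Linear(hidden, half), out_act,
            )
        self.scale = mlp(nn.Tanh())          # 3 FC layers, Tanh output for the scale
        self.translate = mlp(nn.Identity())  # 3 FC layers for the translation

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(self.scale(x1)) + self.translate(x1)], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        return torch.cat([y1, (y2 - self.translate(y1)) * torch.exp(-self.scale(y1))], dim=-1)
```

Stacking several such layers with alternating splits yields an invertible mapping T whose inverse is available in closed form, as required for the latent-space distortion in equation (4).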
Training: We encode all the images once, and the training is performed solely on the latent codes. Thus, we need neither the generator nor the encoder during training, which makes the approach light and fast to train. For SGANC, we train on the Celeba-HQ dataset. For SGANC IC, we train on 2.5 k videos from the MEAD dataset (Wang et al., 2020), where each batch contains video slices of 9 frames. All the frames are preprocessed as in section 4.1.
4.3 RESULTS
In this section we present the experimental results of our proposed method for image and video compression. We used the following metrics to assess the quality of the compression methods: Peak Signal to Noise Ratio (PSNR), Multi Scale Structural Similarity (MS-SSIM) (Wang et al., 2003), Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018), and the Perceptual Information Metric (PIM) with a mixture of 5 Gaussians (Bhardwaj et al., 2020).
LPIPS and PIM are two perceptual metrics; PIM was recently proposed and follows an information-theoretic approach inspired by the human perceptual system (Bhardwaj et al., 2020). The authors of LPIPS, on the other hand, show that it is consistent with human perception. We report the size of the compressed images in bits per pixel (BPP), which is common in evaluating image and video compression.
To exclude the distortion coming from the GAN inversion technique, all the frames are projected (i.e., inverted) using the encoder. The projected frames are used for all approaches in the main paper as a baseline to compute the distortion metrics. For the sake of completion, we also present evaluation results with the original images in the appendix A.4.
4.3.1 IMAGE COMPRESSION
We compare our method with the most recent state-of-the-art codecs such as VTM (VTM, 2020), AV1 (AV1, 2018) and deep compression models such as scale Hyper Prior (HP) (Ballé et al., 2018), factorized entropy model with scale and mean hyperpriors (MeanHP) (Minnen et al., 2018) and the anchor variant of Cheng et al. (2020). MeanHP and (Cheng et al., 2020) are trained with the MSSSIM objective and HP with the MSE (we found that this leads to better results) on the Celeba-HQ dataset. All the methods are evaluated on the independent evaluation dataset: MEAD-Intra and FILMPAC.
Quantitative results: We present the rate-distortion curves for different BPP on the MEAD-intra dataset in Figure 3. As can be observed, for the perceptual metrics LPIPS and PIM, our method significantly outperforms the state-of-the-art methods by a large margin. Deep learning based methods are better than traditional codecs at high BPP but they are inferior at lower BPP. This observation highlights the potential of our method to obtain reconstructions with good perceptual quality. For classical metrics, our method is the best at medium and high BPP for MS-SSIM, and has the highest PSNR at high BPP. It is noted that the deep learning based competitors trained with the MS-SSIM objective are still not able to outperform the traditional codecs VTM and AV1. The above observations also hold for the evaluation on the FILMPAC dataset; please refer to Figure 17 in the appendix.
Qualitative results: Next we discuss the qualitative results, shown in Figure 4. The visual inspection reveals that, at a comparable BPP, AV1 introduces blocking artifacts while VTM generates blurry results. The reconstruction of MeanHP is not sharp enough and presents color distortion. Our method delivers a near-perfect reconstruction of the projected image and preserves all the details. For example, the details of the eyebrows are preserved, whereas the existing methods fail. Additional images are displayed in appendix A.4.
4.3.2 VIDEO COMPRESSION
We compare our method (SGANC IC) with two of the most recent and best-performing state-of-the-art methods: the Versatile Video Coding Test Model (VTM) (VTM, 2020) and the H.265 standard (Sullivan et al., 2012a). We used the official implementations for both VTM (random access with GOP = 16) and H.265 (FFmpeg library). Each method is evaluated on the MEAD-inter dataset.
Quantitative results: Figure 6 presents the quantitative evaluation curves on the MEAD-inter dataset. The computed metrics are first averaged across the frames in each video, and then averaged across all the videos. Similar to the observations in section 4.3.1, our proposed method for video compression achieves impressive performance and significantly outperforms the state-of-the-art methods with respect to perceptual metrics; with the perceptual distortion metric PIM in particular, our method is significantly better. In terms of PSNR, our method is comparable with H.265 at low bit-rates and outperforms both methods at high BPP.
Qualitative results: Figure 5 compares the reconstruction quality of all the methods at a comparable BPP on a single frame extracted from the compressed video. Our method is almost artifact-free, photorealistic and perceptually more pleasant, while VTM leads to blurry results and H.265 exhibits blocking artifacts. For more visual results please refer to appendix A.4. Note that, despite the high quality of our method, the reconstruction of StyleGAN (projected vs. original) is still not perfect. Once this factor is eliminated, the distortion coming from the quantization (projected vs. our method) is negligible, which makes our method very promising given the rapid improvement of GAN generation and inversion.
The ablation studies related to the different distortion losses, alternative choices of the transformation T and other design choices are discussed in appendix A.3 due to space limitations.
5 CONCLUSION
We have proposed in this paper a new paradigm for facial image and video compression based on GANs. Our framework is efficient to train and leads to perceptually competitive results compared to the most efficient state-of-the-art compression systems. We believe that at low bitrates, our solution leads to a different and more acceptable type of distortion, since the reconstructed image is very sharp and photorealistic. Our approach is not restricted to faces, since GANs have been proposed for various natural objects. For a specific category of object for which a GAN has not already been trained, our approach could be used after training the specific GAN as well as the compression bottleneck. The main limitation of our method is the approximation of any input image using StyleGAN. We believe that the continuous and impressive improvements of GAN inversion and generation will mitigate this limitation in the near future.
A APPENDIX
The Appendix is organized as follows: in section A.1, we prove that the distribution of residuals follows the known Irwin-Hall distribution; in section A.2 we detail the algorithms for training and testing of the inter coding with residual approach; in section A.3 we detail the ablation studies for image and video compression. Finally, in section A.4 we provide more results.
A.1 EXPLICIT DISTRIBUTION OF THE RESIDUALS: PROOF OF LEMMA 1
Let $Q(x) = x + \epsilon$ be the continuous relaxation of the quantization, where $\epsilon$ follows the uniform distribution $\mathcal{U}_{[-0.5,0.5]}$. Let $t$ be the frame index, ranging over $\{0, 1, \dots, K-1\}$.
Let $w^*_0 \in \mathbb{R}^n$ and $\hat{w}^*_0 = Q(w^*_0)$. Let us define
$\hat{v}_t = Q(w^*_t - w^*_{t-1}) = w^*_t - w^*_{t-1} + \epsilon, \qquad \bar{w}^*_t = \hat{w}^*_{t-1} + \hat{v}_t. \quad (7)$
Similarly, we define
$\hat{r}_t = \begin{cases} Q(w^*_t - \bar{w}^*_t) & \text{if } t \equiv 0 \pmod{g} \\ 0 & \text{otherwise.} \end{cases} \quad (8)$
We now prove that $\hat{r}_t$ follows the Irwin-Hall, or uniform sum, distribution (Johnson et al., 1995). We have
$\hat{r}_t = Q(w^*_t - \bar{w}^*_t) = w^*_t - \bar{w}^*_t + \epsilon_1 = w^*_t - (\hat{w}^*_{t-1} + \hat{v}_t) + \epsilon_1 = w^*_t - (\hat{w}^*_{t-1} + w^*_t - w^*_{t-1} + \epsilon_2) + \epsilon_1 = w^*_{t-1} - \hat{w}^*_{t-1} - \epsilon_2 + \epsilon_1. \quad (9)$
For $t \equiv 0 \pmod{g}$, the reconstructed latent code is computed from the residual and can be written as
$\hat{w}^*_t = \hat{r}_t + \bar{w}^*_t = Q(w^*_t - \bar{w}^*_t) + \bar{w}^*_t = w^*_t - \bar{w}^*_t + \epsilon + \bar{w}^*_t = w^*_t + \epsilon. \quad (10)$
Otherwise it is the estimated latent code $\bar{w}^*_t$, so $\hat{w}^*_t$ can be written as
$\hat{w}^*_t = \begin{cases} w^*_t + \epsilon & \text{if } t \equiv 0 \pmod{g} \\ \bar{w}^*_t & \text{otherwise.} \end{cases} \quad (11)$
For $g = 1$, every index satisfies $t - 1 \equiv 0 \pmod{g}$, so equation 9 becomes
$\hat{r}_t = w^*_{t-1} - \hat{w}^*_{t-1} - \epsilon_2 + \epsilon_1 = w^*_{t-1} - (w^*_{t-1} + \epsilon_3) - \epsilon_2 + \epsilon_1 = -\epsilon_3 - \epsilon_2 + \epsilon_1, \quad (12)$
a sum of three independent $\mathcal{U}_{[-0.5,0.5]}$ variables (flipping the sign of a symmetric uniform variable does not change its distribution).
For $g > 1$, let us define the quantity
$\hat{m}_t = \begin{cases} \hat{r}_t & \text{if } t \equiv 0 \pmod{g} \\ Q(w^*_t - \bar{w}^*_t) & \text{otherwise.} \end{cases} \quad (13)$
Using equation 11 and equation 13, and noting that $t - 1 \not\equiv 0 \pmod{g}$ implies $\hat{w}^*_{t-1} = \bar{w}^*_{t-1}$, we can unroll equation 9 over the $g-1$ frames that carry no residual (relabelling the independent noise terms as $\epsilon_i$):
$\hat{r}_t = w^*_{t-1} - \hat{w}^*_{t-1} - \epsilon_2 + \epsilon_1 = w^*_{t-1} - \bar{w}^*_{t-1} - \epsilon_2 + \epsilon_1 = \hat{m}_{t-1} - \epsilon_2 = \dots = \hat{m}_{t-(g-1)} - \sum_{i=1}^{g-1} \epsilon_i. \quad (14)$
Expanding $\hat{m}_{t-(g-1)}$ in the same way as equation 9 gives
$\hat{r}_t = w^*_{t-g} - \hat{w}^*_{t-g} - \epsilon_g + \epsilon_{g+1} - \sum_{i=1}^{g-1} \epsilon_i. \quad (15)$
As $t - g \equiv 0 \pmod{g}$ (the residual is used only when $t \equiv 0 \pmod{g}$), equation 11 lets us replace $\hat{w}^*_{t-g}$ by its relaxed approximation $w^*_{t-g} + \epsilon_{g+2}$:
$\hat{r}_t = w^*_{t-g} - (w^*_{t-g} + \epsilon_{g+2}) - \epsilon_g + \epsilon_{g+1} - \sum_{i=1}^{g-1} \epsilon_i = \epsilon_{g+1} - \epsilon_g - \epsilon_{g+2} - \sum_{i=1}^{g-1} \epsilon_i, \quad (16)$
which is a sum of $3 + (g-1)$ independent random variables, each uniform on $[-0.5, 0.5]$ up to a sign that does not affect its law. We have thus shown that $\hat{r}_t$ follows the uniform sum distribution, i.e., the Irwin-Hall distribution $IH(x; n)$ with parameter $n = 3 + (g-1)$ and support $x \in [-0.5n, 0.5n]$. Since $\epsilon$ follows $\mathcal{U}_{[-0.5,0.5]}$ rather than $\mathcal{U}_{[0,1]}$, the support of the standard Irwin-Hall distribution is shifted by $-0.5n$.
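To make the lemma concrete, the following stand-alone sketch (ours, not part of the paper; all variable names are illustrative) simulates the relaxed quantization chain for scalar latent codes and checks that the empirical variance of the residual matches the Irwin-Hall prediction n/12 with n = 3 + (g - 1).

# Hypothetical numerical check of Lemma 1 (assumption: scalar latents suffice,
# since the relaxation acts per dimension).
import numpy as np

rng = np.random.default_rng(0)

def simulate_residual(g, num_frames=64, trials=20000):
    """Return samples of r_hat_t at the first t > 0 with t % g == 0."""
    samples = []
    for _ in range(trials):
        w = rng.normal(size=num_frames)           # arbitrary scalar latents w*_t
        q = lambda x: x + rng.uniform(-0.5, 0.5)  # relaxed quantization Q(x) = x + eps
        w_hat = q(w[0])                           # intra-coded first frame
        for t in range(1, num_frames):
            v_hat = q(w[t] - w[t - 1])            # quantized latent difference
            w_bar = w_hat + v_hat                 # estimated latent code
            if t % g == 0:
                samples.append(q(w[t] - w_bar))   # quantized residual
                break
            w_hat = w_bar
    return np.asarray(samples)

for g in (1, 2, 5):
    r = simulate_residual(g)
    n = 3 + (g - 1)
    print(f"g={g}: empirical var={r.var():.4f}, Irwin-Hall n/12={n/12:.4f}")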
A.2 INTER CODING (SGANC IC)
In this section we present the algorithms for video compression using inter coding with residuals, during training and testing:
Algorithm .1: SGANC IC (test time)
Result: compressed frame sequence {x̂1, x̂2, ..., x̂t, ..., x̂N}
Initialization: frame sequence {x1, x2, ..., xt, ..., xN}, N = number of frames, encoder E, generator G, transformation T, entropy coder EC and decoder ED, quantizer Q, residual-coding period g
ŵ∗0 = ED(EC(Q(T(E(x0)))))                // intra coding of the first frame
for t = 1 to N − 1:
    w∗t = T(E(xt))                        // encode the frames and map them to W∗c
    w∗t−1 = T(E(xt−1))
    v̂t = ED(EC(Q(w∗t − w∗t−1)))          // quantize, compress and decompress (receiver) the differences
    w̄∗t = ŵ∗t−1 + v̂t                     // compute an estimate of the latent code
    if t mod g == 0:
        r̂t = ED(EC(Q(w∗t − w̄∗t)))        // quantize, compress and decompress (receiver) the residual
        ŵ∗t = w̄∗t + r̂t                   // reconstruct the latent code
    else:
        ŵ∗t = w̄∗t
    x̂t = G(T−1(ŵ∗t))                     // reconstruct the image
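The Python sketch below (ours) mirrors the control flow of Algorithm .1. The encoder E, generator G, mapping T and the entropy codec are placeholders stubbed with identity functions, so only the bookkeeping of differences and residuals is meant to be faithful; it is not the paper's implementation.

import numpy as np

E = lambda x: x                  # StyleGAN encoder (stub)
G = lambda w: w                  # StyleGAN generator (stub)
T = lambda w: w                  # learned mapping W+ -> W*c (stub)
T_inv = lambda w: w              # inverse mapping (stub)
Q = np.round                     # hard quantization at test time
codec = lambda y: y              # lossless entropy encode + decode round trip (stub)

def compress_video(frames, g=10):
    """Reproduce the reconstruction path of Algorithm .1 for a list of frames."""
    w_hat_prev = codec(Q(T(E(frames[0]))))        # intra-code the first frame
    recon = [G(T_inv(w_hat_prev))]
    for t in range(1, len(frames)):
        w_t = T(E(frames[t]))
        w_prev = T(E(frames[t - 1]))
        v_hat = codec(Q(w_t - w_prev))            # transmit quantized difference
        w_bar = w_hat_prev + v_hat                # receiver-side estimate
        if t % g == 0:
            r_hat = codec(Q(w_t - w_bar))         # transmit quantized residual
            w_hat = w_bar + r_hat
        else:
            w_hat = w_bar
        recon.append(G(T_inv(w_hat)))
        w_hat_prev = w_hat
    return recon

frames = [np.random.randn(18, 512) for _ in range(5)]  # toy latent-sized inputs
print(len(compress_video(frames, g=2)))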
Algorithm .2: SGANC IC (training)
Result: transformation T, entropy model p
Initialization: video dataset encoded as latent codes, where each video = {w1, w2, ..., wt, ...}; number of frames per batch N; dataset size S; encoder E; generator G
for i = 1 to S:
    L = 0
    for t = 1 to N − 1:
        w∗t, w∗t−1 = T(wt), T(wt−1)       // map the latent codes to W∗c
        v̂t = Q(w∗t − w∗t−1)               // quantize (add noise to) the differences
        w̄∗t = ŵ∗t−1 + v̂t                  // compute an estimate of the latent code
        ŵ∗t = w̄∗t
        L = L + ℓt                         // accumulate the per-frame loss ℓt (equation 17)
    update the parameters of T and p to minimize L
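The sketch below (ours) illustrates how the training loss of Algorithm .2 can be accumulated over one video slice. The mapping T and the entropy model are replaced by simple stand-ins (identity map, unit-Gaussian density), so it only shows the noise-relaxed quantization and the loss accumulation, not the paper's trained modules.

import numpy as np

rng = np.random.default_rng(0)
T = lambda w: w                            # stub for the learned mapping
T_inv = lambda w: w                        # stub inverse
log2_p = lambda v: -0.5 * v ** 2 / np.log(2) - 0.5 * np.log2(2 * np.pi)  # unit-Gaussian stand-in density

def inter_coding_loss(latents, lam=1.0):
    """Accumulate distortion + rate over one video slice (list of W+ codes)."""
    total = 0.0
    w_hat_prev = T(latents[0]) + rng.uniform(-0.5, 0.5, latents[0].shape)  # noisy "quantized" first code
    for t in range(1, len(latents)):
        w_t, w_prev = T(latents[t]), T(latents[t - 1])
        v_hat = (w_t - w_prev) + rng.uniform(-0.5, 0.5, w_t.shape)  # noise-relaxed quantization
        w_hat = w_hat_prev + v_hat                                  # estimated latent code
        distortion = np.mean((latents[t] - T_inv(w_hat)) ** 2)      # latent-space MSE
        rate = -np.sum(log2_p(v_hat))                               # bits for the differences
        total += distortion + lam * rate
        w_hat_prev = w_hat
    return total

slice_ = [rng.normal(size=(18, 512)) for _ in range(9)]
print(inter_coding_loss(slice_))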
A.3 ABLATION STUDY
In this section we detail the ablation study for image and video compression.
A.3.1 IMAGE COMPRESSION (DISTORTION LOSS)
In this section we compare different choices of the distortion loss. Specifically, we compare the MSE loss in the image space (Img. D), the MSE loss in the latent space W+ (Lat. D), the combination of both (Lat.-Img. D), the LPIPS loss in the image space (LPIPS D) and the combination of the MSE and LPIPS losses in the image space (LPIPS-Img. D). The implementation details are the same as in section 4.2 except for the choice of the distortion loss, and we use a batch size of 4 when training with LPIPS. We compare on both the MEAD-intra and FILMPAC datasets.
Results: From Figures 7 and 8, we can notice that our loss (Lat. D) outperforms the MSE and LPIPS distortions in the image space. Moreover, the losses in the image space produce some artifacts and blurred images, while the loss in the latent space is almost artifact-free. There is no benefit in using a loss in the image space in addition to ours. Note that training using only the latent loss is faster than the others (the training took 5 days for LPIPS-Img. D, 4 days for Lat.-Img. D and 6 hours for Lat. D) and occupies less GPU memory (24 GB with batch size 4 for LPIPS-Img. D, 10 GB with batch size 4 for Img. D and Lat.-Img. D, and less than 2 GB with batch size 8 for Lat. D).
A.3.2 IMAGE COMPRESSION (TRANSFORMATION T )
In this section, we investigate the importance of using a bijective transformation (i.e., Normalizing Flows). To this end, we replace the Real NVP model with an autoencoder (AE) and retrain it with the entropy model on Celeba-HQ with the same implementation details as in section 4.2. The AE consists of 7 layers with the same dimension (i.e., 512) for the encoder and 7 layers with the same dimension for the decoder with ReLU activation functions.
Results: From Figure 9, we can notice that parametrizing the transformation T as Normalizing Flows (i.e., Real NVP) leads to better results. For instance, some facial attributes are changed when using an AE (such as the age and the skin color), as well as the person’s identity.
A.3.3 VIDEO COMPRESSION (INTER CODING: EXPLICIT SPARSIFICATION)
Having only a few dimensions that change between two consecutive latent codes is efficient for entropy coding. Thus, we investigated explicitly sparsifying these differences by adding an L1 regularization on the latent code differences during training. The loss becomes:
$\mathcal{L}_{IC\text{-}L1} = \mathcal{L}_{IC} + \lambda_{L1} \, \lVert w^*_{t-1} - w^*_t \rVert_1 \quad (17)$
Note that, as the transformation T and the entropy model are trained jointly, we expect the latent codes in W∗c to be transformed so as to fit the entropy model efficiently. In the following, we assess whether explicit sparsification brings additional improvement (section A.3.4).
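A minimal sketch (ours) of the extra term of equation 17; the weight lam_l1 and the helper name are illustrative only.

import numpy as np

def l1_on_latent_differences(latents):
    """L1 penalty on successive latent-code differences (the extra term in equation 17)."""
    return sum(np.abs(latents[t - 1] - latents[t]).sum() for t in range(1, len(latents)))

lam_l1 = 1e-3
latents = [np.zeros((18, 512)), np.ones((18, 512))]
# total training loss (sketch): L_IC_L1 = L_IC + lam_l1 * l1_on_latent_differences(latents)
print(lam_l1 * l1_on_latent_differences(latents))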
A.3.4 VIDEO COMPRESSION (INTER CODING)
In this section we present the ablation study for SGANC IC, in which we investigate the effect of the following parameters: the period g for residual coding, the L1 regularization of equation 17, residual coding (res) vs. intra coding (intra) every g frames, and stage-specific entropy models, i.e., using different entropy models and distortion weights λ for the different stages of StyleGAN2 (SS). We thus compare the following variants:
• SGANC IC intra g10: replacing the residual coding every g = 10 frames by intra coding using one of the image compression models trained in section 3.2.
• SGANC IC res g10: residual coding every g = 10 frames.
• SGANC IC res g80: residual coding every g = 80 frames.
• SGANC IC res g10 L1: adding the L1 regularization of equation 17 during training and residual coding every g = 10 frames.
• SGANC res g10 SS: using 3 entropy models and 3 T models, one for each stage of StyleGAN2 (1-7, 7-13, 13-18), and training with layer-specific distortion weights λ = wλ, where w = 1 for the first stage and decreases from 1 to 0.01 over the second and third stages.
• SGANC res g10 SS L1: residual coding every g = 10 frames, stage-specific entropy models and L1 regularization.
• SGANC res g2 SS: residual coding every g = 2 frames, stage-specific entropy models.
• SGANC res g1 SS L1: same as before but with residual coding at every frame (g = 1).
From Figure 10, we can notice the following:
• Decreasing g leads to better results, at the expense of an increased BPP, especially for g = 1.
• Significant improvements are obtained by performing residual coding (SGANC IC intra g10 vs. SGANC IC res g10) and by using stage-specific entropy models (SGANC IC res g10 vs. SGANC IC res g10 SS).
• Only a marginal improvement is obtained by adding the L1 regularization during training (SGANC res g10 vs. SGANC res g10 L1). In addition, the improvement becomes negligible when using SS entropy models (SGANC res g10 SS vs. SGANC res g10 SS L1).
A.4 MORE RESULTS
In this section we show more quantitative and qualitative results for image and video compression.
A.4.1 IMAGE COMPRESSION
In this section we show the results for image compression (Figures 11, 12, 15, 16, 13, 14). Contrary to the results presented in the main paper, here we present, for the sake of completeness, the results computed against the original images (except for our method, which is still compared against the projected images).
A.4.2 VIDEO COMPRESSION
In this section, we show quantitative and qualitative results for video compression. Contrary to our main results in the paper, here we compress the original videos and compute the metrics against the original frames (except for our approach, which is still compared to the projected frames).
From Figures 18, 20, 21, 22 and 23, we can notice that VTM and H.265 introduce blocking and blurring artifacts, which is not the case for our approach SGANC IC. Similar observations can be made in Figures 24, 25, 26 and 27. From Figure 19, we can notice that our methods are perceptually better (LPIPS and MS-SSIM) than VTM and H.265. | 1. What is the main contribution of the paper regarding image and video compression?
2. What are the strengths and weaknesses of the proposed method compared to prior learning-based compression methods?
3. How does the reviewer assess the validity and reliability of the empirical results, considering factors that may favor the proposed method?
4. What are the limitations and potential biases of the baselines used for comparison, and how could their implementation impact the evaluation?
5. What are the assumptions and hypotheses underlying the proposed method, and how do they affect its effectiveness and generalizability?
6. How can the authors improve the discussion of the pros and cons of the proposed method, including a more nuanced analysis of the role of StyleGAN in the approach?
7. What additional information or experiments could help better evaluate the effectiveness and perceptual quality of the proposed method?
8. Are there any minor suggestions or improvements that could enhance the clarity and completeness of the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces a new learning based approach for compressing facial images / videos. The main idea is to exploit off-the-shelf image generation models like StyleGAN as the image prior and learn to compress the latent code generated by the corresponding image inversion model (e.g. StyleGAN encoder). Compared with existing works in learned image compression, the proposed method re-uses existing encoder and decoder instead of learning them jointly with the entropy coding model. The authors further extend the approach to video coding by compressing the residual of the latent code between adjacent frames. Empirical results on both static images and videos show that the proposed method outperforms state-of-the-art codecs in perceptual metrics such as LPIPS and PIM.
Review
This paper exploits the idea of using StyleGAN as the image prior to improve image and video compression. While the empirical results seem promising, some important information is not sufficiently covered in the paper:
It is unclear why the proposed method performs better than prior learning based compression methods. The main claim of this paper is that an off-the-shelf encoder and decoder perform better than those that are learned end-to-end in learned image compression. This claim is somewhat counterintuitive and should be examined and justified more carefully. For example, do the results imply that 1) there is not enough data to learn the encoder, decoder, and entropy coding end-to-end, 2) existing models do not have sufficient capacity for optimal performance, or 3) existing works use sub-optimal losses for compression, etc.?
Also, there are some factors that may favor the proposed method in the evaluation and should be excluded: 1) the proposed method is trained on more training data, i.e. FFHQ + Celeba-HQ than the baselines, 2) the quantitative results are computed on images generated by StyleGAN, which have a different distribution compared with real images on which the baselines are optimized.
Finally, the authors should provide sufficient details about the baselines' implementation, e.g. the parameters for traditional codecs, how the models were optimized for learning based methods.
There is no discussion about the limitation of the StyleGAN based encoder / decoder. The approach is based on two hypotheses: 1) every image can be generated by StyleGAN, and 2) the inversion model can perfectly invert the image to a latent code for StyleGAN. These are very strong hypotheses, as mentioned in the paper, and there's little information about how likely they are true. Therefore, they should be taken into consideration when discussing the pros and cons of the proposed method. For example, it is not fair to evaluate the results using the output of StyleGAN, as StyleGAN might alter the data distribution while the goal should be compressing the real data. Also, the authors should try to discuss and / or quantify how well existing StyleGAN model and StyleGAN encoder cover real data and how they affect perceptual quality. For example, how significant the difference between "Original" and "Projected" in Figure 12 is? This information is necessary for evaluating the effectiveness of the proposed method.
Besides the questions above, there are also some minor suggestions for the paper:
The qualitative results are hard to read even after zooming-in. A higher resolution version in the appendix might help.
A short description of the references in the implementation details may help the completeness of the paper.
ICLR | Title
Learning Perceptual Compression of Facial Video
Abstract
We propose in this paper a new paradigm for facial video compression. We leverage the generative capacity of GANs such as StyleGAN to represent and compress each video frame (intra compression), as well as the successive differences between frames (inter compression). Each frame is inverted in the latent space of StyleGAN, where the optimal compression is learned. To do so, a diffeomorphic latent representation is learned using a normalizing flows model, where an entropy model can be optimized for image coding. In addition, we propose a new perceptual loss that is more efficient than other counterparts (LPIPS, VGG16). Finally, an entropy model for video inter coding with residual is also learned in the previously constructed latent space. Our method (SGANC) is simple, faster to train, and achieves better results for image and video coding compared to state-of-the-art codecs such as VTM, AV1, and recent deep learning techniques.
1 INTRODUCTION
With the explosion of videoconferencing, the efficient transmission of facial video is a key industrial problem. Image compression can be formulated as an optimization problem with the objective of finding a codec which reduces the bit-stream size for a given distortion level between the reconstructed image at the receiver side and the original one. The distortion mainly occurs due to the quantization in the codec, because the entropy coding method (Rissanen & Langdon, 1979; Gray, 2011) requires the discrete data to create the bit-stream. The compression quality of the codec depends on the modeling of the data distribution close to the real data distribution since the expected optimal code length is lower bounded by the entropy (Shannon, 1948). Existing compression methods suffer from various artefacts. Especially at very low bits-per-pixel (BPP), blocks and blur degrade the image quality, leading to a poorly photorealistic image.
Motivated by the appealing properties of the StyleGAN architecture for high quality image generation, we propose a new compression method for facial images and videos. The intuition is that the GAN latent representation associated to any face image is somehow disentangled, and a perceptual compression method should be easier to train using the latent, especially for inter-coding. In addition, at extremely low BPP, a compressed latent code should always lead to a photorealistic image, hence leading to a compression technique that is perceptually more pleasant. However, we acknowledge that the method relies on a strong hypothesis that a GAN inversion technique can approximate accurately any given image. While this hypothesis is currently strong, we note that the performances of GAN inversion and generation methods have significantly improved recently. We take the leap of faith that the hypothesis will be valid in the near future.
For real natural images, StyleGAN encoders (GAN inversion) (Richardson et al., 2021; Wei et al., 2021) have been proposed to project any image onto the latent space of StyleGAN. Our objective is to compress the latent representations. However, retraining the encoder and the generator for different BPP is computationally heavy and costly, especially when using heavy perceptual distortion losses in the image space (i.e., VGG16, LPIPS). In order to overcome the above challenges, we propose to learn a new proxy latent space representation using a diffeomorphic mapping function. This new latent representation is learned precisely to optimize compression efficiency. The approach is illustrated in Figure 1. The advantages of such a space are two-fold: (a) it makes the approach very efficient, as we can use off-the-shelf pretrained StyleGAN encoder and generator without the need to retrain them for each quality level; (b) it allows learning a space dedicated to
compression in which an optimal entropy model can be learned. Finally, we extend this approach to videos and propose a method to optimize for the inter-coding with residuals in the new latent representation, as illustrated in Figure 2.
Our contributions can be summarized as follows:
• We propose a new paradigm for facial video compression, leveraging the generative power of StyleGAN for artifact-free and high quality image compression.
• Without any GAN retraining, we propose to learn a proxy latent representation that effectively fits the entropy model, while using off the shelf pretrained StyleGAN encoder/decoder models. We propose to learn a model for the intra image coding and the inter video coding.
• We propose a new perceptual distortion loss that is more efficient to compute and leverages the multiscale and semantic representation in the latent space of StyleGAN.
• We show high quality and lower perceptually-distorted reconstructed images for low BPP compared to traditional and deep learning based methods for image compression. We show better qualitative and quantitative results compared to the most recent state-of-the-art methods H.265 and VTM for video compression.
The rest of the paper is organised as follows: section 2 presents related works on image and video compression. Our proposed method is detailed in section 3 and section 4 presents the experimental results.
2 RELATED WORK
Image Compression: Traditional image codecs such as JPEG (Wallace, 1992) and JPEG2000 (Rabbani, 2002) are carefully human-engineered to achieve very effective performance. However, they are not data-dependent and are limited to linear transformations. Recently, deep learning based codecs (Ballé et al., 2017; 2016; Agustsson et al., 2019; Mentzer et al., 2020) have gained significant traction in the community due to their superior performance and ability to learn data-dependent non-linear transformations. These approaches learn nonlinear transformations of the input data and an entropy model jointly, with the objective of an optimal trade-off between efficient compression and reconstruction quality. Specifically, they minimize the following rate-distortion loss:
L = −E[log2Pz] + λE[d(x, x̂)], (1)
where x, x̂ are the original and the reconstructed images, z is the corresponding latent representation, Pz(·) is the entropy model of the latent distribution and d(·, ·) is a distortion loss. To achieve an optimal rate-distortion trade-off, several methods use variational autoencoder (VAE) type architectures (Ballé et al., 2016; Minnen et al., 2018; Cheng et al., 2020) and achieve impressive performance at higher BPP; however, they are sub-optimal at low BPP. Some papers have targeted the entropy model: Ballé et al. (2018) propose to use a hyper prior as a source of side information to capture spatial dependencies. Minnen et al. (2018) follow a similar approach and augment the hierarchical model with an autoregressive one. Cheng et al. (2020) build on the previous approach with a discretized Gaussian mixture entropy model with attention modules.
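As a concrete illustration of how this rate-distortion objective of equation 1 is assembled (a sketch of ours, not any particular codec), the analysis/synthesis transforms and the entropy model below are trivial stand-ins; only the combination of rate and distortion is meant to be faithful.

import numpy as np

rng = np.random.default_rng(0)
encode = lambda x: x.reshape(-1)[:64]                 # stub analysis transform
decode = lambda z: np.resize(z, (8, 8, 3))            # stub synthesis transform
log2_Pz = lambda z: -0.5 * z ** 2 / np.log(2) - 0.5 * np.log2(2 * np.pi)  # stub entropy model

def rd_loss(x, lam=0.01):
    z = encode(x) + rng.uniform(-0.5, 0.5, 64)        # noise-relaxed quantization
    rate = -np.sum(log2_Pz(z))                        # -E[log2 P_z], in bits
    distortion = np.mean((x - decode(z)) ** 2)        # E[d(x, x_hat)], here MSE
    return rate + lam * distortion

print(rd_loss(rng.normal(size=(8, 8, 3))))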
Other papers have improved the model structure and the architecture (Li et al., 2018; Cheng et al., 2018; 2019) and some used recurrent neural networks (Toderici et al., 2017; Johnston et al., 2018). Others, have leveraged the power of generative adversarial networks (GAN) (Goodfellow et al., 2014) and used adversarial losses, especially for low bitrates (Rippel & Bourdev, 2017; Agustsson et al., 2019). These approaches produce high quality and lower perceptually-distorted image reconstructions (Santurkar et al., 2018; Mentzer et al., 2020), even for high bitrates (Tschannen et al., 2018). These methods are computationally heavy as they require adversarial training for each quality level. We argue that this is not practical especially for data compression, where each quality level requires to train a new model from scratch.
The choice of distortion loss is crucial for better reconstruction; traditionally, PSNR or MS-SSIM (Wang et al., 2003) are used. These metrics capture pixel-wise distortion but poorly capture perceptual distortion. Moreover, Blau & Michaeli (2018) show that there is a trade-off between pixel-wise distortion and perceptual quality. This observation is seen clearly at low BPP, where traditional codecs favor blocking artifacts and deep compression systems show blurring and other types of artifacts.
Several works have tried to remedy these limitations; motivated by the success of perceptual losses such as LPIPS (Zhang et al., 2018) or VGG16 (Johnson et al., 2016) in other applications (Ledig et al., 2017; Dosovitskiy & Brox, 2016; Gatys et al., 2016), some papers (Santurkar et al., 2018; Chen et al., 2020) propose to include a perceptual distortion loss in addition to the pixel-wise ones. These perceptual distortion losses are based on computationally expensive networks such as VGG16 or learned perceptual metrics such as LPIPS. Moreover, the backbone networks are pretrained for an unrelated, discriminative task and on different datasets, such as image classification on ImageNet. To remedy this, we propose an alternative and more efficient perceptual distortion loss in the latent space.
Video Compression: Deep Video Compression systems also minimize the rate-distortion loss equation 1. In addition to the spatial redundancy (SR), temporal redundancy (TR) is reduced by incorporating motion estimation modules. Traditional methods (Wiegand et al., 2003; Sullivan et al., 2012b) rely on handcrafted and block based modules. Recently, deep learning based video compression systems proposed to replace the traditional modules by learned ones (Lu et al., 2019). Motion estimation (Dosovitskiy et al., 2015) and compensation are often used to address TR. Several improvements have been made to reduce TR; Lin et al. (2020) leverage multiple frames to improve the motion compensation, and Hu et al. (2020) use multi resolution flow maps to effectively compress locally and globally. However, training motion estimation modules is difficult since they require large annotated data. Some approaches perform TR reduction in latent space (Feng et al., 2020; Djelouah et al., 2019; Hu et al., 2021). Others propose to do frames interpolation (Djelouah et al., 2019), or an interpolation in the latent space of GANs (Santurkar et al., 2018).
3 LEARNING PROXY REPRESENTATION FOR COMPRESSION
In this section, we describe our proposed method to learn a latent space dedicated to compression. The schematic diagram is illustrated in Figures 1 and 2. The method can be summarized as follows: an input facial video frame is first projected into the latent space of StyleGAN, W+. Our method consists of learning a transformation T from the original latent space W+ to a new proxy representation denoted as W∗c. The transformation T is defined as a diffeomorphic mapping (i.e., a normalizing flow) and is learnt so that both intra image compression and inter video compression
are optimal in W∗c. It is noted that learning a proxy representation in W∗c for compression is also motivated by the fact that the rate-distortion loss cannot be optimized in the original space W+ without retraining the encoder/generator. Below, we first briefly describe some background material (section 3.1), and then present our methods for images, or intra compression (section 3.2), and for videos, or inter compression (section 3.3).
3.1 BACKGROUND
StyleGAN Generator StyleGAN (Karras et al., 2019) is the state of the art unconditional GAN for high quality realistic image generation. It consists of a mapping function that takes a vector sampled from a normal distribution (z ∼ N (0, I)) and maps it to an intermediate latent space W using a fully connected network (M ) before feeding it to multiple stages (i.e., W+ space) of the generator (G) to generate an image (x) from the distribution of the real images (x ∼ px):
x = G(w) w = M(z) z ∼ N (0, I) (2)
It is shown that the latent space of GAN is semantically rich and enjoys several properties such as semantic interpolation (Radford et al., 2016). In addition, the latent vector encoded in W+ space of StyleGAN captures a hierarchical representation of the projected image. In our case, we use StyleGAN2 (Karras et al., 2020) which is an improved version.
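The toy snippet below (ours) only illustrates the shape bookkeeping of equation 2 and the W vs. W+ distinction: a 512-dimensional mapped latent is repeated for each of the 18 generator layers. M and G are stand-ins, not the real StyleGAN2 modules.

import numpy as np

rng = np.random.default_rng(0)
M = lambda z: np.tanh(z)                             # stub mapping network z -> w
G = lambda w_plus: rng.normal(size=(1024, 1024, 3))  # stub generator W+ -> image

z = rng.normal(size=512)          # z ~ N(0, I)
w = M(z)                          # w in W, shape (512,)
w_plus = np.tile(w, (18, 1))      # W+ code, shape (18, 512), one row per generator layer
x = G(w_plus)
print(w_plus.shape, x.shape)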
StyleGAN Encoder The StyleGAN encoder (Richardson et al., 2021) is a deterministic function denoted as E. Its role is to project a real image into the latent space of StyleGAN (e.g., W or W+), in such a way that the reconstructed image by the StyleGAN generator is minimally distorted (x̂ = G(E(x)) ≈ x). In our case, the image is projected in W+ with dimension of (18 × 512) where each dimension controls a different convolution layer of the StyleGAN generator. Currently the encoding based GAN inversion approaches are not ideal, which explains the difference between the projected and the original image.
3.2 IMAGE COMPRESSION (SGANC)
We assume that we have a pretrained and fixed StyleGAN2 generator G that takes a latent code w ∈ W+ and generates a high-resolution image of size 1024 × 1024, and an encoder E (pretrained and fixed) that embeds any given image x to the latent code w in W+ such that G(E(x)) ≈ x. Our objective is to learn a new latent space W∗c that is optimal for image compression. W∗c is obtained using a bijective transformation T : W+ → W∗c, which is parametrized as a normalizing flows model (note that our work only requires bijectivity; as such, no maximum-likelihood term is included in the training objective). T maps a latent code w ∈ W+ into w∗c ∈ W∗c such that the latent vector w∗c has minimum entropy and sufficient information to generate the original image with minimal distortion from the inverted latent code T−1(w∗c) using G. The entropy model is trained on the latent codes w∗c ∈ W∗c by minimizing the following rate loss after quantization:
$R = -E\Big[\sum_{i=1}^{D} \log_2 p_i\big(Q(T(w))\big)\Big] = -E\Big[\sum_{i=1}^{D} \log_2 p_i\big(T(w) + \epsilon\big)\Big] \quad (3)$
For equation 3 to be differentiable, following (Ballé et al., 2017), we relax the hard quantization Q by adding uniform noise to the latent vectors T(w). Here $p_i$ is the i-th dimension of the probability density function in W∗c, D is the latent vector dimension, w = E(x) where x is the input image, and $\epsilon$ is sampled from the uniform distribution $\mathcal{U}_{[-0.5,0.5]}$. The entropy model p is modeled as the fully factorized entropy model and is also parameterized by a neural network, as in (Ballé et al., 2018). The transformed and quantized latent codes should also retain sufficient information to reconstruct the image, which is achieved by minimizing a distortion loss. In general the distortion loss is computed in the image space; instead, we propose a new, efficient distortion loss directly in the latent space:
$D = d\big(w,\, T^{-1}(T(w) + \epsilon)\big), \quad (4)$
where d is any distortion loss; in this paper we use the mean squared error (MSE) loss. Equation 4 can be seen as a perceptual distortion loss, motivated by the fact that the latent space of GANs is semantically rich. We argue that this is especially true for StyleGAN, where the latent code is extracted from several layers of the StyleGAN encoder, which allows capturing multiscale and semantic features/representations. Compared to existing perceptual losses, our loss does not require generating images during training or computing heavy losses such as VGG16 or LPIPS in the image space. The total loss used to learn our proposed latent space W∗c is a trade-off between the rate (equation 3) and the distortion (equation 4), as shown below:
$L = R + \lambda D \quad (5)$
where λ is the trade-off parameter; the transformation T and the entropy model p are jointly optimized. Once the optimization is completed, the latent codes in W∗c are quantized with Q using the rounding operator to create the bit-stream.
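The following self-contained sketch (ours) shows the structure of the intra-coding objective of equations 3-5. A single affine coupling layer stands in for the 13-layer Real NVP of the paper and a unit-Gaussian density stands in for the learned factorized entropy model; the hidden size and other details are assumptions, not the paper's implementation.

import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim=512, hidden=512):
        super().__init__()
        self.half = dim // 2
        self.scale = nn.Sequential(nn.Linear(self.half, hidden), nn.LeakyReLU(),
                                   nn.Linear(hidden, self.half), nn.Tanh())
        self.translate = nn.Sequential(nn.Linear(self.half, hidden), nn.LeakyReLU(),
                                       nn.Linear(hidden, self.half))

    def forward(self, w):                      # W+  ->  W*c
        a, b = w[..., :self.half], w[..., self.half:]
        s, t = self.scale(a), self.translate(a)
        return torch.cat([a, b * torch.exp(s) + t], dim=-1)

    def inverse(self, y):                      # W*c ->  W+
        a, b = y[..., :self.half], y[..., self.half:]
        s, t = self.scale(a), self.translate(a)
        return torch.cat([a, (b - t) * torch.exp(-s)], dim=-1)

def rate_bits(y):                              # stand-in for the learned factorized entropy model
    return (0.5 * y ** 2 / math.log(2) + 0.5 * math.log2(2 * math.pi)).sum()

T = AffineCoupling()
opt = torch.optim.Adam(T.parameters(), lr=1e-4)
lam = 1.0

w = torch.randn(8, 18, 512)                    # a batch of W+ latent codes
y = T(w) + (torch.rand_like(w) - 0.5)          # noise-relaxed quantization of T(w)
loss = rate_bits(y) + lam * ((w - T.inverse(y)) ** 2).mean()   # equation 5: R + lambda * D
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))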
3.3 VIDEO COMPRESSION (SGANC IC)
Here, we propose our approach for video compression, denoted as SGANC IC, using learned inter coding with residuals. Video compression methods and standards rely on motion estimation and compensation modules to leverage the temporal dependencies between frames. As our approach is formulated in the latent space, our inter-coding scheme is based on successive latent differences, since these differences reflect the temporal changes. We argue that this leads to efficient compression due to the good properties of the latent space of GANs (e.g., linear interpolation). Specifically, inter coding with residuals is performed in the latent space (Figure 2). Given a sequence of frames {x1, x2, ..., xt−1, xt, ...}, a pretrained encoder E is first used to obtain the latent representations. Then, similarly to intra coding, a transformation T is learned to map these frames to W∗IC, leading to a sequence of latent codes {w∗1, w∗2, ..., w∗t−1, w∗t, ...}. The mapping T is learnt such that the sequence of latent codes in W∗IC is optimal for inter coding, taking the temporal dependencies into account. Our approach can be summarized as follows (the complete description of the algorithm can be retrieved in appendix A.2):
• The first latent code is coded using the method described in section 3.2: ŵ∗0 (using the same entropy model or preferably another one trained for image compression). The following steps are repeated until the end of the video;
• The differences between consecutive latent codes are computed and quantized to obtain v̂t = Q(w∗t − w∗t−1).
• From the previous reconstructed code, an estimate of the latent code at frame t is obtained as w̄∗t = ŵ∗t−1 + v̂t.
• The residual between the estimated and the actual latent code is computed and quantized as r̂t = Q(w∗t − w̄∗t), either for all frames or once every g frames.
• The quantized difference v̂t and the residual r̂t are compressed using entropy coding and sent to the receiver.
• On the receiver side, the current latent code is reconstructed from the residual and the estimated latent code (or only from the estimated latent code for frames where no residual is sent): ŵ∗t = w̄∗t + r̂t.
• The latent codes in W∗IC are remapped to W+ to generate the images using the pretrained StyleGAN2 generator G.
To learn the new latent space W∗IC for inter coding, the transformation T and the entropy model p are learned to optimize the rate-distortion loss:
$\mathcal{L}_{IC} = d\big(w_t,\, T^{-1}(\hat{w}^*_t)\big) - \lambda\Big(E\Big[\sum_{i=1}^{D} \log_2 p_i(\hat{v}_t)\Big] + E\Big[\sum_{i=1}^{D} \log_2 q_i(\hat{r}_t)\Big]\Big) \quad (6)$
Similar to section 3.2, we replace the quantization Q by adding uniform noise, and the entropy model is modeled as in (Ballé et al., 2018). While equation 6 has two entropy models, we show below that it is sufficient to learn only one entropy model on the differences v̂t, as r̂t admits an explicit probability distribution known as the Irwin-Hall distribution (the proof of the following Lemma 1 can be found in appendix A.1).
Lemma 1. Let $Q(x) = x + \epsilon$ be the continuous relaxation of the quantization, where $\epsilon$ follows the uniform distribution $\mathcal{U}_{[-0.5,0.5]}$. Let $w^*_0 \in \mathbb{R}^n$, $\hat{w}^*_0 = Q(w^*_0)$, and $\bar{w}^*_t = \hat{w}^*_{t-1} + Q(w^*_t - w^*_{t-1})$ for $t = 1, \dots, n$. If $\hat{r}_t = Q(w^*_t - \bar{w}^*_t)$ is the quantized residual defined for every $t$ such that $t \equiv 0 \pmod{g}$, then $\hat{r}_t$ follows the Irwin-Hall distribution with parameter $3 + (g-1)$ and support shifted by $-0.5 \times (3 + (g-1))$.
This leads to optimizing our latent space W∗IC for the rate-distortion loss with only one learned entropy model, for v̂t, by discarding the last term in equation 6. During test, the explicit distribution of the residuals can be used for entropy coding.
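The sketch below (ours) shows one way the closed-form distribution of Lemma 1 could be turned into a discrete probability table for entropy coding at test time, where residuals take integer values: each symbol gets the Irwin-Hall mass of its unit-width bin. The truncation to max_symbol is an assumption for illustration.

import math

def irwin_hall_cdf(x, n):
    """CDF of the sum of n independent U[0,1] variables, for x clamped to [0, n]."""
    x = max(0.0, min(float(x), float(n)))
    total = 0.0
    for k in range(int(math.floor(x)) + 1):
        total += (-1) ** k * math.comb(n, k) * (x - k) ** n
    return total / math.factorial(n)

def residual_pmf(g, max_symbol=4):
    """P(r_hat = m) for integer symbols m, with the support shifted by -n/2."""
    n = 3 + (g - 1)
    shift = 0.5 * n
    return {m: irwin_hall_cdf(m + 0.5 + shift, n) - irwin_hall_cdf(m - 0.5 + shift, n)
            for m in range(-max_symbol, max_symbol + 1)}

for g in (1, 10):
    p = residual_pmf(g)
    print(g, {m: round(q, 4) for m, q in p.items() if q > 1e-4}, "sum:", round(sum(p.values()), 4))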
Stage-specific entropy models: In Karras et al. (2019), it is shown that each stage/layer of the StyleGAN generator corresponds to a specific scale of details. To leverage this hierarchical structure, we propose stage-specific entropy models. Specifically, the first layers, which correspond to coarse resolutions (e.g., 4²-8²), mainly affect high-level aspects of the image, such as the pose and face shape, while the last layers affect low-level aspects such as textures, colors and small micro-structures. Here, we propose to leverage this hierarchical structure and weight the distortion λ differently for each layer of the generator (note that the latent code in W+ or W∗IC consists of 18 latent codes of dimension 512, each corresponding to one layer of the generator, hence its dimension is (18, 512)). For practical reasons, we split the 18 layers into three stages (1-8, 8-13, 13-18), and learn one transformation and one entropy model for each group. The distortion λ is chosen to be higher for the first layers and decreases subsequently.
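A small sketch (ours) of the stage-specific weighting described above: the 18 layers are split into three groups and the distortion weight is scaled per group. The exact group boundaries and decay schedule are our reading of the text, so treat them as assumptions.

import numpy as np

stages = [(0, 8), (8, 13), (13, 18)]          # coarse, middle, fine layer groups
stage_w = [1.0, 0.3, 0.01]                     # w decreasing from 1 to 0.01
base_lambda = 1.0

def stage_distortion(w_plus, w_rec):
    """Per-stage weighted latent MSE for one (18, 512) code."""
    total = 0.0
    for (lo, hi), sw in zip(stages, stage_w):
        total += base_lambda * sw * np.mean((w_plus[lo:hi] - w_rec[lo:hi]) ** 2)
    return total

rng = np.random.default_rng(0)
w_plus = rng.normal(size=(18, 512))
print(stage_distortion(w_plus, w_plus + 0.01 * rng.normal(size=(18, 512))))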
4 EXPERIMENTS
4.1 DATASETS
Celeba-HQ (Karras et al., 2018), a dataset consisting of 30000 high-quality face images at 1024×1024 resolution.
FILMPAC, a dataset consisting of 5 high-resolution video clips whose lengths vary between 60 and 260 frames. These videos can be found on the filmpac website1 by searching their names (FP006734MD02, FP006940MD02, FP009971HD03, FP010363HD03, FP010263HD03).
MEAD (Wang et al., 2020), a high-resolution talking-face video corpus covering many actors with different emotions and poses. The training dataset for inter coding consists of 2.5k videos with frontal poses. For evaluation we created the MEAD-inter dataset, consisting of 10 videos (selected from MEAD) of different actors with frontal pose. We also created the MEAD-intra dataset, consisting of 200 frames selected from these videos with frontal pose, for evaluating image compression methods.
Dataset preprocessing: All the frames are cropped and aligned using the same preprocessing method as that of the FFHQ dataset (Karras et al., 2019), on which StyleGAN is pretrained. As we compare the reconstructed image with the projected one for SGANC, we project all the frames (i.e., encode the original images and reconstruct them using StyleGAN2). All frames are of high resolution (1024 × 1024).
4.2 IMPLEMENTATION DETAILS
We used the StyleGAN2 generator (G) (Karras et al., 2020) pretrained on the FFHQ dataset (Karras et al., 2019). The images are encoded in W+ using a pretrained StyleGAN2 encoder (E) (Richardson et al., 2021). The parameters of the generator and the encoder remain fixed in all the experiments. The latent vector dimension in W+ and W∗c is 18 × 512. The mapping function T is modeled using the Real NVP architecture (Dinh et al., 2017) without batch normalization. It consists of 13 coupling layers, and each coupling layer consists of 3 fully connected (FC) layers for the translation function and 3 FC layers for the scale function, with LeakyReLU as the hidden activation and Tanh as the output activation (total number of trainable parameters: 20.5 M). For the entropy model, we have used the fully factorized entropy model (Ballé et al., 2018) based on the implementation from the CompressAI library (Bégaint et al., 2020). For the stage-specific entropy model, λ is kept constant for the first stage and then decreased linearly to be 1e−2 smaller for the last layer. For all the experiments, we used the Adam optimizer with β1 = 0.9, β2 = 0.999, a learning rate of 1e−4 and a batch size of 8. Once the training is completed, we used a range Asymmetric Numeral System coder (Duda, 2013) to obtain the bit-stream.
1 https://filmpac.com/
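As a quick sanity check of the reported model size (ours; it assumes every FC layer maps 512 to 512 with a bias, which the text does not state explicitly), the parameter count of 13 coupling layers with 6 FC layers each reproduces roughly 20.5 M:

dim = 512
fc_params = dim * dim + dim                 # weight + bias of one FC layer
per_coupling = 6 * fc_params                # 3 FC (translation) + 3 FC (scale)
total = 13 * per_coupling
print(total, total / 1e6)                   # 20487168 parameters, i.e. about 20.5 M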
Training: We encoded all the images once, and the training is performed solely on the latent codes. Thus, we need neither the generator nor the encoder during training, which makes the approach light and fast to train. For SGANC, we train on the Celeba-HQ dataset. For SGANC IC, we trained on 2.5k videos from the MEAD dataset (Wang et al., 2020), where each batch contains video slices of 9 frames. All the frames are preprocessed as in section 4.1.
4.3 RESULTS
In this section we present the experimental results of our proposed method for image and video compression. We used the following metrics to assess the quality of the compression methods: Peak Signal to Noise Ratio (PSNR), Multi Scale Structural Similarity (MS-SSIM) (Wang et al., 2003), Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018), and the Perceptual Information Metric (PIM) with a mixture of 5 Gaussians (Bhardwaj et al., 2020).
LPIPS and PIM are two perceptual metrics; PIM was recently proposed and follows an information-theoretic approach inspired by the human perceptual system (Bhardwaj et al., 2020). The authors of LPIPS, on the other hand, show that it is consistent with human perception. We report the size of the compressed images in bits per pixel (BPP), which is common in evaluating image and video compression.
To exclude the distortion coming from the GAN inversion technique, all the frames are projected (i.e., inverted) using the encoder. The projected frames are used as the baseline for all approaches in the main paper when computing the distortion metrics. For the sake of completeness, we also present evaluation results with the original images in appendix A.4.
4.3.1 IMAGE COMPRESSION
We compare our method with the most recent state-of-the-art codecs such as VTM (VTM, 2020), AV1 (AV1, 2018) and deep compression models such as the scale Hyper Prior (HP) (Ballé et al., 2018), the factorized entropy model with scale and mean hyperpriors (MeanHP) (Minnen et al., 2018) and the anchor variant of Cheng et al. (2020). MeanHP and (Cheng et al., 2020) are trained with the MS-SSIM objective and HP with the MSE (we found that this leads to better results) on the Celeba-HQ dataset. All the methods are evaluated on the independent evaluation datasets MEAD-intra and FILMPAC.
Quantitative results: We present the rate-distortion curves for different BPP on the MEAD-intra dataset in Figure 3. As can be observed, for the perceptual metrics LPIPS and PIM, our method outperforms the state-of-the-art methods by a large margin. Deep learning based methods are better than traditional codecs at high BPP but are inferior at lower BPP. This observation highlights the potential of our method to obtain reconstructions with good perceptual quality. For classical metrics, our method is the best at medium and high BPP for MS-SSIM, and has the highest PSNR at high BPP. It is noted that the deep learning based competitors trained with the MS-SSIM objective are still not able to outperform the traditional codecs VTM and AV1. The above observations also hold on the FILMPAC dataset; please refer to figure 17 in the appendix.
Qualitative results: Next we discuss the qualitative results, shown in figure 4. Visual inspection reveals that, at a comparable BPP, AV1 introduces blocking artifacts while VTM generates blurry results. The reconstruction of MeanHP is not sharp enough and shows color distortion. Our method delivers a near-perfect reconstruction of the projected image and preserves all the details. For example, the details of the eyebrows are preserved, whereas the existing methods fail. Additional images are displayed in appendix A.4.
4.3.2 VIDEO COMPRESSION
We compare our method (SGANC IC) with two of the most recent and best-performing state-of-the-art methods: the Versatile Video Coding Test Model (VTM) (VTM, 2020) and the H.265 standard (Sullivan et al., 2012a). We used the official implementations for both VTM (random access with GOP = 16) and H.265 (FFmpeg library). Each method is evaluated on the MEAD-inter dataset.
Quantitative results: Figure 6 presents the quantitative evaluation curves on the MEAD-inter dataset. The computed metrics are first averaged across the frames in each video, and then averaged across all the videos. Similar to the observations in section 4.3.1, our proposed method for video compression achieves impressive performance and significantly outperforms the state-of-the-art methods with respect to perceptual metrics; with the perceptual distortion metric PIM in particular, our method is significantly better. In terms of PSNR, our method is comparable with H.265 at low bit-rates and outperforms both methods at high BPP.
Qualitative results: Figure 5 compares the reconstruction quality of all the methods at a comparable BPP on a single frame extracted from the compressed video. Our method is almost artifact-free, photorealistic and perceptually more pleasant, while VTM leads to blurry results and H.265 exhibits blocking artifacts. For more visual results please refer to appendix A.4. Note that, despite the high quality of our method, the reconstruction of StyleGAN (projected vs. original) is still not perfect. Once this factor is eliminated, the distortion coming from the quantization (projected vs. our method) is negligible, which makes our method very promising given the rapid improvement of GAN generation and inversion.
The ablation studies related to the different distortion losses, the alternative choices of transformation T and other design choices are discussed in the appendix A.3 due to the space limitation.
5 CONCLUSION
We have proposed in this paper a new paradigm for facial image and video compression based on GANs. Our framework is efficient to train and leads to perceptually competitive results compared to the most efficient state-of-the-art compression systems. We believe that at low bitrates, our solution leads to a different and more acceptable type of distortion, since the reconstructed image is very sharp and photorealistic. Our approach is not restricted to faces, since GANs have been proposed for various natural objects. For a specific category of object for which a GAN has not already been trained, our approach could be used after training the specific GAN as well as the compression bottleneck. The main limitation of our method is the approximation of any input image using StyleGAN. We believe that the continuous and impressive improvements of GAN inversion and generation will mitigate this limitation in the near future.
A APPENDIX
The Appendix is organized as follows: in section A.1, we prove that the distribution of residuals follows the known Irwin-Hall distribution; in section A.2 we detail the algorithms for training and testing of the inter coding with residual approach; in section A.3 we detail the ablation studies for image and video compression. Finally, in section A.4 we provide more results.
A.1 EXPLICIT DISTRIBUTION OF THE RESIDUALS: PROOF OF LEMMA 1
Let $Q(x) = x + \epsilon$ be the continuous relaxation of the quantization, where $\epsilon$ follows the uniform distribution $\mathcal{U}_{[-0.5,0.5]}$. Let $t$ be the frame index, ranging over $\{0, 1, \dots, K-1\}$.
Let $w^*_0 \in \mathbb{R}^n$ and $\hat{w}^*_0 = Q(w^*_0)$. Let us define
$\hat{v}_t = Q(w^*_t - w^*_{t-1}) = w^*_t - w^*_{t-1} + \epsilon, \qquad \bar{w}^*_t = \hat{w}^*_{t-1} + \hat{v}_t. \quad (7)$
Similarly, we define
$\hat{r}_t = \begin{cases} Q(w^*_t - \bar{w}^*_t) & \text{if } t \equiv 0 \pmod{g} \\ 0 & \text{otherwise.} \end{cases} \quad (8)$
We now prove that $\hat{r}_t$ follows the Irwin-Hall, or uniform sum, distribution (Johnson et al., 1995). We have
$\hat{r}_t = Q(w^*_t - \bar{w}^*_t) = w^*_t - \bar{w}^*_t + \epsilon_1 = w^*_t - (\hat{w}^*_{t-1} + \hat{v}_t) + \epsilon_1 = w^*_t - (\hat{w}^*_{t-1} + w^*_t - w^*_{t-1} + \epsilon_2) + \epsilon_1 = w^*_{t-1} - \hat{w}^*_{t-1} - \epsilon_2 + \epsilon_1. \quad (9)$
For $t \equiv 0 \pmod{g}$, the reconstructed latent code is computed from the residual and can be written as
$\hat{w}^*_t = \hat{r}_t + \bar{w}^*_t = Q(w^*_t - \bar{w}^*_t) + \bar{w}^*_t = w^*_t - \bar{w}^*_t + \epsilon + \bar{w}^*_t = w^*_t + \epsilon. \quad (10)$
Otherwise it is the estimated latent code $\bar{w}^*_t$, so $\hat{w}^*_t$ can be written as
$\hat{w}^*_t = \begin{cases} w^*_t + \epsilon & \text{if } t \equiv 0 \pmod{g} \\ \bar{w}^*_t & \text{otherwise.} \end{cases} \quad (11)$
For $g = 1$, every index satisfies $t - 1 \equiv 0 \pmod{g}$, so equation 9 becomes
$\hat{r}_t = w^*_{t-1} - \hat{w}^*_{t-1} - \epsilon_2 + \epsilon_1 = w^*_{t-1} - (w^*_{t-1} + \epsilon_3) - \epsilon_2 + \epsilon_1 = -\epsilon_3 - \epsilon_2 + \epsilon_1, \quad (12)$
a sum of three independent $\mathcal{U}_{[-0.5,0.5]}$ variables (flipping the sign of a symmetric uniform variable does not change its distribution).
For $g > 1$, let us define the quantity
$\hat{m}_t = \begin{cases} \hat{r}_t & \text{if } t \equiv 0 \pmod{g} \\ Q(w^*_t - \bar{w}^*_t) & \text{otherwise.} \end{cases} \quad (13)$
Using equation 11 and equation 13, and noting that $t - 1 \not\equiv 0 \pmod{g}$ implies $\hat{w}^*_{t-1} = \bar{w}^*_{t-1}$, we can unroll equation 9 over the $g-1$ frames that carry no residual (relabelling the independent noise terms as $\epsilon_i$):
$\hat{r}_t = w^*_{t-1} - \hat{w}^*_{t-1} - \epsilon_2 + \epsilon_1 = w^*_{t-1} - \bar{w}^*_{t-1} - \epsilon_2 + \epsilon_1 = \hat{m}_{t-1} - \epsilon_2 = \dots = \hat{m}_{t-(g-1)} - \sum_{i=1}^{g-1} \epsilon_i. \quad (14)$
Expanding $\hat{m}_{t-(g-1)}$ in the same way as equation 9 gives
$\hat{r}_t = w^*_{t-g} - \hat{w}^*_{t-g} - \epsilon_g + \epsilon_{g+1} - \sum_{i=1}^{g-1} \epsilon_i. \quad (15)$
As $t - g \equiv 0 \pmod{g}$ (the residual is used only when $t \equiv 0 \pmod{g}$), equation 11 lets us replace $\hat{w}^*_{t-g}$ by its relaxed approximation $w^*_{t-g} + \epsilon_{g+2}$:
$\hat{r}_t = w^*_{t-g} - (w^*_{t-g} + \epsilon_{g+2}) - \epsilon_g + \epsilon_{g+1} - \sum_{i=1}^{g-1} \epsilon_i = \epsilon_{g+1} - \epsilon_g - \epsilon_{g+2} - \sum_{i=1}^{g-1} \epsilon_i, \quad (16)$
which is a sum of $3 + (g-1)$ independent random variables, each uniform on $[-0.5, 0.5]$ up to a sign that does not affect its law. We have thus shown that $\hat{r}_t$ follows the uniform sum distribution, i.e., the Irwin-Hall distribution $IH(x; n)$ with parameter $n = 3 + (g-1)$ and support $x \in [-0.5n, 0.5n]$. Since $\epsilon$ follows $\mathcal{U}_{[-0.5,0.5]}$ rather than $\mathcal{U}_{[0,1]}$, the support of the standard Irwin-Hall distribution is shifted by $-0.5n$.
A.2 INTER CODING (SGANC IC)
In this section we present the algorithms for video compression using inter coding with residual during train and test:
Algorithm .1: SGANC IC (test time)
Result: compressed frame sequence {x̂1, x̂2, ..., x̂t, ..., x̂N}
Initialization: frame sequence {x1, x2, ..., xt, ..., xN}, N = number of frames, encoder E, generator G, transformation T, entropy coder EC and decoder ED, quantizer Q, residual-coding period g
ŵ∗0 = ED(EC(Q(T(E(x0)))))                // intra coding of the first frame
for t = 1 to N − 1:
    w∗t = T(E(xt))                        // encode the frames and map them to W∗c
    w∗t−1 = T(E(xt−1))
    v̂t = ED(EC(Q(w∗t − w∗t−1)))          // quantize, compress and decompress (receiver) the differences
    w̄∗t = ŵ∗t−1 + v̂t                     // compute an estimate of the latent code
    if t mod g == 0:
        r̂t = ED(EC(Q(w∗t − w̄∗t)))        // quantize, compress and decompress (receiver) the residual
        ŵ∗t = w̄∗t + r̂t                   // reconstruct the latent code
    else:
        ŵ∗t = w̄∗t
    x̂t = G(T−1(ŵ∗t))                     // reconstruct the image
Algorithm .2: SGANC IC (training)
Result: transformation T, entropy model p
Initialization: video dataset encoded as latent codes, where each video = {w1, w2, ..., wt, ...}; number of frames per batch N; dataset size S; encoder E; generator G
for i = 1 to S:
    L = 0
    for t = 1 to N − 1:
        w∗t, w∗t−1 = T(wt), T(wt−1)       // map the latent codes to W∗c
        v̂t = Q(w∗t − w∗t−1)               // quantize (add noise to) the differences
        w̄∗t = ŵ∗t−1 + v̂t                  // compute an estimate of the latent code
        ŵ∗t = w̄∗t
        L = L + ℓt                         // accumulate the per-frame loss ℓt (equation 17)
    update the parameters of T and p to minimize L
A.3 ABLATION STUDY
In this section we detail the ablation study for image and video compression.
A.3.1 IMAGE COMPRESSION (DISTORTION LOSS)
In this section we compare different choices of the distortion loss. Specifically, we compare the MSE loss in the image space (Img. D), the MSE loss in the latent space W+ (Lat. D), the combination of both (Lat.-Img. D), the LPIPS loss in the image space (LPIPS D) and the combination of the MSE and LPIPS losses in the image space (LPIPS-Img. D). The implementation details are the same as in section 4.2 except for the choice of the distortion loss, and we use a batch size of 4 when training with LPIPS. We compare on both the MEAD-intra and FILMPAC datasets.
Results: From Figures 7 and 8, we can notice that our loss (Lat. D) outperforms the MSE and LPIPS distortions in the image space. Moreover, the losses in the image space produce some artifacts and blurred images, while the loss in the latent space is almost artifact-free. There is no benefit in using a loss in the image space in addition to ours. Note that training using only the latent loss is faster than the others (the training took 5 days for LPIPS-Img. D, 4 days for Lat.-Img. D and 6 hours for Lat. D) and occupies less GPU memory (24 GB with batch size 4 for LPIPS-Img. D, 10 GB with batch size 4 for Img. D and Lat.-Img. D, and less than 2 GB with batch size 8 for Lat. D).
A.3.2 IMAGE COMPRESSION (TRANSFORMATION T )
In this section, we investigate the importance of using a bijective transformation (i.e., Normalizing Flows). To this end, we replace the Real NVP model with an autoencoder (AE) and retrain it with the entropy model on Celeba-HQ with the same implementation details as in section 4.2. The AE consists of 7 layers with the same dimension (i.e., 512) for the encoder and 7 layers with the same dimension for the decoder with ReLU activation functions.
Results: From Figure 9, we can notice that parametrizing the transformation T as Normalizing Flows (i.e., Real NVP) leads to better results. For instance, some facial attributes are changed when using an AE (such as the age and the skin color), as well as the person’s identity.
A.3.3 VIDEO COMPRESSION (INTER CODING: EXPLICIT SPARSIFICATION)
Having only a few dimensions that change between two consecutive latent codes is efficient for entropy coding. Thus, we investigated explicitly sparsifying these differences by adding an L1 regularization on the latent code differences during training. The loss becomes:
$\mathcal{L}_{IC\text{-}L1} = \mathcal{L}_{IC} + \lambda_{L1} \, \lVert w^*_{t-1} - w^*_t \rVert_1 \quad (17)$
Note that, as the transformation T and the entropy model are trained jointly, we expect the latent codes in W∗c to be transformed so as to fit the entropy model efficiently. In the following, we assess whether explicit sparsification brings additional improvement (Section A.3.4).
A.3.4 VIDEO COMPRESSION (INTER CODING)
In this section we present the ablation study for SGANC IC, in which we investigate the effect of the following parameters: the period g for residual coding, the L1 regularization of equation 17, residual coding (res) vs. intra coding (intra) every g frames, and stage-specific entropy models, i.e., using different entropy models and distortion weights λ for the different stages of StyleGAN2 (SS). We thus compare the following variants:
• SGANC IC intra g10: replacing the residual coding every g = 10 frames by intra coding using one of the image compression models trained in section 3.2.
• SGANC IC res g10: residual coding every g = 10 frames.
• SGANC IC res g80: residual coding every g = 80 frames.
• SGANC IC res g10 L1: adding the L1 regularization of equation 17 during training and residual coding every g = 10 frames.
• SGANC res g10 SS: using 3 entropy models and 3 T models, one for each stage of StyleGAN2 (1-7, 7-13, 13-18), and training with layer-specific distortion weights λ = wλ, where w = 1 for the first stage and decreases from 1 to 0.01 over the second and third stages.
• SGANC res g10 SS L1: residual coding every g = 10 frames, stage-specific entropy models and L1 regularization.
• SGANC res g2 SS: residual coding every g = 2 frames, stage-specific entropy models.
• SGANC res g1 SS L1: same as before but with residual coding at every frame (g = 1).
From Figure 10, we can notice the following:
• Decreasing g leads to better results, at the expense of an increased BPP, especially for g = 1.
• Significant improvements are obtained by performing residual coding (SGANC IC intra g10 vs. SGANC IC res g10) and by using stage-specific entropy models (SGANC IC res g10 vs. SGANC IC res g10 SS).
• Only a marginal improvement is obtained by adding the L1 regularization during training (SGANC res g10 vs. SGANC res g10 L1). In addition, the improvement becomes negligible when using SS entropy models (SGANC res g10 SS vs. SGANC res g10 SS L1).
A.4 MORE RESULTS
In this section we show more quantitative and qualitative results for image and video compression.
A.4.1 IMAGE COMPRESSION
In this section we show the results for image compression (Figures 11, 12, 15, 16, 13, 14). Contrary to the results presented in the main paper, here we present, for the sake of completeness, the results computed against the original images (except for our method, which is still compared against the projected images).
A.4.2 VIDEO COMPRESSION
In this section, we show quantitative and qualitative results for video compression. Contrary to our main results in the paper, we here compress the original video and compute the metrics with original frames (except for our approach still compared to the projected frames).
From Figures 18, 20, 21, 22 and 23, we can notice that VTM and H.265 introduce blocking and blurring artifacts, which is not the case for our approach SGANC IC. We can also make similar observations in Figures 24, 25, 26 and 27. From Figure 19, we can notice that our methods are better perceptually (LPIPS and MS-SSIM) than VTM and H.265. | 1. What is the main contribution of the paper regarding talking-head video compression?
2. What are the strengths of the proposed approach, particularly in its novelty and technical soundness?
3. What are the weaknesses of the paper, such as missing citations, technical contradictions, and assumptions regarding perfect reconstruction?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What additional comparisons and experiments should be conducted to improve the paper's validity and impact? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a deep learning-based method for compressing talking-head videos to very low bit rates, while maintaining perceptual quality. The main idea of the paper is to use a pre-trained StyleGAN network along with a pre-trained encoder to encode facial images into StyleGAN's w+ space. The authors then propose to learn an optimal transformation of W+ space into a diffeomorphism compression space, which enables entropy and distortion optimization coding via normalizing flows. The authors also propose a new distortion preserving loss in the latent space, which is easier to implement than the image-based pixel-wise reconstruction or perceptual losses that are used as the de-facto standard. The authors further propose methods for both intra and inter-frame coding. They evaluate their proposed method on two datasets and show SOTA performance in comparison to the recent classical and deep-learning-based video compression algorithm, especially for very low bit rate encoding.
Review
Strengths:
Novelty: The paper addresses a very relevant problem, which is pertinent to our times. There has been a huge rise in the demand for video conferencing since the pandemic. However, this isn't the first work to address this problem. It was previously addressed in Wang et al., One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing, CVPR 2021. Nevertheless, there is little work in this general research area and this work proposes a novel approach to solving the problem in comparison to the existing one and hence is sufficiently novel, overall.
Technical Soundness: The method is mostly technically sound and the authors have provided many experiments. A few exceptions are noted below in the weaknesses section.
Clarity: The paper is mostly clear and well written. Minor language-related typos exist, which can be easily fixed with a thorough editorial review.
Weaknesses:
Missing citations: The authors failed to cite the work: Wang et al., One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing, CVPR 2021, which addresses the same problem as the authors'. Nevertheless, the authors' approach to solving the problem is significantly different from that of Wang et al. and is also simpler and easier to implement using off-the-shelf pre-trained networks. Also, the authors of this work consider both intra and inter coding explicitly, which is novel. The authors should cite Wang et al.'s work along with any other pertinent prior citations from it, and thoroughly explain the differences of their approach from Wang et al.'s (and the preceding lines of work).
Technical Correctness: The authors propose to employ normalizing flows to ensure diffeomorphic mappings between the W+ and W_c spaces, but to get around the issue of making the quantization operation differentiable they approximate it by adding uniform noise during training. However, adding this noise breaks the diffeomorphic assumption and, according to the distortion loss defined in equation 4, different values from W_c are forced to map to the same value in the W+ space. How do the authors explain this technical contradiction in their proposed solution?
Perfect Reconstruction Assumption: The authors, by their own admission, rely on the pre-trained encoder being able to produce perfect reconstructions of the input image. This is a drawback of the current approach. To further quantify the effect of this assumption on image quality, the authors should also provide quantitative results comparing to the input image, besides comparisons against the reconstructed image, for both their method and the baseline video encoding methods. It is not fair on the authors' part to report only the results of comparisons against the reconstructed image.
Lack of Comparisons to SOTA: The authors lack qualitative and quantitative comparisons to the SOTA approach of Wang et al., CVPR 2021. They should provide qualitative and quantitative comparisons against this approach. Furthermore, note that Wang et al. compare against the input frame using LPIPS, and hence the authors of this work should similarly compare their encoded frames against the input frames and not just the reconstructed frames when comparing against Wang et al.'s approach. Furthermore, I would also like to see how well the proposed approach preserves the identity of the subject in the encoded video versus Wang et al.
ICLR | Title
Learning Perceptual Compression of Facial Video
Abstract
We propose in this paper a new paradigm for facial video compression. We leverage the generative capacity of GANs such as StyleGAN to represent and compress each video frame (intra compression), as well as the successive differences between frames (inter compression). Each frame is inverted in the latent space of StyleGAN, where the optimal compression is learned. To do so, a diffeomorphic latent representation is learned using a normalizing flows model, where an entropy model can be optimized for image coding. In addition, we propose a new perceptual loss that is more efficient than other counterparts (LPIPS, VGG16). Finally, an entropy model for video inter coding with residual is also learned in the previously constructed latent space. Our method (SGANC) is simple, faster to train, and achieves better results for image and video coding compared to state-of-the-art codecs such as VTM, AV1, and recent deep learning techniques.
1 INTRODUCTION
With the explosion of videoconferencing, the efficient transmission of facial video is a key industrial problem. Image compression can be formulated as an optimization problem with the objective of finding a codec which reduces the bit-stream size for a given distortion level between the reconstructed image at the receiver side and the original one. The distortion mainly occurs due to the quantization in the codec, because the entropy coding method (Rissanen & Langdon, 1979; Gray, 2011) requires the discrete data to create the bit-stream. The compression quality of the codec depends on the modeling of the data distribution close to the real data distribution since the expected optimal code length is lower bounded by the entropy (Shannon, 1948). Existing compression methods suffer from various artefacts. Especially at very low bits-per-pixel (BPP), blocks and blur degrade the image quality, leading to a poorly photorealistic image.
Motivated by the appealing properties of the StyleGAN architecture for high quality image generation, we propose a new compression method for facial images and videos. The intuition is that the GAN latent representation associated to any face image is somehow disentangled, and a perceptual compression method should be easier to train using the latent, especially for inter-coding. In addition, at extremely low BPP, a compressed latent code should always lead to a photorealistic image, hence leading to a compression technique that is perceptually more pleasant. However, we acknowledge that the method relies on a strong hypothesis that a GAN inversion technique can approximate accurately any given image. While this hypothesis is currently strong, we note that the performances of GAN inversion and generation methods have significantly improved recently. We take the leap of faith that the hypothesis will be valid in the near future.
For real natural images, StyleGAN encoders (GAN inversion) (Richardson et al., 2021; Wei et al., 2021) have been proposed to project any image onto the latent space of StyleGAN. Our objective is to compress the latent representations. However, retraining the encoder and the generator for each BPP level is computationally heavy and costly, especially when using heavy perceptual distortion losses in the image space (i.e., VGG16, LPIPS). In order to overcome the above challenges, we propose to learn a new proxy latent space representation using a diffeomorphic mapping function. This new latent representation is precisely learned to optimize the compression efficiency. The approach is illustrated in Figure 1. The advantages of such a space are two-fold: (a) it makes the approach very efficient, as we can use an off-the-shelf pretrained StyleGAN encoder and generator without the need to retrain them for each quality level; (b) it allows learning a space dedicated to
compression in which an optimal entropy model can be learned. Finally, we extend this approach to videos and propose a method to optimize for the inter-coding with residuals in the new latent representation, as illustrated in Figure 2.
Our contributions can be summarized as follows:
• We propose a new paradigm for facial video compression, leveraging the generative power of StyleGAN for artifact-free and high quality image compression.
• Without any GAN retraining, we propose to learn a proxy latent representation that effectively fits the entropy model, while using off the shelf pretrained StyleGAN encoder/decoder models. We propose to learn a model for the intra image coding and the inter video coding.
• We propose a new perceptual distortion loss that is more efficient to compute and leverages the multiscale and semantic representation in the latent space of StyleGAN.
• We show high quality and lower perceptually-distorted reconstructed images for low BPP compared to traditional and deep learning based methods for image compression. We show better qualitative and quantitative results compared to the most recent state-of-the-art methods H.265 and VTM for video compression.
The rest of the paper is organised as follows: section 2 presents related works on image and video compression. Our proposed method is detailed in section 3 and section 4 presents the experimental results.
2 RELATED WORK
Image Compression: Traditional image codecs such as JPEG (Wallace, 1992), JPEG2000 (Rabbani, 2002) are carefully human-engineered to achieve very effective performance. However, they are not data-dependent and limited to linear transformations. Recently, deep learning based codecs (Ballé et al., 2017; 2016; Agustsson et al., 2019; Mentzer et al., 2020) have gained significant attraction in the community due to their superior performance and ability to learn the data dependent non-linear transformations. These approaches learn nonlinear transformations of the input data and an entropy model jointly with the objective of optimal trade-off between efficient compression and reconstruction quality. Specifically, they minimize the following rate-distortion loss:
L = −E[log2Pz] + λE[d(x, x̂)], (1)
where x, x̂ are the original and the reconstructed images, z is the corresponding latent representation, Pz(·) is the entropy model of the latent distribution and d(·, ·) is a distortion loss. To achieve optimal rate-distortion trade-off, several methods use variational autoenconder (VAE) type architecture (Ballé et al., 2016; Minnen et al., 2018; Cheng et al., 2020) and achieve impressive performance at higher BPP, however they are sub-optimal at low BPP. Some papers have targeted the entropy model: Ballé et al. (2018) propose to use a hyper prior as a source of side information to capture the spatial dependencies. Minnen et al. (2018) follow a similar approach and augment the hierarchical model with an autoregressive one. Cheng et al. (2020) build the previous approach with a discretized Gaussian mixture entropy model with attention modules.
Other papers have improved the model structure and the architecture (Li et al., 2018; Cheng et al., 2018; 2019) and some used recurrent neural networks (Toderici et al., 2017; Johnston et al., 2018). Others, have leveraged the power of generative adversarial networks (GAN) (Goodfellow et al., 2014) and used adversarial losses, especially for low bitrates (Rippel & Bourdev, 2017; Agustsson et al., 2019). These approaches produce high quality and lower perceptually-distorted image reconstructions (Santurkar et al., 2018; Mentzer et al., 2020), even for high bitrates (Tschannen et al., 2018). These methods are computationally heavy as they require adversarial training for each quality level. We argue that this is not practical especially for data compression, where each quality level requires to train a new model from scratch.
The choice of distortion loss is crucial for better reconstruction; traditionally, PSNR or MS-SSIM (Wang et al., 2003) are used. These metrics capture the pixel-wise distortion and poorly capture the perceptual distortion. Moreover, Blau & Michaeli (2018) show that there is a trade-off between pixel-wise distortion and perceptual quality. This observation is seen clearly at low BPP, where traditional codecs favor blocking artifacts and deep compression systems show blur and other types of artifacts.
Several works have tried to remedy these limitations; motivated by the success of using perceptual losses such as LPIPS (Zhang et al., 2018) or VGG16 (Johnson et al., 2016) in other applications (Ledig et al., 2017; Dosovitskiy & Brox, 2016; Gatys et al., 2016), some papers (Santurkar et al., 2018; Chen et al., 2020) propose to include a perceptual distortion loss in addition to the pixel-wise ones. These perceptual distortion losses are based on computationally expensive networks such as VGG16 or learned perceptual metrics such as LPIPS. Moreover, the backbone networks are pretrained for an unrelated, discriminative task and on different datasets, such as image classification on ImageNet. To remedy this, we propose an alternative and more efficient perceptual distortion loss in the latent space.
Video Compression: Deep Video Compression systems also minimize the rate-distortion loss equation 1. In addition to the spatial redundancy (SR), temporal redundancy (TR) is reduced by incorporating motion estimation modules. Traditional methods (Wiegand et al., 2003; Sullivan et al., 2012b) rely on handcrafted and block based modules. Recently, deep learning based video compression systems proposed to replace the traditional modules by learned ones (Lu et al., 2019). Motion estimation (Dosovitskiy et al., 2015) and compensation are often used to address TR. Several improvements have been made to reduce TR; Lin et al. (2020) leverage multiple frames to improve the motion compensation, and Hu et al. (2020) use multi resolution flow maps to effectively compress locally and globally. However, training motion estimation modules is difficult since they require large annotated data. Some approaches perform TR reduction in latent space (Feng et al., 2020; Djelouah et al., 2019; Hu et al., 2021). Others propose to do frames interpolation (Djelouah et al., 2019), or an interpolation in the latent space of GANs (Santurkar et al., 2018).
3 LEARNING PROXY REPRESENTATION FOR COMPRESSION
In this section, we describe our proposed method to learn the latent space dedicated to compression. The schematic diagram is illustrated in Figures 1 and 2. The method can be summarized as follows: an input facial video frame is first projected into the latent space of StyleGAN, W+. Our method consists in learning a transformation T from the original latent space W+ to a new proxy representation denoted as W⋆c. The transformation T is defined as a diffeomorphic mapping (i.e., a normalizing flow) and is learnt so that the intra image compression and the inter video compression are optimal in W⋆c. It is noted that learning a proxy representation in W⋆c for compression is also motivated by the fact that the rate-distortion loss cannot be optimized in the original space W+ without retraining the encoder/generator. Below, we first briefly describe some background material (Section 3.1), and then present our methods for images or intra compression (Section 3.2) and for videos or inter compression (Section 3.3).
3.1 BACKGROUND
StyleGAN Generator StyleGAN (Karras et al., 2019) is the state of the art unconditional GAN for high quality realistic image generation. It consists of a mapping function that takes a vector sampled from a normal distribution (z ∼ N (0, I)) and maps it to an intermediate latent space W using a fully connected network (M ) before feeding it to multiple stages (i.e., W+ space) of the generator (G) to generate an image (x) from the distribution of the real images (x ∼ px):
x = G(w) w = M(z) z ∼ N (0, I) (2)
It is shown that the latent space of GAN is semantically rich and enjoys several properties such as semantic interpolation (Radford et al., 2016). In addition, the latent vector encoded in W+ space of StyleGAN captures a hierarchical representation of the projected image. In our case, we use StyleGAN2 (Karras et al., 2020) which is an improved version.
StyleGAN Encoder The StyleGAN encoder (Richardson et al., 2021) is a deterministic function denoted as E. Its role is to project a real image into the latent space of StyleGAN (e.g., W or W+), in such a way that the reconstructed image by the StyleGAN generator is minimally distorted (x̂ = G(E(x)) ≈ x). In our case, the image is projected in W+ with dimension of (18 × 512) where each dimension controls a different convolution layer of the StyleGAN generator. Currently the encoding based GAN inversion approaches are not ideal, which explains the difference between the projected and the original image.
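For illustration, the following minimal PyTorch-style sketch shows how a fixed, pretrained encoder E and generator G are composed to project and reconstruct an image; the encoder and generator objects and their shapes are assumptions made for the example, not a specific library API.

import torch

@torch.no_grad()
def project_and_reconstruct(x, encoder, generator):
    # x: input face image tensor, e.g. of shape (1, 3, 1024, 1024)
    # encoder: pretrained StyleGAN encoder E, returning a W+ code of shape (1, 18, 512)
    # generator: pretrained StyleGAN2 generator G, mapping a W+ code back to an image
    w = encoder(x)        # w = E(x), the W+ latent code
    x_hat = generator(w)  # x_hat = G(w), the projected image, with G(E(x)) ≈ x
    return w, x_hat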
3.2 IMAGE COMPRESSION (SGANC)
We assume that we have a pretrained and fixed StyleGAN2 generator G that takes a latent code w ∈ W+ and generates a high-resolution image of size 1024 × 1024, and an encoder E (pretrained and fixed) that embeds any given image x into the latent code w in W+ such that G(E(x)) ≈ x. Our objective is to learn a new latent space W⋆c optimal for image compression. The space W⋆c is obtained using a bijective transformation T : W+ → W⋆c, which is parametrized as a normalizing flow model (note that our work only requires bijectivity; as such, no maximum-likelihood term is included in the training objective). T maps a latent code w ∈ W+ into w⋆c ∈ W⋆c such that the latent vector w⋆c has minimum entropy and sufficient information to generate the original image with minimal distortion from the inverted latent code T−1(w⋆c) using G. The entropy model is trained on the latent codes w⋆c ∈ W⋆c by minimizing the following rate loss after the quantization:
R = −E[ Σ_{i=1}^{D} log2 pi(Q(T(w))) ] = −E[ Σ_{i=1}^{D} log2 pi(T(w) + ε) ] (3)
For Equation 3 to be differentiable, following (Ballé et al., 2017), we relax the hard quantization Q by adding uniform noise to the latent vectors T(w). pi is the ith dimension of the probability density function in W⋆c, D is the latent vector dimension, w = E(x) where x is the input image, and ε is sampled from a uniform distribution U[−0.5,0.5]. The entropy model p is modeled as a fully factorized entropy model and is also parameterized by a neural network as in (Ballé et al., 2018). The transformed/quantized latent codes should also have sufficient information to reconstruct the image, and this is achieved by minimizing a distortion loss. In general, the distortion loss is computed in the image space; in contrast, we propose a new efficient distortion loss directly in the latent space:
D = d( w, T−1(T(w) + ε) ), (4)
where d is any distortion loss; in this paper we use the mean squared error (MSE) loss. Equation 4 can be seen as a perceptual distortion loss, and it is motivated by the fact that the latent space of GANs is semantically rich. We argue that this is true especially for StyleGAN, where the latent code is extracted from several layers of the StyleGAN encoder, which allows capturing multiscale and semantic features/representations. Compared to existing perceptual losses, our loss does not require generating the images during training or computing heavy losses such as VGG16 or LPIPS in the image space. The total loss used to learn our proposed latent space W⋆c is a trade-off between the rate (Equation 3) and distortion (Equation 4), as shown below:
L = R + λD, (5)
where λ is the trade-off parameter, and the transformation T and the entropy model p are jointly optimized. Once the optimization is completed, to create the bit-stream, the latent codes in W⋆c are quantized with Q using a rounding operator.
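For illustration, a minimal PyTorch-style sketch of one training step implementing Equations 3-5 is shown below; the forward/inverse interface of the flow and the per-dimension likelihood interface of the entropy model are assumptions made for the example and do not correspond to a specific library API.

import torch

def sganc_image_train_step(w, flow_T, entropy_model, lam):
    # w: batch of W+ latent codes, flattened to shape (B, 18*512)
    # flow_T: bijective map with .forward(w) -> w_c and .inverse(w_c) -> w
    # entropy_model: callable returning per-dimension likelihoods p_i(.)
    # lam: trade-off parameter lambda of Equation 5
    w_c = flow_T.forward(w)                               # map W+ to W*_c
    noise = torch.empty_like(w_c).uniform_(-0.5, 0.5)     # relaxed quantization
    w_c_noisy = w_c + noise                               # T(w) + eps
    likelihoods = entropy_model(w_c_noisy)                # p_i(T(w) + eps)
    rate = -torch.log2(likelihoods).sum(dim=1).mean()     # Equation 3
    w_rec = flow_T.inverse(w_c_noisy)                     # back to W+
    distortion = torch.mean((w - w_rec) ** 2)             # Equation 4, latent MSE
    return rate + lam * distortion                        # Equation 5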
3.3 VIDEO COMPRESSION (SGANC IC)
Here, we propose our approach for video compression, denoted as SGANC IC, using learned inter coding with residuals. Video compression methods and standards rely on motion estimation and compensation modules to leverage the temporal dependencies between frames. As our approach is formulated in the latent space, our inter coding scheme is based on the successive latent differences, since these differences reflect the temporal changes. We argue that this leads to efficient compression due to the good properties of the latent space of GANs (e.g., linear interpolation). Specifically, inter coding with residuals is performed in the latent space (Figure 2). Given a sequence of frames {x1, x2, ..., xt−1, xt, ...}, a pretrained encoder E is first used to obtain the latent representation. Then, similarly to intra coding, a transformation T is learned to map these frames to W⋆IC, leading to a sequence of latent codes {w∗1, w∗2, ..., w∗t−1, w∗t, ...}. The mapping T is learnt such that the sequence of latent codes in W⋆IC is optimal for inter-coding by taking the temporal dependencies into account. Our approach can be summarized as follows (the complete description of the algorithm can be found in Appendix A.2; a simplified code sketch of this loop is given after the list):
• The first latent code is coded using the method described in Section 3.2, yielding ŵ∗0 (using the same entropy model or, preferably, another one trained for image compression). The following steps are repeated until the end of the video.
• The difference between two consecutive latent codes is computed and quantized to obtain v̂t = Q(w∗t − w∗t−1).
• From the previous reconstructed code, an estimate of the latent code at frame t is obtained as w̄∗t = ŵ∗t−1 + v̂t.
• The residual between the actual and the estimated latent code is computed and quantized as r̂t = Q(w∗t − w̄∗t), either for every frame or only every g frames.
• The quantized difference v̂t and the residual r̂t are compressed using entropy coding and sent to the receiver.
• On the receiver side, the current latent code is reconstructed from the estimated latent code and the residual, ŵ∗t = w̄∗t + r̂t, or from the estimated latent code alone for the frames where no residual is sent.
• The latent codes in W⋆IC are remapped to W+ to generate the images using the pretrained StyleGAN2 generator G.
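A compact NumPy-style sketch of this test-time loop is given below (Algorithm 1 in Appendix A.2 gives the full pseudo-code); entropy coding is omitted and the intra coding of the first frame is simplified to plain rounding, so the snippet only illustrates the reconstruction logic.

import numpy as np

def inter_code_latents(latents, g=10):
    # latents: list of latent codes w*_t, already mapped into W*_IC by T
    # g: a quantized residual is transmitted every g frames; otherwise only
    #    the quantized difference v_t is sent
    quantize = np.round                     # hard quantization at test time
    w_hat = [quantize(latents[0])]          # intra coding of the first frame (simplified)
    for t in range(1, len(latents)):
        v_t = quantize(latents[t] - latents[t - 1])   # quantized difference
        w_bar = w_hat[-1] + v_t                       # estimated latent code
        if t % g == 0:
            r_t = quantize(latents[t] - w_bar)        # quantized residual
            w_hat.append(w_bar + r_t)
        else:
            w_hat.append(w_bar)
    return w_hat  # remap with T^-1 and decode with G to obtain the frames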
To learn the new latent space W⋆IC for inter-coding, the transformation T and the entropy model p are learned to optimize the rate-distortion loss:
LIC = d( wt, T−1(ŵ∗t) ) − λ( E[ Σ_{i=1}^{D} log2 pi(v̂t) ] + E[ Σ_{i=1}^{D} log2 qi(r̂t) ] ) (6)
Similar to Section 3.2, we replace the quantization Q by adding uniform noise, and the entropy model is modeled as in (Ballé et al., 2018). While Equation 6 has two entropy models, we show below that it is sufficient to learn only one entropy model on the differences v̂t, as r̂t admits an explicit probability distribution known as the Irwin-Hall distribution (the proof of the following Lemma 1 can be found in Appendix A.1).
Lemma 1 Let Q(x) = x + ε be the continuous relaxation of the quantization, where ε follows the uniform distribution U[−0.5,0.5]. Let w∗0 ∈ Rn, ŵ∗0 = Q(w∗0), and w̄∗t = ŵ∗t−1 + Q(w∗t − w∗t−1), t = 1, . . . , n. If r̂t = Q(w∗t − w̄∗t) is the quantized residual defined for every t such that t ≡ 0 (mod g), then r̂t follows the Irwin-Hall distribution with parameter 3 + (g − 1) and with the support shifted by −0.5 · (3 + (g − 1)).
This leads to optimizing our latent space W⋆IC for the rate-distortion loss with only one entropy model for v̂t, by discarding the last term in Equation 6. At test time, the explicit distribution of the residuals can be used for entropy coding.
Stage-specific entropy models: In Karras et al. (2019), it is shown that each stage/layer of the StyleGAN generator corresponds to a specific scale of details. To leverage this hierarchical structure, we propose stage-specific entropy models. Specifically, the first layers, which correspond to coarse resolutions (e.g., 4² to 8²), mainly affect high-level aspects of the image, such as the pose and face shape, while the last layers affect low-level aspects such as textures, colors, and small micro-structures. Here, we propose to leverage this hierarchical structure and weight the distortion λ differently for each layer of the generator (note that the latent code in W+ or W⋆IC consists of 18 latent codes of dimension 512, each corresponding to one layer in the generator, hence its dimension is (18, 512)). For practical reasons, we split the 18 layers into three stages (1−8, 8−13, 13−18) and learn one transformation and one entropy model for each group. The distortion λ is chosen to be higher for the first layers and decreases subsequently.
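For illustration, the sketch below shows one way such a stage-wise distortion weighting could be implemented; the split points follow the description above, while the per-stage weights and shapes are illustrative placeholders rather than the exact values used in the experiments.

import torch

# Hypothetical split of the 18 StyleGAN2 layers into three stages.
STAGES = {"coarse": slice(0, 8), "middle": slice(8, 13), "fine": slice(13, 18)}
STAGE_WEIGHT = {"coarse": 1.0, "middle": 0.1, "fine": 0.01}  # decreasing weights

def stagewise_latent_distortion(w, w_rec, base_lambda):
    # w, w_rec: original and reconstructed latent codes of shape (B, 18, 512)
    total = 0.0
    for name, layers in STAGES.items():
        mse = torch.mean((w[:, layers] - w_rec[:, layers]) ** 2)
        total = total + base_lambda * STAGE_WEIGHT[name] * mse
    return total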
4 EXPERIMENTS
4.1 DATASETS
Celeba-HQ (Karras et al., 2018), a dataset consisting of 30,000 high-quality face images of 1024×1024 resolution.
FILMPAC, a dataset consisting of 5 high-resolution video clips whose lengths vary between 60 and 260 frames. These videos can be found on the filmpac website1 by searching their names (FP006734MD02, FP006940MD02, FP009971HD03, FP010363HD03, FP010263HD03).
MEAD (Wang et al., 2020), a high resolution talking face video corpus for many actors with different emotions and poses. The training dataset for inter-coding consists of 2.5 k videos with frontal poses. For evaluation we created MEAD-inter dataset consisting of 10 videos (selected from MEAD) of different actors with frontal pose. We also created MEAD-intra dataset consisting of 200 frames selected from these videos with frontal pose for evaluating image compression methods.
Dataset preprocessing: All the frames are cropped and aligned using the same preprocessing method as that of the FFHQ dataset (Karras et al., 2019), on which StyleGAN is pretrained. As we compare the reconstructed image with the projected one for SGANC, we project all the frames (i.e., encode the original images and reconstruct them using StyleGAN2). All frames are of high resolution (1024×1024).
4.2 IMPLEMENTATION DETAILS
We used the StyleGAN2 generator (G) (Karras et al., 2020) pretrained on the FFHQ dataset (Karras et al., 2019). The images are encoded in W+ using a pretrained StyleGAN2 encoder (E) (Richardson et al., 2021). The parameters of the generator and the encoder remain fixed in all the experiments. The latent vector dimension in W+ and W⋆c is 18×512. The mapping function T is modeled using the RealNVP architecture (Dinh et al., 2017) without batch normalization. It consists of 13 coupling layers, and each coupling layer consists of 3 fully connected (FC) layers for the translation function and
1https://filmpac.com/
3 FC layers for the scale function, with LeakyReLU as the hidden activation and Tanh as the output activation (total number of trainable parameters = 20.5 M). For the entropy model, we used the fully factorized entropy model (Ballé et al., 2018) based on the implementation from the CompressAI library (Bégaint et al., 2020). For the stage-specific entropy models, λ is kept constant for the first stage and then decreased linearly to be 1e−2 smaller for the last layer. For all the experiments, we used the Adam optimizer with β1 = 0.9 and β2 = 0.999, a learning rate of 1e−4, and a batch size of 8. Once the training is completed, we used the Range Asymmetric Numeral System coder (Duda, 2013) to obtain the bit-stream.
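For readers unfamiliar with coupling layers, a minimal RealNVP-style affine coupling layer in PyTorch is sketched below; it follows the description above only loosely (3 FC layers per function, LeakyReLU hidden activations, Tanh on the scale output) and is not the exact architecture used here.

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # Minimal RealNVP-style coupling layer: splits the input in two halves,
    # keeps one half fixed and applies a learned affine map to the other.
    def __init__(self, dim=512, hidden=512):
        super().__init__()
        half = dim // 2
        def mlp():
            return nn.Sequential(nn.Linear(half, hidden), nn.LeakyReLU(),
                                 nn.Linear(hidden, hidden), nn.LeakyReLU(),
                                 nn.Linear(hidden, half))
        self.scale, self.translate = mlp(), mlp()

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s = torch.tanh(self.scale(x1))              # Tanh output activation
        y2 = x2 * torch.exp(s) + self.translate(x1)
        return torch.cat([x1, y2], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        s = torch.tanh(self.scale(y1))
        x2 = (y2 - self.translate(y1)) * torch.exp(-s)
        return torch.cat([y1, x2], dim=-1)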
Training: We encode all the images once, and the training is performed solely on the latent codes. Thus, we do not need the generator nor the encoder during training, which makes the approach light and fast to train. For SGANC, we train on the Celeba-HQ dataset. For SGANC IC, we train on 2.5k videos from the MEAD dataset (Wang et al., 2020), where each batch contains video slices of 9 frames. All the frames are preprocessed as in Section 4.1.
4.3 RESULTS
In this section we present the experimental results of our proposed method for image and video compression. We used the following metrics to assess the quality of the compression methods: Peak Signal to Noise Ratio (PSNR), Multi Scale Structural Similarity (MS-SSIM) (Wang et al., 2003), Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018), and the Perceptual Information Metric (PIM) with a mixture of 5 Gaussians (Bhardwaj et al., 2020).
LPIPS and PIM are two perceptual metrics; PIM was recently proposed and follows an information-theoretic approach inspired by the human perceptual system (Bhardwaj et al., 2020). The authors of LPIPS, on the other hand, show that it is consistent with human perception. We report the size of the compressed images in bits per pixel (BPP), which is common in evaluating image and video compression.
To exclude the distortion coming from the GAN inversion technique, all the frames are projected (i.e., inverted) using the encoder. The projected frames are used as the baseline for all approaches in the main paper to compute the distortion metrics. For the sake of completeness, we also present evaluation results with respect to the original images in Appendix A.4.
4.3.1 IMAGE COMPRESSION
We compare our method with the most recent state-of-the-art codecs such as VTM (VTM, 2020) and AV1 (AV1, 2018), and with deep compression models such as the scale Hyper Prior (HP) (Ballé et al., 2018), the factorized entropy model with scale and mean hyperpriors (MeanHP) (Minnen et al., 2018), and the anchor variant of Cheng et al. (2020). MeanHP and Cheng et al. (2020) are trained with the MS-SSIM objective and HP with the MSE objective (we found that this leads to better results) on the Celeba-HQ dataset. All the methods are evaluated on the independent evaluation datasets MEAD-Intra and FILMPAC.
Quantitative results: We present the rate-distortion curves for different BPP on the MEAD-intra dataset in Figure 3. As can be observed, for the perceptual metrics LPIPS and PIM, our method significantly outperforms the state-of-the-art methods by a large margin. Deep learning based methods are better than traditional codecs at high BPP but inferior at lower BPP. This observation highlights the potential of our method to obtain reconstructions with good perceptual quality. For classical metrics, our method is the best at medium and high BPP for MS-SSIM, and has the highest PSNR at high BPP. It is noted that the deep learning based competitors trained with the MS-SSIM objective are still not able to outperform the traditional codecs VTM and AV1. The above observations also hold for the evaluation on the FILMPAC dataset; please refer to Figure 17 in the appendix.
Qualitative results: Next, we discuss the qualitative results, shown in Figure 4. Visual inspection reveals that, at a comparable BPP, AV1 introduces blocking artifacts while VTM generates blurry results. The reconstruction of MeanHP is not sharp enough and presents color distortions. Our method delivers a perfect reconstruction compared to the projected image and preserves all the details. For example, the details of the eyebrows are preserved, whereas the existing methods fail. Additional images are displayed in Appendix A.4.
4.3.2 VIDEO COMPRESSION
We compare our method (SGANC IC) with two of the most recent and best-performing state-of-the-art methods: the Versatile Video Coding Test Model (VTM) (VTM, 2020) and the H.265 standard (Sullivan et al., 2012a). We used the official implementations for both VTM (random access with GOP=16) and H.265 (FFmpeg library). Each method is evaluated on the MEAD-inter dataset.
Quantitative results: Figure 6 presents the quantitative evaluation curves on the MEAD-inter dataset. The computed metrics are first averaged across the frames of each video and then across all the videos. Similar to the observations in Section 4.3.1, our proposed method for video compression achieves impressive performance and significantly outperforms the state-of-the-art methods with respect to perceptual metrics. In particular, with the perceptual distortion metric PIM, our method is significantly better. In terms of PSNR, our method is comparable with H.265 at low bit-rates and achieves better performance than both methods at high BPP.
Qualitative results: Figure 5 compares the reconstruction quality of all the methods at a comparable BPP on a single frame extracted from the compressed video. Our method is almost artifact-free, photorealistic, and perceptually more pleasant, while VTM leads to blurry results and H.265 exhibits blocking artifacts. For more visual results, please refer to Appendix A.4. Note that, despite the high quality of our method, the reconstruction of StyleGAN (projected vs. original) is still not perfect. Once this factor is eliminated, the distortion coming from the quantization (projected vs. our method) is negligible, which makes our method very promising, taking into account the rapid improvement of GAN generation and inversion.
The ablation studies related to the different distortion losses, the alternative choices of the transformation T, and other design choices are discussed in Appendix A.3 due to space limitations.
5 CONCLUSION
We have proposed in this paper a new paradigm for facial image and video compression based on GANs. Our framework is efficient to train and leads to perceptually competitive results compared to the most efficient state-of-the-art compression systems. We believe that at low bitrates, our solution leads to a different and more acceptable type of distortion, since the reconstructed image is very sharp and photorealistic. Our approach is not restricted to faces, since GANs have been proposed for various natural objects. For a specific category of objects for which a GAN has not been trained already, our approach could be used after training the specific GAN as well as the compression bottleneck. The main limitation of our method is the approximation of any input image using StyleGAN. We believe that the continuous and impressive improvements in GAN inversion and generation will mitigate this limitation in the near future.
A APPENDIX
The Appendix is organized as follows: in Section A.1, we prove that the distribution of the residuals follows the known Irwin-Hall distribution; in Section A.2 we detail the algorithms for training and testing of the inter coding with residuals approach. In Section A.3 we detail the ablation study for image and video compression. Finally, in Section A.4 we provide more results.
A.1 EXPLICIT DISTRIBUTION OF THE RESIDUALS: PROOF OF LEMMA 1
Let Q(x) = x + ε be the continuous relaxation of the quantization, where ε follows the uniform distribution U[−0.5,0.5]. Let t be the frame index ranging over {0, 1, . . . , K − 1}.
Let w∗0 ∈ Rn and ŵ∗0 = Q(w∗0). Let us define the following:
v̂t = Q(w∗t − w∗t−1) = w∗t − w∗t−1 + ε, (7)
w̄∗t = ŵ∗t−1 + v̂t.
Similarly, we define
r̂t = Q(w∗t − w̄∗t) if t ≡ 0 (mod g), and r̂t = 0 otherwise. (8)
Now we prove that r̂t follows the Irwin-Hall, or uniform sum, distribution (Johnson et al., 1995). We have:
r̂t = Q(w∗t − w̄∗t) = w∗t − w̄∗t + ε1 = w∗t − (ŵ∗t−1 + v̂t) + ε1
= w∗t − (ŵ∗t−1 + w∗t − w∗t−1 + ε2) + ε1 = w∗t − ŵ∗t−1 − w∗t + w∗t−1 − ε2 + ε1 = w∗t−1 − ŵ∗t−1 − ε2 + ε1 (9)
For t ≡ 0 (mod g), the reconstructed latent code is computed from the residual, and can be written as:
ŵ∗t = r̂t + w̄∗t
= Q(w∗t − w̄∗t) + w̄∗t = w∗t − w̄∗t + ε + w̄∗t = w∗t + ε (10)
Otherwise, it is the estimated latent code w̄∗t, thus ŵ∗t can be written as:
ŵ∗t = w∗t + ε if t ≡ 0 (mod g), and ŵ∗t = w̄∗t otherwise. (11)
For g = 1, Equation 9 becomes: r̂t = w∗t−1 − ŵ∗t−1 − ε2 + ε1
= w∗t−1 − w∗t−1 + ε3 − ε2 + ε1 = ε3 − ε2 + ε1 (12)
For g > 1, let us define the following quantity:
m̂t = r̂t if t ≡ 0 (mod g), and m̂t = Q(w∗t − w̄∗t) otherwise. (13)
Using Equation 11 and Equation 13: r̂t = w∗t−1 − ŵ∗t−1 − ε2 + ε1
= w∗t−1 − w̄∗t−1 − ε2 + ε1 = m̂t−1 − ε2 = ... = m̂t−(g−1) − Σ_{i=1}^{g−1} εi (14)
Similarly to equation 9, we can replace m̂t−(g−1):
r̂t = w∗t−g − ŵ∗t−g + ε1 − Σ_{i=1}^{g−1} εi (15)
As t − g ≡ 0 (mod g) (since the residual is used only for t ≡ 0 (mod g)), from Equation 11 we can replace ŵ∗t−g by its relaxed approximation:
r̂t = w∗t−g − (w∗t−g + ε2) + ε1 − Σ_{i=1}^{g−1} εi
= ε1 − ε2 + ε3 − Σ_{i=1}^{g−1} εi (16)
This is the sum of 3 + (g − 1) independent random variables following the uniform distribution (up to signs, which do not matter since the distribution is symmetric). We have thus shown that r̂t follows the uniform sum distribution, which is the Irwin-Hall distribution IH(x; n) with parameter n = 3 + (g − 1) and x ∈ [−n × 0.5, n × 0.5]. Since ε follows the uniform distribution U[−0.5,0.5], in our case the support of the Irwin-Hall distribution is shifted by −n × 0.5.
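As a purely numerical illustration of Lemma 1, the snippet below compares simulated residual noise for g = 2 against the shifted Irwin-Hall density, implemented from its standard closed form.

import numpy as np
from math import comb, factorial

def irwin_hall_pdf(x, n):
    # Density of the sum of n i.i.d. U[0, 1] variables, for x in [0, n].
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    mask = (x >= 0) & (x <= n)
    xv = x[mask]
    val = np.zeros_like(xv)
    for k in range(n + 1):
        val += (-1) ** k * comb(n, k) * np.where(xv >= k, (xv - k) ** (n - 1), 0.0)
    out[mask] = val / factorial(n - 1)
    return out

# For g = 2 the residual noise is a combination of n = 3 + (g - 1) = 4
# independent U[-0.5, 0.5] terms, i.e. an Irwin-Hall variable shifted by -n/2.
g = 2
n = 3 + (g - 1)
eps = np.random.uniform(-0.5, 0.5, size=(200000, n))
samples = eps[:, 0] - eps[:, 1] + eps[:, 2] - eps[:, 3:].sum(axis=1)
grid = np.linspace(-0.5 * n, 0.5 * n, 201)
density = irwin_hall_pdf(grid + 0.5 * n, n)  # shift the support back to [0, n]
# A normalized histogram of `samples` should match `density` evaluated on `grid`.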
A.2 INTER CODING (SGANC IC)
In this section we present the algorithms for video compression using inter coding with residuals at training and test time:
Algorithm 1: SGANC IC at test time.
Result: compressed frame sequence {x̂1, x̂2, ..., x̂t−1, x̂t, ..., x̂N}
1 Initialization: frame sequence {x1, x2, ..., xt−1, xt, ..., xN}, N = number of frames, encoder E, generator G, transformation T, entropy coder EC and decoder ED, quantizer Q, gap g for residual coding;
2 ŵ∗0 = ED(EC(Q(T(E(x0))))) ; // intra coding of the first frame
3 t = 1;
4 while t < N do
5   w∗t = T(E(xt)) ; // encode the frames and map them to W⋆c
6   w∗t−1 = T(E(xt−1));
7   v̂t = ED(EC(Q(w∗t − w∗t−1))) ; // quantize, compress, and decompress (receiver side) the difference
8   w̄∗t = ŵ∗t−1 + v̂t ; // compute an estimate of the latent code
9   if t % g == 0 then
10    r̂t = ED(EC(Q(w∗t − w̄∗t))) ; // quantize, compress, and decompress (receiver side) the residual
11    ŵ∗t = w̄∗t + r̂t ; // reconstruct the latent code
12  else
13    ŵ∗t = w̄∗t ;
14  end
15  x̂t = G(T−1(ŵ∗t)) ; // reconstruct the image
16  t = t + 1;
17 end
Algorithm 2: SGANC IC at training time.
Result: transformation T, entropy model p
1 Initialization: video dataset encoded as latent codes, where each video = {w1, w2, ..., wt−1, wt, ...}, number of frames in each batch N, dataset size S, encoder E, generator G;
2 while i < S do
3   t, L = 1, 0;
4   while t < N do
5     w∗t, w∗t−1 = T(wt), T(wt−1) ; // map the latent codes to W⋆c
6     v̂t = Q(w∗t − w∗t−1) ; // quantize (by adding noise) the difference
7     w̄∗t = ŵ∗t−1 + v̂t ; // compute an estimate of the latent code
8     ŵ∗t = w̄∗t ;
9     L = L + Lt ; // accumulate the per-frame loss Lt (Equation 6, or Equation 17 with the L1 regularization)
10    t = t + 1;
11  end
12  Update the parameters of T and p to minimize L;
13  i = i + 1;
14 end
A.3 ABLATION STUDY
In this section we detail the ablation study for image and video compression.
A.3.1 IMAGE COMPRESSION (DISTORTION LOSS)
In this section we compare different choices of the distortion loss. Specifically, we compare the MSE loss in the image space (Img. D), the MSE loss in the latent space W+ (Lat. D), the combination of both (Lat.-Img. D), the LPIPS loss in the image space (LPIPS D), and the combination of the MSE and LPIPS losses in the image space (LPIPS-Img. D). The implementation details are the same as in Section 4.2 except for the choice of the distortion loss, and we use a batch size of 4 when training with LPIPS. We compare on both the MEAD-intra and FILMPAC datasets.
Results: From Figures 7 and 8, we can notice that our loss (Lat. D) outperforms the MSE and LPIPS distortions in the image space. Moreover, the losses in the image space produce some artifacts and blurred images, while the loss in the latent space is almost artifact-free. There is no benefit in using a loss in the image space in addition to ours. Note that training with only the latent loss is faster than the others (e.g., the training took 5 days for LPIPS-Img. D, 4 days for Lat.-Img. D, and 6 hours for Lat. D) and occupies less GPU memory (e.g., 24 GB with batch size 4 for LPIPS-Img. D, 10 GB with batch size 4 for Img. D and Lat.-Img. D, and less than 2 GB with batch size 8 for Lat. D).
A.3.2 IMAGE COMPRESSION (TRANSFORMATION T )
In this section, we investigate the importance of using a bijective transformation (i.e., Normalizing Flows). To this end, we replace the Real NVP model with an autoencoder (AE) and retrain it with the entropy model on Celeba-HQ with the same implementation details as in section 4.2. The AE consists of 7 layers with the same dimension (i.e., 512) for the encoder and 7 layers with the same dimension for the decoder with ReLU activation functions.
Results: From Figure 9, we can notice that parametrizing the transformation T as Normalizing Flows (i.e., Real NVP) leads to better results. For instance, some facial attributes are changed when using an AE (such as the age and the skin color), as well as the person’s identity.
A.3.3 VIDEO COMPRESSION (INTER CODING: EXPLICIT SPARSIFICATION)
Having only a few dimensions that change between two consecutive latent codes is efficient for entropy coding. Thus, we investigated how to explicitly sparsify these differences by adding an L1 regularization on the latent code differences during training. The loss becomes:
LIC−L1 = LIC + λL1 ‖w∗t−1 − w∗t‖1 (17)
Note that, as the transformation T and the entropy model are trained jointly, we expect the latent codes in W⋆c to be transformed so that they fit the entropy model efficiently. In the following, we assess whether explicit sparsification brings additional improvement (Section A.3.4).
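For illustration, a minimal PyTorch-style sketch of this L1-regularized objective is given below; the tensor names and the inter_coding_loss argument are placeholders for the quantities in Equations 6 and 17, not the actual implementation.

import torch

def l1_regularized_loss(w_prev, w_cur, inter_coding_loss, lambda_l1=0.01):
    # w_prev, w_cur: latent codes w*_{t-1}, w*_t in the learned space (e.g. 18x512)
    # inter_coding_loss: scalar tensor holding L_IC (Equation 6) for this frame pair
    sparsity = torch.abs(w_cur - w_prev).sum()   # L1 norm of the latent difference
    return inter_coding_loss + lambda_l1 * sparsity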
A.3.4 VIDEO COMPRESSION (INTER CODING)
In this section we present the ablation study for SGANC IC, in which we investigate the effect of the following design choices: the gap parameter g for residual coding, the L1 regularization of Equation 17, residual coding (res) vs. intra coding (intra) every g frames, and stage-specific entropy models (SS), i.e., using different entropy models and distortion weights λ for the different stages of StyleGAN2. We thus compare the following variants:
• SGANC IC intra g10: replacing the residual coding at every g = 10 frames by intra coding, using one of the image compression models trained in Section 3.2.
• SGANC IC res g10: residual coding every g = 10 frames.
• SGANC IC res g80: residual coding every g = 80 frames.
• SGANC IC res g10 L1: adding the L1 regularization of Equation 17 during training and performing residual coding every g = 10 frames.
• SGANC res g10 SS: using 3 entropy models and 3 transformations T, one for each stage of StyleGAN2 (layers 1-7, 7-13, 13-18), and training with layer-specific distortion weights λ = wλ (where w = 1 for the first stage and decreases from 1 to 0.01 over the second and third stages).
• SGANC res g10 SS L1: residual coding every g = 10 frames, stage-specific entropy models, and L1 regularization.
• SGANC res g2 SS: residual coding every g = 2 frames, stage-specific entropy models.
• SGANC res g1 SS L1: same as before but performing residual coding every frame (g = 1).
From Figure 10, we can notice the following:
• Decreasing g leads to better results, at the expense of increased BPP, especially for g = 1.
• Significant improvements are obtained by performing residual coding (SGANC IC intra g10 vs. SGANC IC res g10) and by using stage-specific entropy models (SGANC IC res g10 vs. SGANC IC res g10 SS).
• Only a marginal improvement was obtained by adding the L1 regularization during training (SGANC res g10 vs. SGANC res g10 L1). In addition, the improvement becomes negligible when using SS entropy models (SGANC res g10 SS vs. SGANC res g10 SS L1).
A.4 MORE RESULTS
In this section we show more quantitative and qualitative results for image and video compression.
A.4.1 IMAGE COMPRESSION
In this section we show the results for image compression (Figures 11, 12, 15, 16, 13, 14). Contrary to the results presented in the main paper, here, for the sake of completeness, we report the results by comparing to the original image (except for our method).
A.4.2 VIDEO COMPRESSION
In this section, we show quantitative and qualitative results for video compression. Contrary to our main results in the paper, here we compress the original video and compute the metrics against the original frames (except for our approach, which is still compared to the projected frames).
From Figures 18, 20, 21, 22 and 23, we can notice that VTM and H.265 introduce blocking and blurring artifacts, which is not the case for our approach SGANC IC. We can also make similar observations in Figures 24, 25, 26 and 27. From Figure 19, we can notice that our methods are better perceptually (LPIPS and MS-SSIM) than VTM and H.265.
2. What are the strengths of the proposed approach, particularly in terms of using pre-trained GANs?
3. What are the weaknesses of the paper, especially regarding its novelty compared to prior works?
4. Do you have any concerns about the approach's ability to control the distortion/rate trade-off?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a video coding method based on pre-trained GANs. StyleGAN is used as a backbone for the approach in order to encode streams of faces. A normalizing flow is used to obtain a bijective transformation from the latent space of the backbone generator to the compressed-code latent space. The distortion/rate trade-off is controlled with a loss computed in the latent space, which is convenient since it does not require generating high-resolution images. The compression rate vs. quality is controlled at training time via a hyperparameter. Experiments show good results on video and image compression.
Review
The main issue with this work regards novelty. A similar approach was presented in CVPR 2021: https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Deep_Learning_in_Latent_Space_for_Video_Prediction_and_Compression_CVPR_2021_paper.pdf
In this work a very similar idea is proposed with two main differences:
the approach is not targeted only to encode face streams
the network training is performed in a way that the R-D tradeoff can be achieved at inference time. |
ICLR | Title
Proactive Multi-Camera Collaboration for 3D Human Pose Estimation
Abstract
This paper presents a multi-agent reinforcement learning (MARL) scheme for proactive Multi-Camera Collaboration in 3D Human Pose Estimation in dynamic human crowds. Traditional fixed-viewpoint multi-camera solutions for human motion capture (MoCap) are limited in capture space and susceptible to dynamic occlusions. Active camera approaches proactively control camera poses to find optimal viewpoints for 3D reconstruction. However, current methods still face challenges with credit assignment and environment dynamics. To address these issues, our proposed method introduces a novel Collaborative Triangulation Contribution Reward (CTCR) that improves convergence and alleviates multi-agent credit assignment issues resulting from using 3D reconstruction accuracy as the shared reward. Additionally, we jointly train our model with multiple world dynamics learning tasks to better capture environment dynamics and encourage anticipatory behaviors for occlusion avoidance. We evaluate our proposed method in four photo-realistic UE4 environments to ensure validity and generalizability. Empirical results show that our method outperforms fixed and active baselines in various scenarios with different numbers of cameras and humans.
Figure 1: (a) Dynamic occlusions lead to failed reconstruction. (b) Constrained MoCap area vs. active MoCap in the wild. Left: two critical challenges in fixed-camera approaches. Right: three active cameras collaborate to best reconstruct the 3D pose of the target.
1 INTRODUCTION
Marker-less motion capture (MoCap) has broad applications in many areas such as cinematography, medical research, virtual reality (VR), and sports. Its successes can be partly attributed to recent developments in 3D human pose estimation (HPE) techniques (Tu et al., 2020; Iskakov et al., 2019; Jafarian et al., 2019; Pavlakos et al., 2017b; Lin & Lee, 2021b). A straightforward implementation to solve multi-view 3D HPE is to use fixed cameras. Although convenient, this solution is less effective against dynamic occlusions. Moreover, fixed camera solutions confine tracking targets within a constrained space and are therefore less applicable to outdoor MoCap. In contrast, active cameras (Luo et al., 2018; 2019; Zhong et al., 2018a; 2019), such as ones mounted on drones, can maneuver proactively against incoming occlusions. Owing to its remarkable flexibility, the active approach has thus attracted overwhelming interest (Tallamraju et al., 2020; Ho et al., 2021; Xu et al., 2017; Kiciroglu et al., 2019; Saini et al., 2022; Cheng et al., 2018; Zhang et al., 2021).
∗Equal Contribution. Corresponding author. Project Website: https://sites.google.com/view/active3dpose
Previous works have demonstrated the effectiveness of using active cameras for 3D HPE on a single target in indoor (Kiciroglu et al., 2019; Cheng et al., 2018), clean landscapes (Tallamraju et al., 2020; Nägeli et al., 2018; Zhou et al., 2018; Saini et al., 2022) or landscapes with scattered static obstacles (Ho et al., 2021). However, to the best of our knowledge, we have not seen any existing work that experimented with multiple (n > 3) active cameras to conduct 3D HPE in human crowd. There are two key challenges : First, frequent human-to-human interactions lead to random dynamic occlusions. Unlike previous works that only consider clean landscapes or static obstacles, dynamic scenes require frequent adjustments of cameras’ viewpoints for occlusion avoidance while keeping a good overall team formation to ensure accurate multi-view reconstruction. Therefore, achieving optimality in dynamic scenes by implementing a fixed camera formation or a hand-crafted control policy is challenging. In addition, the complex behavioural pattern of a human crowd makes the occlusion patterns less comprehensible and predictable, further increasing the difficulty in control. Second, as the team size grows larger, the multi-agent credit assignment issue becomes prominent which hinders policy learning of the camera agents. Concretely, multi-view 3D HPE as a team effort requires inputs from multiple cameras to generate an accurate reconstruction. Having more camera agents participate in a reconstruction certainly introduces more redundancy, which reduces the susceptibility to reconstruction failure caused by dynamic occlusions. However, it consequently weakens the association between individual performance and the reconstruction accuracy of the team, which leads to the “lazy agent” problem (Sunehag et al., 2017). In this work, we introduce a proactive multi-camera collaboration framework based on multi-agent reinforcement learning (MARL) for real-time distributive adjustments of multi-camera formation for 3D HPE in a human crowd. In our approach, multiple camera agents perform seamless collaboration for successful reconstructions of 3D human poses. Additionally, it is a decentralized framework that offers flexibility over the formation size and eliminates dependency on a control hierarchy or a centralized entity. Regarding the first challenge, we argue that the model’s ability to predict human movements and environmental changes is crucial. Thus, we incorporate World Dynamics Learning (WDL) to train a state representation with these properties, i.e., learning with five auxiliary tasks to predict the target’s position, pedestrians’ positions, self state, teammates’ states, and team reward. To tackle the second challenge, we further introduce the Collaborative Triangulation Contribution Reward (CTCR), which incentivizes each agent according to its characteristic contribution to a 3D reconstruction. Inspired by the Shapley Value (Rapoport, 1970), CTCR computes the average weighted marginal contribution to the 3D reconstruction for any given agent over all possible coalitions that contain it. This reward aims to directly associate agents’ levels of participation with their adjusted return, guiding their policy learning when the team reward alone is insufficient to produce such direct association. Moreover, CTCR penalizes occluded camera agents more efficiently than the shared reward, encouraging emergent occlusion avoidance behaviors. 
Empirical results show that CTCR can accelerate convergence and increase reconstruction accuracy. Furthermore, CTCR is a general approach that can benefit policy learning in active 3D HPE and serve as a new assessment metric for view selection in other multi-view reconstruction tasks. For the evaluation of the learned policies, we build photo-realistic environments (UnrealPose) using Unreal Engine 4 (UE4) and UnrealCV (Qiu et al., 2017). These environments can simulate realistically behaving crowds with assurances of high fidelity and customizability. We train the agents on a Blank environment and validate their policies on three unseen scenarios with different landscapes, levels of illumination, human appearances, and various numbers of cameras and humans. The empirical results show that our method can achieve more accurate and stable 3D pose estimates than off-the-shelf passive- and active-camera baselines. To help facilitate more fruitful research on this topic, we release our environments with OpenAI Gym-API (Brockman et al., 2016) integration, together with a dedicated visualization tool. Here we summarize the key contributions of our work:
• Formulating the active multi-camera 3D human pose estimation problem as a Dec-POMDP and proposing a novel multi-camera collaboration framework based on MARL (with n ≥ 3).
• Introducing five auxiliary tasks to enhance the model's ability to learn the dynamics of highly dynamic scenes.
• Proposing CTCR to address the credit assignment problem in MARL and demonstrating notable improvements in reconstruction accuracy compared to both passive and active baselines.
• Contributing high-fidelity environments for simulating realistic-looking human crowds with authentic behaviors, along with visualization software for frame-by-frame video analysis.
2 RELATED WORK
3D Human Pose Estimation (HPE) Recent research on 3D human pose estimation has shown significant progress in recovering poses from single monocular images (Ma et al., 2021; Pavlakos et al., 2017a; Martinez et al., 2017; Kanazawa et al., 2018; Pavlakos et al., 2018; Sun et al., 2018; Ci et al., 2019; Zeng et al., 2020; Ci et al., 2020) or monocular video (Mehta et al., 2017; Hossain & Little, 2018; Pavllo et al., 2019; Kocabas et al., 2020). Other approaches utilize multi-camera systems for triangulation to improve visibility and eliminate ambiguity (Qiu et al., 2019; Jafarian et al., 2019; Pavlakos et al., 2017b; Dong et al., 2019; Lin & Lee, 2021b; Tu et al., 2020; Iskakov et al., 2019). However, these methods are often limited to indoor laboratory environments with fixed cameras. In contrast, our work proposes an active camera system with multiple mobile cameras for outdoor scenes, providing greater flexibility and adaptability.
Proactive Motion Capture Few previous works have studied proactive motion capture with a single mobile camera (Zhou et al., 2018; Cheng et al., 2018; Kiciroglu et al., 2019). In comparison, more works have studied the control of a multi-camera team. Among them, many are based on optimization with various system designs, including marker-based (Nägeli et al., 2018), RGBD-based (Xu et al., 2017), two-stage system (Saini et al., 2019; Tallamraju et al., 2019), hierarchical system (Ho et al., 2021), etc. It is important to note that all the above methods deal with static occlusion sources or clean landscapes. Additionally, the majority of these works adopt hand-crafted optimization objectives and some forms of fixed camera formations. These factors result in poor adaptability to dynamic scenes that are saturated with uncertainties. Recently, RL-based methods have received more attention due to their potential for dynamic formation adjustments. These works have studied active 3D HPE in the Gazebo simulation (Tallamraju et al., 2020) or Panoptic dome (Joo et al., 2015; Pirinen et al., 2019; Gärtner et al., 2020) for active view selection. Among them, AirCapRL (Tallamraju et al., 2020) shares similarities with our work. However, it is restricted to coordinating between two cameras in clean landscapes without occlusions. We study collaborations between multiple cameras (n ≥ 3) and resolve the credit assignment issue with our novel reward design (CTCR). Meanwhile, we study a more challenging scenario with multiple distracting humans serving as sources of dynamic occlusions, which requires more sophisticated algorithms to handle.
Multi-Camera Collaboration Many works in computer vision have studied multi-camera collaboration and designed active camera systems accordingly. Earlier works (Collins et al., 2003; Qureshi & Terzopoulos, 2007; Matsuyama et al., 2012) focused on developing a network of pan-tilt-zoom (PTZ) cameras. Owing to recent advances in MARL algorithms (Lowe et al., 2017; Sunehag et al., 2017; Rashid et al., 2018; Wang et al., 2020; Yu et al., 2021; Jin et al., 2022), many works have formulated multi-camera collaboration as a multi-agent learning problem and solved it using MARL algorithms accordingly (Li et al., 2020; Xu et al., 2020; Wenhong et al., 2022; Fang et al., 2022; Sharma et al., 2022; Pan et al., 2022). However, most works focus on the target tracking problem, whereas this work attempts to solve the task of 3D HPE. Compared with the tracking task, 3D HPE has stricter criteria for optimal view selection due to correlations across multiple views, which necessitates intelligent collaboration between cameras. To the best of our knowledge, this work is the first to experiment with various camera agents (n ≥ 3) to learn multi-camera collaboration strategies for active 3D HPE.
3 PROACTIVE MULTI-CAMERA COLLABORATION
This section will explain the formulation of multi-camera collaboration in 3D HPE as a Dec-POMDP. Then, we will describe our proposed solutions for modelling the virtual environment’s complex dynamics and strengthening credit assignment in the multi-camera collaboration task.
3.1 PROBLEM FORMULATION
We formulate the multi-camera 3D HPE problem as a Decentralized Partially-Observable Markov Decision Process (Dec-POMDP), where each camera is considered as an agent that is decentrally controlled and has partial observability over the environment. Formally, a Dec-POMDP is defined as ⟨S, O, A^n, n, P, r, γ⟩, where S denotes the global state space of the environment, including all human states and camera states in our problem. o_i ∈ O denotes agent i's local observation, i.e., the RGB image observed by camera i. A denotes the action space of an agent and A^n represents the joint action space of all n agents. P : S × A^n → S is the transition probability function P(s^{t+1} | s^t, a^t), in which a^t ∈ A^n is a joint action by all n agents. At each timestep t, every agent obtains a local view o_i^t from the environment state s^t and then preprocesses o_i^t to form the i-th agent's local observation õ_i^t. The agent performs action a_i^t ∼ π_i^t(· | õ_i^t) and receives its reward r(s^t, a^t). γ ∈ (0, 1] is the discount factor used to calculate the cumulative discounted reward G(t) = ∑_{t' ≥ t} γ^{t'−t} r(t').
In a cooperative team, the objective is to learn a group of decentralized policies {π_i(a_i^t | õ_i^t)}_{i=1}^n that maximizes E_{(s,a)∼π}[G(t)]. For convenience, we denote i as the agent index, ⟦n⟧ = {1, . . . , n} as the set of all n agents, and −i = ⟦n⟧ \ {i} as all n agents except agent i.
Observation Camera agents have partial observability over the environment. The pre-processed observation õ_i = (p_i, ξ_i, ξ_{−i}) of camera agent i consists of: (1) p_i, the states of the humans visible to agent i, including the detected human bounding boxes in the 2D local view and the 3D positions and orientations of all visible humans, measured in both the local coordinate frame of camera i and world coordinates; (2) the camera's own pose ξ_i, i.e., its position and orientation in world coordinates; (3) the peer cameras' poses ξ_{−i}, i.e., their positions and orientations in world coordinates, obtained via multi-agent communication.
Action Space The action space of each camera agent consists of the velocity of 3D egocentric translation (x, y, z) and the velocity of 2D pitch-yaw rotation (θ, ψ). To reduce the exploration space for state-action mapping, the agent's action space is discretized into three levels across all five dimensions. At each timestep, the camera agent can move its position by [+δ, 0, −δ] in the (x, y, z) directions and rotate about the pitch-yaw axes by [+η, 0, −η] degrees. In our experiments, the camera's pitch-yaw angles are controlled by a rule-based system.
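As a concrete illustration of this discretized action space, the sketch below enumerates the 3^5 = 243 joint actions; it is not the authors' code, and the step sizes DELTA and ETA stand in for the δ and η of the text with assumed values.

```python
# Minimal sketch (not the authors' implementation) of the discretized action set:
# three levels per axis over (x, y, z, pitch, yaw).
import itertools

DELTA = 0.5   # assumed translation step size (the delta in the text)
ETA = 5.0     # assumed rotation step size in degrees (the eta in the text)

translation_levels = [+DELTA, 0.0, -DELTA]
rotation_levels = [+ETA, 0.0, -ETA]

# Joint discrete action space: 3^5 = 243 combinations of (dx, dy, dz, dpitch, dyaw).
ACTIONS = list(itertools.product(translation_levels, translation_levels,
                                 translation_levels, rotation_levels, rotation_levels))

def apply_action(camera_pose, action_index):
    """Advance a camera pose dict (keys x, y, z, pitch, yaw) by one discrete action."""
    dx, dy, dz, dpitch, dyaw = ACTIONS[action_index]
    camera_pose = dict(camera_pose)
    camera_pose["x"] += dx
    camera_pose["y"] += dy
    camera_pose["z"] += dz
    camera_pose["pitch"] += dpitch
    camera_pose["yaw"] += dyaw
    return camera_pose

print(len(ACTIONS))  # 243
```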
3.2 FRAMEWORK
This section will describe the technical framework that constitutes our camera agents, which contains a Perception Module and a Controller Module. The Perception Module maps the original RGB images taken by the camera to numerical observations. The Controller Module takes these numerical observations and produces corresponding control signals. Fig. 2 illustrates this framework. Perception Module The perception module executes a procedure consisting of four sequential stages: (1) 2D HPE. The agent performs 2D human detection and pose estimation on the observed RGB image with the YOLOv3 (Redmon & Farhadi, 2018) detector and the HRNet-w32 (Sun et al., 2019) pose estimator, respectively. Both models are pre-trained on the COCO dataset, (Lin et al., 2014) and their parameters are kept frozen during policy learning of camera agents to ensure crossscene generalization. (2) Person ReID. A ReID model (Zhong et al., 2018c) is used to distinguish people in a scene. For simplicity, an appearance dictionary of all to-be-appeared people is built in advance following (Gärtner et al., 2020). At test time, the ReID network computes features for all detected people and identifies different people by comparing features to the pre-built appearance dictionary. (3) Multi-agent Communication. Detected 2D human pose, IDs, and own camera pose are broadcasted to other agents. (4) 3D HPE. 3D human pose is reconstructed via local triangulation after receiving communications from other agents. The estimated position and orientation of a person can then be extracted from the corresponding reconstructed human pose. The communication process is illustrated in Appendix Fig. 9.
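To make stage (4) concrete, below is a minimal sketch of linear multi-view triangulation (DLT) for a single joint. It is a generic textbook formulation rather than the authors' exact implementation, and it assumes calibrated 3×4 projection matrices are available for every camera that observed the joint.

```python
# A minimal DLT triangulation sketch for one joint seen from several calibrated cameras.
import numpy as np

def triangulate_joint(projection_matrices, points_2d):
    """projection_matrices: list of 3x4 numpy camera matrices; points_2d: list of (u, v)
    pixel observations of the same joint. Returns the 3D point in world coordinates."""
    rows = []
    for P, (u, v) in zip(projection_matrices, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)   # homogeneous solution = right singular vector
    X = vt[-1]
    return X[:3] / X[3]
```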
Controller Module The controller module consists of a state encoder E and an actor network A. The state encoder E takes õ_i^t as input, encoding the temporal dynamics of the environment via an LSTM. The future states of the target, pedestrians, and cameras are modelled using Mixture Density Networks (MDNs) (Bishop, 1994) to account for uncertainty. During model inference, the encoder computes the target position prediction, and the (ϕ, µ, σ) parameters of the target-prediction MDN are used as part of the inputs to the actor network to enhance feature encoding. Please refer to Section 3.4 for more details regarding training the MDN.
Feature Embedding: z_i^t = MLP(õ_i^t),  (1)
Temporal Modeling: h_i^t = LSTM(z_i^t, h_i^{t−1}),  (2)
Human Trajectory Prediction: p̂_{tgt/pd}^{t+1} = MDN(z_i^t, h_i^t, p_{tgt/pd}^t),  (3)
Final Embedding: e_i^t = E(õ_i^t, h_i^{t−1}) = Concat(z_i^t, h_i^t, {(ϕ, µ, σ)}_{MDN_tgt}),  (4)
where p_tgt and p_pd refer to the states of the target and the pedestrians, respectively. The actor network A consists of 2 fully-connected layers that output the action, a_i^t = A(E(õ_i^t, h_i^{t−1})).
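The following PyTorch sketch mirrors the structure of Eqs. (1)-(4). It is illustrative only: the layer widths, the observation length, and the number of mixture components are assumptions, and the paper additionally re-encodes the MDN parameters before concatenation, which is omitted here for brevity.

```python
# Simplified sketch of the state encoder; sizes are illustrative, not the paper's config.
import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    def __init__(self, obs_dim, feat_dim=128, hidden_dim=128, n_components=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU())  # Eq. (1)
        self.lstm = nn.LSTMCell(feat_dim, hidden_dim)                       # Eq. (2)
        # MDN head over the target's future (x, y): weight, mean and std per component. Eq. (3)
        self.mdn = nn.Linear(feat_dim + hidden_dim, n_components * 5)

    def forward(self, obs, lstm_state):
        z = self.mlp(obs)                                # feature embedding z_i^t
        h, c = self.lstm(z, lstm_state)                  # temporal state h_i^t
        mdn_params = self.mdn(torch.cat([z, h], dim=-1))
        e = torch.cat([z, h, mdn_params], dim=-1)        # final embedding e_i^t, Eq. (4)
        return e, (h, c)

encoder = StateEncoder(obs_dim=135)                      # assumed observation length
obs = torch.zeros(1, 135)
state = (torch.zeros(1, 128), torch.zeros(1, 128))
e, state = encoder(obs, state)
```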
3.3 REWARD STRUCTURE
To alleviate the credit assignment issue that arises in multi-camera collaboration, we propose the Collaborative Triangulation Contribution Reward (CTCR). We start by defining a base reward that reflects the reconstruction accuracy of the triangulated pose generated by the camera team. Then we explain how our CTCR is computed based on this base team reward.
Reconstruction Accuracy as a Team Reward To directly reflect the reconstruction accuracy, the reward function negatively correlates with the pose estimation error (Mean Per Joint Position Error, MPJPE) of the multi-camera triangulation. Formally,
r(X) = 0 if |X| ≤ 1, and r(X) = 1 − Gemen(MPJPE(X)) if |X| ≥ 2,  (5)
where the set X represents the cameras participating in the triangulation and Gemen(·) denotes the Geman-McClure smoothing function, Gemen(x) = 2(x/c)^2 / ((x/c)^2 + 4), used to stabilize policy updates, with c = 50 mm in our experiments. However, the shared team reward structure in our MAPPO baseline, where each camera in the entire camera team X receives a common reward r(X), presents a credit assignment challenge, especially when a camera is occluded, resulting in a reduced reward for all cameras. To address this issue, we propose a new approach called Collaborative Triangulation Contribution Reward (CTCR).
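Before turning to CTCR, the base team reward of Eq. (5) can be written down directly. The sketch below follows the equation as stated above, using the Geman-McClure constant c = 50 mm; the function name is descriptive rather than taken from the authors' code.

```python
# Team reward of Eq. (5): Geman-McClure-smoothed MPJPE, zero when fewer than
# two cameras contribute to the triangulation.
def geman_mcclure(error_mm, c=50.0):
    s = (error_mm / c) ** 2
    return 2.0 * s / (s + 4.0)

def team_reward(participating_cameras, mpjpe_mm):
    if len(participating_cameras) <= 1:
        return 0.0
    return 1.0 - geman_mcclure(mpjpe_mm)
```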
Collaborative Triangulation Contribution Reward (CTCR) CTCR computes each agent's individual reward based on its marginal contribution to the collaborative multi-view triangulation. Refer to Fig. 3 for a rundown of computing CTCR for a 3-camera team. The contribution of agent i can be measured by:
CTCR(i) = n · φ_r(i),   φ_r(i) = ∑_{S ⊆ ⟦n⟧ \ {i}} [|S|!(n − |S| − 1)! / n!] · [r(S ∪ {i}) − r(S)],  (6)
where n denotes the total number of agents, S ranges over all subsets of ⟦n⟧ not containing agent i, |S|!(n − |S| − 1)!/n! is the normalization term, and [r(S ∪ {i}) − r(S)] is the marginal contribution of agent i. Note that ∑_{i ∈ ⟦n⟧} φ_r(i) = r(⟦n⟧). We additionally multiply by the constant n to rescale CTCR to the same scale as the team reward. In particular, in the 2-camera case, the individual CTCR is equivalent to the team reward, i.e., CTCR(i = 1) = CTCR(i = 2) = r({1, 2}). For more explanations on CTCR, please refer to Appendix Section G.
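A direct way to evaluate Eq. (6) is to enumerate all sub-formations, as in the following sketch; `reward_fn` is a stand-in for the team reward r(·) of Eq. (5), and the two-camera sanity check at the end reproduces the property stated above.

```python
# CTCR (Eq. 6): Shapley-style weighted marginal contribution, rescaled by n.
from itertools import combinations
from math import factorial

def ctcr(i, cameras, reward_fn):
    """cameras: iterable of camera indices; reward_fn maps a frozenset of indices
    to the team reward r(S)."""
    n = len(cameras)
    others = [c for c in cameras if c != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            S = frozenset(subset)
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += weight * (reward_fn(S | {i}) - reward_fn(S))
    return n * phi

# Two-camera sanity check: CTCR(i) equals the team reward r({1, 2}).
r = lambda S: 0.8 if len(S) >= 2 else 0.0
assert abs(ctcr(1, [1, 2], r) - 0.8) < 1e-9
```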
3.4 LEARNING MULTI-CAMERA COLLABORATION VIA MARL
We employ the multi-agent learning variant of PPO (Schulman et al., 2017) called Multi-Agent PPO (MAPPO) (Yu et al., 2021) to learn the collaboration strategy. Alongside the RL loss, we jointly train the model with five auxiliary tasks that encourage comprehension of the world dynamics and the stochasticity in human behaviours. The pseudocode can be found in Appendix A.
World Dynamics Learning (WDL) We use the encoder's hidden states (z_i^t, h_i^t) as the basis to model the world. Three WDL objectives correspond to modelling agent dynamics: (1) learning the forward dynamics of the camera, P_1(ξ_i^{t+1} | z_i^t, h_i^t, a_i^t); (2) prediction of the team reward, P_2(r^t | z_i^t, h_i^t, a_i^t); and (3) prediction of the future positions of peer agents, P_3(ξ_{−i}^{t+1} | z_i^t, h_i^t, a_i^t). Two WDL objectives correspond to modelling human dynamics: (4) prediction of the future position of the target person, P_4(p_tgt^{t+1} | z_i^t, h_i^t, p_tgt^t); and (5) prediction of the future positions of pedestrians, P_5(p_pd^{t+1} | z_i^t, h_i^t, p_pd^t). All the probability functions above are approximated using Mixture Density Networks (MDNs) (Bishop, 1994).
Total Training Objectives L_Train = L_RL + λ_WDL · L_WDL. Here L_RL is the reinforcement learning loss, consisting of the PPO-Clipped loss and the centralized-critic network loss, similar to MAPPO (Yu et al., 2021). L_WDL = −(1/n) ∑_l λ_l ∑_i E[log P_l(· | õ_i^t, h_i^t, a_i^t)] is the world dynamics learning loss, consisting of the MDN supervised losses on the five prediction tasks mentioned above.
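As an illustration of how one WDL term could be trained, the sketch below computes the negative log-likelihood of a mixture density head (here for the target's future x-y position) with torch.distributions. The shapes and number of components are illustrative; the full L_WDL sums five such terms with their weights λ_l before being added to L_RL.

```python
# Hedged sketch of one MDN supervised loss used in L_WDL.
import torch
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

def mdn_nll(logits, mu, sigma, future_xy):
    """logits: (B, K) mixture weights; mu, sigma: (B, K, 2); future_xy: (B, 2)."""
    mixture = MixtureSameFamily(Categorical(logits=logits),
                                Independent(Normal(mu, sigma), 1))
    return -mixture.log_prob(future_xy).mean()

B, K = 8, 16
loss_tgt = mdn_nll(torch.randn(B, K), torch.randn(B, K, 2),
                   torch.rand(B, K, 2) + 0.1, torch.randn(B, 2))
```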
4 EXPERIMENT
In this section, we first introduce our novel environment, UNREALPOSE, used for training and testing the learned policies. Then we compare our method with multi-passive-camera baselines and perform an ablation study on the effectiveness of the proposed CTCR and WDL objectives. Additionally, we evaluate the effectiveness of the learned policies by comparing them against other active multi-camera methods. Lastly, we test our method in four different scenarios to showcase its robustness.
4.1 UNREALPOSE: A VIRTUAL ENVIRONMENT FOR PROACTIVE HUMAN POSE ESTIMATION
We built four virtual environments for simulating active HPE in the wild using Unreal Engine 4 (UE4), a powerful 3D game engine that provides real-time, photo-realistic rendering. The environments handle the interactions between realistic-behaving human crowds and camera agents. Here are the characteristics of UNREALPOSE that we would like to highlight. Realistic: diverse generation of human trajectories, built-in collision avoidance, and several scenarios with different human appearances, terrains, and levels of illumination. Flexibility: extensive configuration of the numbers of humans and cameras and of their physical properties, with more than 100 MoCap action sequences incorporated. RL-Ready: integrated with the OpenAI Gym API, with the communication module of the UnrealCV (Qiu et al., 2017) plugin overhauled using an inter-process communication (IPC) mechanism. For more detailed descriptions, please refer to Appendix Section B.
4.2 EVALUATION METRICS
We use Mean Per Joint Position Error (MPJPE) as our primary evaluation metric, which measures the difference between the ground truth and the reconstructed 3D pose on a per-frame basis. However, using MPJPE alone may not provide a complete understanding of the robustness of a multi-camera collaboration policy for two reasons: Firstly, cameras adjusting their perception quality may take multiple frames to complete, and secondly, high peaks in MPJPE may be missed by the mean aggregation. To address this, we introduce the “success rate” metric, which evaluates the smooth execution and robustness of the learned policies. Success rate is calculated as the ratio of frames in an episode with MPJPE lower than τ . Formally, SuccessRate(τ) = P (MPJPE ≤ τ). This metric is a temporal measure that reflects the integrity of multi-view coordination. Poor coordination may cause partial occlusions or too many overlapping perceptions, leading to a significant increase in MPJPE and a subsequent decrease in the success rate.
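Both metrics are straightforward to compute per episode; a minimal numpy sketch is given below, where the threshold value of τ is illustrative rather than the one used in the experiments.

```python
# Per-frame MPJPE and episode-level SuccessRate(tau) = P(MPJPE <= tau).
import numpy as np

def mpjpe(pred_joints, gt_joints):
    """pred_joints, gt_joints: (J, 3) arrays in millimetres for one frame."""
    return float(np.linalg.norm(pred_joints - gt_joints, axis=-1).mean())

def success_rate(per_frame_mpjpe, tau=100.0):
    """per_frame_mpjpe: sequence of MPJPE values over an episode; tau in mm (illustrative)."""
    errors = np.asarray(per_frame_mpjpe)
    return float((errors <= tau).mean())
```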
4.3 RESULTS AND ANALYSIS
The learning-based control policies were trained in a total of 28 instances of the BlankEnv, where each instance uniformly contained 1 to 6 humans. Each training run consisted of 700,000 steps, which corresponds to 1,000 training iterations. To ensure a fair evaluation, we report the mean metrics based on the data from the latest 100 episodes, each comprising 500 steps. The experiments were conducted in a 10m× 10m area, where the cameras and humans interacted with each other.
Active vs. Passive To show the necessity of proactive camera control, we compare the active camera control methods with three passive methods, i.e., Fixed Camera, Fixed Camera (RANSAC), and Fixed Camera (PlaneSweepPose). “Fixed Cameras” denotes that the poses of the cameras are fixed, hanging 3m above ground and −35◦ camera pitch angles. The placements of these fixed cameras are carefully determined with strong priors, e.g., right-angle, triangle, square, and pentagon formations for 2, 3, 4, and 5 cameras, respectively. “RANSAC” denotes the method that uses RANSAC (Fischler & Bolles, 1981) for enhanced triangulation. “PlaneSweepPose” represents the off-the-shelf learning-based method (Lin & Lee, 2021b) for multi-view 3D HPE. Please refer to Appendix E.2 for more implementation details. We show the MPJPE and Success Rate versus a
different number of cameras in Fig. 5. We observe that all passive baselines are outperformed by the active approaches due to their inability to adjust camera views against dynamic occlusions. The improvement of the active approaches is especially significant when the number of cameras is small, i.e., when the camera system has little or no redundancy against occlusions. Notably, the MPJPE attained by our 3-camera policy is even lower than the MPJPE from 5 Fixed cameras. This suggests that proactive strategies can help reduce the number of cameras necessary for deployment.
The Effectiveness of CTCR and WDL We also perform ablation studies on the two proposed modules (CTCR and WDL) to analyze their benefits to performance. We take “MAPPO” as the active-camera baseline for comparison, which is our method but trained instead with a shared global reconstruction reward and without world dynamics modelling. Fig. 5 shows a consistent performance gap between the “MAPPO” baseline and our methods (MAPPO + CTCR + WDL). The proposed CTCR mitigates the credit assignment issue by computing the weighted marginal contribution of each camera. CTCR also promotes faster convergence; training curves are shown in Appendix Fig. 13. Training with WDL objectives further improves the MPJPE metric for our 2-Camera model. However, its supporting effect gradually weakens as the number of cameras increases. We argue that this is caused by the more complex dynamics involved when more cameras simultaneously interact in the same environment. Notably, we observe that the agents trained with WDL generalize better to unseen scenes, as shown in Fig. 6.
Versus Other Active Methods To show the effectiveness of the learned policies, we further compare our method with other active multi-camera formation control methods in 3-cameras BlankEnv. “MAPPO” (Yu et al., 2021) and AirCapRL (Tallamraju et al., 2020) are two learning-based methods based on PPO (Schulman et al., 2017). The main difference between these two methods is the reward shaping technique, i.e., AirCapRL additionally employs multiple carefully-designed rewards (Tallamraju et al., 2020) for learning. We also programmed a rule-based fixed formation control method (keeping an equilateral triangle, spread by 120◦) to track the target person. Results are shown in Table 1. Interestingly, these three baselines achieve comparable performance. Our method outperforms them, indicating a more effective multi-camera collaboration strategy for 3D HPE. For example, our method learns a spatially spread-out formation while automatically adjusting to avoid impending occlusion.
Generalize to Various Scenarios We train the control policies in BlankEnv while testing them in three other realistic environments (SchoolGym, UrbanStreet, and Wilderness) to evaluate their generalizability to unseen scenarios. Fig. 6 shows that our method consistently outperforms baseline
methods with lower variance in MPJPE during the evaluations in three test environments. We report the results in the BlankEnv as a reference.
Qualitative Analysis In Figure 7, we show six examples of the emergent formations of cameras under the trained policies using the proposed methods (CTCR + WDL). The camera agents learn to spread out and ascend above humans to avoid occlusions and collisions. Their placements in an emergent formation are not assigned by other entities but rather determined by the decentralized control policies themselves based on local observations and agent-to-agent communication. For more vivid examples of emergent formations, please refer to the demo videos on the project website.1 For more analysis on the behaviour modes of our 3-, 4- and 5-camera models, please refer to Appendix Section H.
5 CONCLUSION AND DISCUSSION
To our knowledge, this paper presents the first proactive multi-camera system targeting 3D reconstruction in a dynamic crowd. It is also the first study of proactive 3D HPE to experiment with multi-camera collaboration at different scales and in different scenarios. We propose CTCR to alleviate the multi-agent credit assignment issue when the camera team scales up. We also identify multiple auxiliary tasks that improve the representation learning of complex dynamics. As a final note, we release our virtual environments and the visualization tool to facilitate future research.
Limitations and Future Directions Admittedly, a couple of aspects of this work have room for improvement. Firstly, the camera agents receive their teammates' positions via non-disrupting broadcasts of information, which may be prone to packet losses during deployment. One idea is to incorporate a specialized protocol for multi-agent communication into our pipeline, such as ToM2C (Wang et al., 2022). Secondly, intended initially to reduce the communication bandwidth (for example, to eliminate the need for image transmission between cameras), the current pipeline comprises a Human Re-Identification module that requires a pre-scanned appearance memory of all to-be-appeared human subjects, so the current ReID module may fail to recognize some out-of-distribution human appearances. However, ACTIVE3DPOSE can accommodate a more sophisticated ReID module (Deng et al., 2018) to resolve this shortcoming. Thirdly, the camera control policy requires accurate camera poses, which may require a robust SLAM system (Schmuck & Chli, 2017; Zhong et al., 2018b) to work in dynamic environments with multiple cameras. Fourthly, the motion patterns of targets in the virtual environments are based on manually designed animations, which leads to poor generalization of the agents to unseen motion patterns. In the future, we can enrich the diversity by incorporating a cooperative-competitive multi-agent game (Zhong et al., 2021) in training. Lastly, we assume near-perfect calibration for a group of mobile cameras, which might be complicated to sustain in practice. Fortunately, we are seeing rising interest in parameter-free pose estimation (Gordon et al., 2022; Ci et al., 2022), which does not require online camera calibration and may help to resolve this limitation.
1Project Website for demo videos: https://sites.google.com/view/active3dpose
6 ETHICS STATEMENT
Our research into active 3D HPE technologies has the potential to bring many benefits, such as biomechanical analysis in sports and automated video-assisted coaching (AVAC). However, we recognize that these technologies can also be misused for repressive surveillance, leading to privacy infringements and human rights violations. We firmly condemn such malicious acts and advocate for the fair and responsible use of our virtual environment, UNREALPOSE, and all other 3D HPE technologies.
ACKNOWLEDGEMENT
The authors would like to thank Yuanfei Wang for discussions on world models; Tingyun Yan for his technical support on the first prototype of UNREALPOSE. This research was supported by MOST-2022ZD0114900, NSFC-62061136001, China National Post-doctoral Program for Innovative Talents (Grant No. BX2021008) and Qualcomm University Research Grant.
A TRAINING ALGORITHM PSEUDOCODE
Algorithm 1 Learning Multi-Camera Collaboration (CTCR + WDL)
1: Initialize: n agents with a tied-weights MAPPO policy π and mixture density networks (MDNs) for the WDL prediction models {(P_self, P_reward, P_peer, P_tgt, P_pd)}_π, and E parallel environment rollouts
2: for Iteration = 1, 2, . . . , M do
3:   In each of the E environment rollouts, each agent i ∈ ⟦n⟧ collects a trajectory of length T: τ = [(õ_i^t, õ_i^{t+1}, a_i^t, r_i^t, h_i^{t−1}, s^t, a^{t−1}, r^t)]_{t=1}^T
4:   Substitute the individual reward of each agent r^t with CTCR: r_i^t ← CTCR(i) (Equation 6)
5:   For each step in τ, compute advantage estimates Â_i^1, . . . , Â_i^T with GAE (Schulman et al., 2015) for each agent i ∈ ⟦n⟧
6:   Yield a training batch D of size E × T × n
7:   for Mini-batch SGD Epoch = 1, 2, . . . , K do
8:     Sample a stochastic mini-batch of size B from D, where B = |D|/K
9:     Compute z_i^t and h_i^t using the encoder model E_π
10:    Compute the PPO-CLIP objective loss L_PPO, the global critic value loss L_Value, and the adaptive KL loss L_KL
11:    Compute the objectives that constitute L_WDL:
       • Self-State Prediction Loss L_self = −E_τ[log P(ξ_i^{t+1} | z_i^t, h_i^t, a_i^t)]
       • Reward Prediction Loss L_reward = −E_τ[log P(r^t | z_i^t, h_i^t, a_i^t)]
       • Peer-State Prediction Loss L_peer = −E_τ[log P(ξ_{−i}^{t+1} | z_i^t, h_i^t, a_i^t)]
       • Target Prediction Loss L_tgt = −E_τ[log P(p_tgt^{t+1} | z_i^t, h_i^t, p_tgt^t)]
       • Pedestrians Prediction Loss L_pd = −E_τ[log P(p_pd^{t+1} | z_i^t, h_i^t, p_pd^t)]
12:    L_Train = λ_PPO L_PPO + λ_Value L_Value + β_KL L_KL + λ_WDL L_WDL
13:    Optimize L_Train w.r.t. the current policy parameters θ_π
B UNREALPOSE: ACCESSORIES AND MISCELLANEOUS ITEMS
Our UnrealPose virtual environment supports different active vision tasks, such as active human pose estimation and active tracking. This environment also supports various settings ranging from single-target single-camera settings to multi-target multi-camera settings. Here we provide a more detailed description of the three key characteristics of UnrealPose:
Realistic The built-in navigation system governs the collision-avoidance movements of virtual humans against dynamic obstacles. In the meantime, it also ensures diverse generation of walking trajectories. These features enable users to simulate a realistic-looking crowd exhibiting socially acceptable behaviors. We have also provided several pre-set scenes, e.g., a school gym, wilderness, and an urban crossing. These scenes have notable differences in illumination, terrain, and crowd appearance to reflect the dramatically different-looking scenarios found in real life.
Extensive Configuration The environment can be configured with different numbers of humans and cameras and swapped across other scenarios with ease, which we demonstrated in Fig 4(a-d). Besides simulating walking human crowds, the environment incorporates over 100 Mocap action sequences with smooth animation interpolations to enrich the data variety for other MoCap tasks.
RL-Ready We use the UnrealCV (Qiu et al., 2017) plugin as the medium to acquire images and annotations from the environment. The original UnrealCV plugin suffers from unstable data transfer and unexpected disconnections under high CPU workloads. To ensure fast and reliable data acquisition for large-scale MARL experiments, we overhauled the communication module in the UnrealCV plugin with inter-process communication (IPC) mechanism, which eliminates the aforementioned instabilities.
B.1 VISUALIZATION TOOL
We provide a visualization tool to facilitate per-frame analysis of the learned policy and reconstruction results (shown in Fig. 8). The main interface consists of four parts: (1) Live 2D views from all cameras. (2) 3D spatial view of camera positions and reconstructions. (3) Plot of statistics. (4) Frame control bar. This visualization tool supports different numbers of humans and cameras. Meanwhile, it is written in Python to support easy customization.
B.2 LICENSE
All assets used in the environment are commercially-available and obtained from the UE4 Marketplace. The environment and tools developed in this work are licensed under Apache License 2.0.
C OBSERVATION PROCESSING
Fig. 9 shows the pipeline of the observation processing. Each camera observes an RGB image and detects the 2D human poses and IDs via the Perception Module described in the main paper. The camera pose, 2D human poses, and IDs are then broadcast to other cameras for multi-view 3D triangulation. The human position is calculated as the median of the reconstructed joints. The human orientation is calculated from the cross product of the reconstructed shoulder and spine vectors.
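A small numpy sketch of these last two steps is given below. The specific joints used to form the shoulder and spine vectors are assumptions for illustration, since the exact skeleton layout is not restated here.

```python
# Illustrative extraction of a person's position and facing direction from a
# reconstructed 3D pose, following the description above.
import numpy as np

def person_position(joints_3d):
    """joints_3d: (J, 3) reconstructed joints; position = per-axis median."""
    return np.median(joints_3d, axis=0)

def person_orientation(left_shoulder, right_shoulder, pelvis, neck):
    """Facing direction as the normalised cross product of the shoulder axis
    and the spine axis (joint choice is an assumption)."""
    shoulder_axis = right_shoulder - left_shoulder
    spine_axis = neck - pelvis
    facing = np.cross(shoulder_axis, spine_axis)
    return facing / (np.linalg.norm(facing) + 1e-8)
```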
D IMPLEMENTATION DETAILS
D.1 TRAINING DETAILS
All control policies are trained in the BlankEnv scene. At the testing stage, we apply zero-shot transfer of the learned policies to three realistic scenes: SchoolGym, UrbanStreet, and Wilderness.
To simulate a dynamic human crowd with highly random behaviors, we sample arbitrary goals for each human and employ the built-in navigation system to generate collision-free trajectories. Each human walks at a random speed. To ensure generalization across different numbers of humans, we train our RL policy with a mixture of environments containing 1 to 6 humans. The learning rate is set to 5 × 10^−4 with scheduled decay during the training phase. The annealing schedule for the learning rate is detailed in Table 2. The maximum episode length is 500 steps, the discount factor γ is 0.99, and the GAE horizon is 25 steps. Each sampling iteration produces a training batch of 700 steps; we then perform 16 iterations on every training batch with 2 SGD mini-batch updates per iteration (i.e., SGD batch size = 350).
Table 2 shows the common training hyper-parameters shared between the baseline models (MAPPO) and all of our methods. Table 4 shows the hyper-parameters for the WDL module.
D.2 DIMENSIONS OF FEATURE TENSORS IN THE CONTROLLER MODULE
Table 3 serves as a complementary description for Eqn. 1, 2, 3, 4. The table shows the dimensions of the feature tensors used in the controller module.“B” denotes the batch size. In the current model design, the dimension of local observation is adjusted based on the maximum number of camera agents (N_cammax) and the maximum number of observable humans (N_humanmax) in an environment. In our experiments, N_humanmax has been set to 7. The observation pre-processor will zero-pad each observation to a length equal N_humanmax × 18 if the current environment
instance has less than N_humanmax humans. “9” and “18” correspond to the feature length for a camera and a human, respectively. The MDN of the Target Prediction module has 16 Gaussian components, in which each component outputs (ϕ, µx, σx, µy, σy) and ϕ is the weight parameter of a component. The current implementation of MDN only predicts the x-y location of a human, which is a simplification since the z coordinate of a simulated human barely changes across an episode compared to the x and y coordinates. The dimension of an MDN output has a length of 80 and the exact prediction is produced by ϕ-weighted averaging. In Eqn. 4, {(ϕ, µ, σ)}MDNtgt is an encoded feature produced by passing the MDN output to a 2-layer MLP that has an output dimension of 128.
D.3 COMPUTATIONAL RESOURCES
We used 15 Ray workers for each experiment to ensure consistency in the training procedure. Each worker carries a Gym vectorized environment consisting of 4 actual environment instances and demands approximately 3.7 GB of VRAM. We run each experiment with 8 NVIDIA RTX 2080 Ti GPUs. Depending on the number of camera agents, the total training time required for an experiment to run 500k steps varies between 4 and 12 hours.
D.4 TRAINING CURVE
Fig. 13 shows the training curves of our method and the baseline method. We find that our methods converge faster and achieve higher reconstruction accuracy than the baseline method.
D.5 TOTAL INFERENCE TIME
Our solution can run in real-time. Table 5 reports the inference time for each module of the proposed Active3DPose pipeline.
E ADDITIONAL EXPERIMENT RESULTS
E.1 ABLATION STUDY ON WDL OBJECTIVES
We perform a detailed ablation study regarding the effect of each WDL sub-task on the model performance. As shown in Fig. 11, we can observe that the MPJPE metric gradually decreases as we incorporate more WDL losses. This aligns with our assumptions that training the model with world dynamics learning objectives will promote the model’s ability to capture a better representation of future states, which in turn increases performance. Our method additionally demonstrates the importance of incorporating information regarding the target’s future state into the encoder’s output features. Predicting the target’s future states should not only be used as an auxiliary task but should also directly influence the inference process of the actor model.
E.2 BASELINE — FIXED-CAMERAS
In addition to the triangulation and RANSAC baselines introduced in the main text, we compare and elaborate on two more baselines that use fixed cameras: (1) fixed cameras with RANSAC-based triangulation and temporal smoothing (TS) (2) an off-shelf 3D pose estimator PlaneSweepPose (Lin & Lee, 2021a).
In the temporal smoothing baseline, we applied a low-pass filter (Casiez et al., 2012) and temporal fusion where the algorithm will fill in any missing key points in the current frame with the detected key points from the last frame.
In the PlaneSweep baseline, as per the official instructions, we train three separate models (3 to 5 cameras) with the same camera setup as in our testing scenarios. We have tested the trained models in different scenarios and report the MPJPE results in Tables 6, 7, 8 and 9. Note that this off-the-shelf pose estimator performs better than the Fixed-Camera Baseline (Triangulation) but still underperforms compared to our active method. Fig. 12 illustrates the formations of the fixed-camera baselines.
Camera placements for all fixed-camera baselines are shown in Fig. 12. These formations are carefully designed so as not to disadvantage the fixed-camera baselines. This is especially true for the 5-camera pentagon formation, which helps the fixed-camera baseline obtain satisfactory performance in the 5-camera setting, as shown in Tables 6, 7, 8 and 9.
E.3 OUR METHOD ENHANCED WITH RANSAC-BASED TRIANGULATION
RANSAC is a generic technique that can be used to improve triangulation performance. In this experiment, we also train and test our model with RANSAC. The final result (Table 11) shows a further improvement on our original triangulation version.
F GENERATING SAFE AND SMOOTH TRAJECTORIES
F.1 COLLISION AVOIDANCE
In order to generate safe trajectories, in this section, we introduce and evaluate two different ways to enforce collision avoidance between cameras and humans.
Obstacle Collision Avoidance (OCA) OCA resembles a feed-forward PID controller on the final control outputs before execution. Concretely, OCA adds a constant reverse quantity to the control output if it detects any surrounding objects within its safety range. This “detouring” mechanism safeguards the cameras from possible collisions and prevents them from making dangerous maneuvers.
Action-Masking (AM) AM also resembles a feed-forward controller but is instead embedded into the forward step of the deep-learning model. At each step, AM module first identifies the dangerous actions among all possible actions, then modifies the probabilities (output by the policy model) of choosing the hazardous actions to be zero so that the learning-based control policy will never pick them. Note that AM must be trained with the MARL policy model.
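A minimal sketch of the masking step is shown below; how an action gets flagged as hazardous (e.g., by checking the safety range against nearby humans) is left as a placeholder, and this is an illustration rather than the module's actual implementation.

```python
# Action masking: zero the probabilities of hazardous actions and renormalise
# before sampling, so the policy can never pick them.
import numpy as np

def mask_and_sample(action_probs, hazardous):
    """action_probs: (A,) policy probabilities; hazardous: (A,) boolean mask."""
    masked = np.where(hazardous, 0.0, action_probs)
    if masked.sum() == 0.0:          # fall back if every action is flagged
        masked = np.ones_like(action_probs)
    masked = masked / masked.sum()
    return int(np.random.choice(len(masked), p=masked))
```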
We propose the minimum camera-human distance (covering both the target and the pedestrians) as the safety metric. It measures the distance between the camera and the closest human at each timestep. Fig. 14 shows the histograms of the minimum camera-human distance sampled over five episodes of 500 steps.
F.2 TRAJECTORY SMOOTHING
Exponential Moving Average (EMA) To generate smooth trajectories, we introduce EMA to smooth the outputs of our learned policy model. EMA is a common technique for smoothing time-series data. In our case, the smoothing operator is defined as:
â^t = â^{t−1} + η · (a^t − â^{t−1}),
where a^t is the action (or control signal) output by the model at the current step, â^{t−1} is the smoothed action from the last step, and η is the smoothing factor; a smaller η results in greater smoothness. â^t is the smoothed action that the camera will execute.
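In code the smoothing step is a one-liner; the default value of η below is illustrative only.

```python
# EMA smoothing of the policy output before execution.
def ema_smooth(prev_smoothed_action, raw_action, eta=0.3):
    return prev_smoothed_action + eta * (raw_action - prev_smoothed_action)
```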
F.3 ROBUSTNESS OF THE LEARNED POLICY
In this section, we evaluate the robustness of our model on different external perturbations to the control signal. In conclusion, our model shows resilience against the delay effect and random noise.
Delay EMA also introduces a delay effect while smoothing the generated trajectory: stronger smoothing comes with a larger delay. Here, we evaluate our model’s robustness to the delay simulated by EMA.
Random Action Noise Control devices in real-life are inevitably affected by errors. For example, the control signal of a controller may be over-damped or may overshoot. We simulated this type of random error by multiplying the output action by a uniformly-sampled noise.
In Table 12, we examine the effects of EMA delay and random action noise on the reconstruction accuracy of our model, marked as “Vanilla”.
G MORE EXPLANATIONS ON CTCR
Figure 3 is an example of using Eq. 6 to compute CTCR for each of the three cameras. CTCR is inspired by the Shapley Value. The main idea is that the overall optimality also needs to account for the optimality of every possible sub-formation. For a camera agent to receive the highest CTCR possible, its current position and view must be optimal both in terms of the full formation and of any possible sub-formation.
Note: a group of collaborating players is often referred to as a “coalition” in the literature. Here we apply the same concept to a group of cameras, so we use the more intuitive term “formation” instead.
Eq. 6 can be broken down further as follows:
φ_r(i) = ∑_{S ⊆ ⟦n⟧ \ {i}} [|S|!(n − |S| − 1)! / n!] · [r(S ∪ {i}) − r(S)]
       = (1/n) ∑_{S ⊆ ⟦n⟧ \ {i}} [|S|!(n − 1 − |S|)! / (n − 1)!] · [r(S ∪ {i}) − r(S)]
       = (1/n) ∑_{S ⊆ ⟦n⟧ \ {i}} C(n − 1, |S|)^{−1} · [r(S ∪ {i}) − r(S)],
where ⟦n⟧ = {1, 2, . . . , n} denotes the set of all cameras and the binomial coefficient C(n, k) = n! / (k!(n − k)!), 0 ≤ k ≤ n. S denotes a formation (a subset) without camera i. C(n − 1, |S|) is the number of sub-formations of size |S| that exclude camera i (i.e., the binomial coefficient), which serves as a normalization term. r(S) computes the reconstruction accuracy of the formation S, and [r(S ∪ {i}) − r(S)] computes the marginal improvement after adding camera i to the sub-formation S. So this equation means we iterate over all possible S, compute the marginal contribution of camera i, and average over all possible combinations of (S, i).
Suppose we have a 3-camera formation, as shown in Figure 3, so n = 3 is the number of cameras. Let us name these cameras (1, 2, 3) and consider only Camera 1 for now. Since we are computing the average marginal contribution of Camera 1, we look at the formations that do not contain Camera 1, because we want to see how much performance increases when Camera 1 is added to them. Among all possible formations drawn from ⟦n⟧, four satisfy this condition: S ⊆ ⟦n⟧ \ {1} → S ∈ {∅, {2}, {3}, {2, 3}}. The binomial coefficient C(n − 1, |S|) for a 2-camera sub-formation in the 3-camera case is C(2, 2) = 1, which makes sense because there is only one sub-formation of size two that does not contain Camera 1, namely {2, 3}. r({2, 3}) computes the reconstruction accuracy of the formation {2, 3}, and r({2, 3} ∪ {1}) computes the reconstruction accuracy after adding Camera 1 to the sub-formation {2, 3}; their difference gives the marginal contribution of Camera 1. Summing over all subsets S of ⟦n⟧ not containing Camera 1, dividing each term by C(n − 1, |S|), and dividing by the number of cameras n gives the average marginal contribution of Camera 1, φ_r(1), to the collaborative triangulated reconstruction. Multiplying this term by n yields CTCR(1), as in Eq. 6.
H ANALYSIS ON MODES OF BEHAVIORS OF THE TRAINED AGENTS
In Figure 15 we provide statistics and analysis on the behaviour modes of the agents controlled by our 3-, 4- and 5-camera policies, respectively. We are interested in understanding the characteristics of the emergent formations learned by our model, so we propose three quantitative measures to describe the topology of the emergent formations: (1) the min-camera angle between the cameras' orientations and its per-frame mean, (2) the camera's pitch angle, and (3) the camera-human distance. We provide rigorous definitions as follows:
min-camera angle(i) = min_{j ≠ i} ⟨axis of camera i, axis of camera j⟩
per-frame-mean of min-camera angle = (1/n) ∑_{i ∈ [n]} min-camera angle(i)
In simpler terms, min-camera angle(i) finds the minimum angle between camera i and any other camera j. “per-frame-mean of min-camera angle” is the mean of min-camera angle(i) for all camera i in one frame. A positive camera’s pitch angle means looking upward, and a negative camera’s pitch angle means looking downward. The camera-human distance measures the distance between the target human and the given camera.
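Both statistics can be computed directly from the cameras' viewing-direction (axis) vectors; a numpy sketch, assuming unit-norm axes, is given below.

```python
# Formation statistics: min-camera angle per camera and its per-frame mean.
import numpy as np

def min_camera_angle(axes, i):
    """axes: (n, 3) unit viewing directions; returns min_{j != i} angle(i, j) in degrees."""
    cosines = axes @ axes[i]
    cosines[i] = -np.inf                      # exclude the camera itself
    best = np.clip(cosines.max(), -1.0, 1.0)  # closest other camera = largest cosine
    return float(np.degrees(np.arccos(best)))

def per_frame_mean_min_angle(axes):
    n = len(axes)
    return float(np.mean([min_camera_angle(axes, i) for i in range(n)]))
```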
Regarding the distance between cameras and humans, the camera agents actively adjust their positions relative to the target human. The cameras learn to keep a camera-human distance between 2 m and 3 m, neither too far from the target human nor so close as to violate the safety constraint. The camera agents surround the target at an appropriate distance to obtain better resolution in the 2D views. In the meantime, since the safe-distance constraint is enforced during training, the camera agents are prohibited from aggressively minimizing the camera-human distance.
The histograms of the cameras' pitch angles suggest that the cameras mostly maintain negative pitch angles. Their preferred strategy is to hover over the humans and capture images from a higher altitude, likely as a form of emergent occlusion avoidance. The cameras also learn to fly level with the humans (where the pitch angle is approximately 0) to capture more accurate 2D poses of the target human; this propensity is apparent from the peaks at x = 0 in the histograms. The relatively wide distribution of the pitch-angle histogram suggests that the camera formation is spread out in 3D space and that the camera agents dynamically adjust their flying heights and pitch angles.
For the average angle between the cameras’ orientations, this statistic shows that the cameras in various formations will maintain reasonable non-zero spatial angles between each other. Therefore, their camera views are less likely to coincide and provide more diverse perspectives to generate a more reliable 3D triangulation. | 1. What is the focus and contribution of the paper regarding active multi-camera motion capture?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of generalizability and understanding of the learned policy?
3. Do you have any minor questions regarding the paper, such as the limitation of rotation to 2D and the sudden increase in error for MAPPO at 5-6 humans?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper describes a method for active (cameras can move proactively) multi-camera motion capture. The problem is formulated under a multi-agent reinforcement learning framework, where the authors propose a new reward (Collaborative Triangulation Contribution Reward) to incentivize agents according to their weighted average marginal contribution to the 3D reconstruction. The proposed method is evaluated in UE4 environments, where it has demonstrated notable improvements over baseline and competing methods.
Strengths And Weaknesses
Strength
The methodology is clearly motivated, and I found it quite novel to use multi-agent reinforcement learning for multi-camera motion capture.
The method is extensively evaluated with plenty of experiments and analysis.
The proposed virtual environment also seems a very useful tool, and can facilitate future research in this direction if it is open-sourced.
Weakness
Though it's a very interesting work, I am mainly concerned with two aspects: generalizability, and the limited understanding of the learned policy.
Generalizability: as the model training seems only possible with virtual environments, and the real world can exhibit significant gaps with such environments (e.g. different body motion, different types of occlusion), it's not clear whether the learned policy can generalize to real environments. The experiments in Fig. 7 are helpful; I think it would be better to also add an upper-bound model ("ours" trained/finetuned in the same environment) to get a sense of how the model is impacted by the domain gap. Nevertheless, I think the experiment only provides a partial view; more evidence (e.g. experiments on real datasets) may be needed to validate the generalizability.
Understanding of the learned policy is limited: I appreciate the amount of experiments provided. But it is still not fully clear what is actually learned by the policy and why it is better than the baseline. It would be helpful if the authors could provide more statistics/analysis on the behavior modes of the agents and visualizations. Besides, it would also be useful to provide experiments with more cameras until the performance gap between the fixed cameras and the learned policy is closed. (I would expect that with more cameras, they will spread out to cover most views, thus the improvement from a learned policy can become marginal.)
Other questions
I also have several minor questions; it would be good if the authors could discuss them:
wrt 3.1 Action Space, why rotation is limited to 2D and there is no roll?
wrt Fig. 6, there is a sudden increase in error for MAPPO at 5-6 humans; it would be interesting to understand why this happens and to provide some illustration to demonstrate the error mode. Also, it would be useful to provide the same visualization for MAPPO + RLPred to understand why RLPred is so helpful in such cases.
wrt 3.2 Perception Module, it would be helpful to provide an ablation study on the 2D pose models used, and to see how errors in the 2D pose would impact the learned policy. Currently it's not very clear whether the policy learned to encourage collaboration among cameras, or whether it mostly encourages each camera to move to a view that is more optimal for the 2D pose model. It would be interesting to provide some results with GT 2D poses (from the virtual environment), and some cross results to test the generalization ability (e.g. learned from GT 2D poses, tested with YOLOv3, etc.)
Clarity, Quality, Novelty And Reproducibility
Clarity
The paper is clearly presented, I also appreciate that plenty of details are given.
Novelty
As also mentioned above, I found it quite novel to use multi-agent reinforcement learning for multi-camera motion capture.
Reproducibility
Though I am not confident that the work can be reproduced 100% without available implementation as a reference, overall I am satisfied with the amount of details given. |
ICLR | Title
Proactive Multi-Camera Collaboration for 3D Human Pose Estimation
Abstract
This paper presents a multi-agent reinforcement learning (MARL) scheme for proactive Multi-Camera Collaboration in 3D Human Pose Estimation in dynamic human crowds. Traditional fixed-viewpoint multi-camera solutions for human motion capture (MoCap) are limited in capture space and susceptible to dynamic occlusions. Active camera approaches proactively control camera poses to find optimal viewpoints for 3D reconstruction. However, current methods still face challenges with credit assignment and environment dynamics. To address these issues, our proposed method introduces a novel Collaborative Triangulation Contribution Reward (CTCR) that improves convergence and alleviates multi-agent credit assignment issues resulting from using 3D reconstruction accuracy as the shared reward. Additionally, we jointly train our model with multiple world dynamics learning tasks to better capture environment dynamics and encourage anticipatory behaviors for occlusion avoidance. We evaluate our proposed method in four photo-realistic UE4 environments to ensure validity and generalizability. Empirical results show that our method outperforms fixed and active baselines in various scenarios with different numbers of cameras and humans. (a) Dynamic occlusions lead to failed reconstruction (b) Constrained MoCap area Active MoCap in the wild Figure 1: Left: Two critical challenges in fixed camera approaches. Right: Three active cameras collaborate to best reconstruct the 3D pose of the target (marked in ).
1 INTRODUCTION
Marker-less motion capture (MoCap) has broad applications in many areas such as cinematography, medical research, virtual reality (VR), sports, and etc. Their successes can be partly attributed to recent developments in 3D Human pose estimation (HPE) techniques (Tu et al., 2020; Iskakov et al., 2019; Jafarian et al., 2019; Pavlakos et al., 2017b; Lin & Lee, 2021b). A straightforward implementation to solve multi-views 3D HPE is to use fixed cameras. Although being a convenient solution, it is less effective against dynamic occlusions. Moreover, fixed camera solutions confine tracking targets within a constrained space, therefore less applicable to outdoor MoCap. On the contrary, active cameras (Luo et al., 2018; 2019; Zhong et al., 2018a; 2019) such as ones mounted on drones can maneuver proactively against incoming occlusions. Owing to its remarkable flexibility, the active approach has thus attracted overwhelming interest (Tallamraju et al., 2020; Ho et al., 2021; Xu et al., 2017; Kiciroglu et al., 2019; Saini et al., 2022; Cheng et al., 2018; Zhang et al., 2021).
∗Equal Contribution. BCorresponding author. Project Website: https://sites.google.com/view/active3dpose
Previous works have demonstrated the effectiveness of using active cameras for 3D HPE on a single target in indoor (Kiciroglu et al., 2019; Cheng et al., 2018), clean landscapes (Tallamraju et al., 2020; Nägeli et al., 2018; Zhou et al., 2018; Saini et al., 2022) or landscapes with scattered static obstacles (Ho et al., 2021). However, to the best of our knowledge, we have not seen any existing work that experimented with multiple (n > 3) active cameras to conduct 3D HPE in human crowd. There are two key challenges : First, frequent human-to-human interactions lead to random dynamic occlusions. Unlike previous works that only consider clean landscapes or static obstacles, dynamic scenes require frequent adjustments of cameras’ viewpoints for occlusion avoidance while keeping a good overall team formation to ensure accurate multi-view reconstruction. Therefore, achieving optimality in dynamic scenes by implementing a fixed camera formation or a hand-crafted control policy is challenging. In addition, the complex behavioural pattern of a human crowd makes the occlusion patterns less comprehensible and predictable, further increasing the difficulty in control. Second, as the team size grows larger, the multi-agent credit assignment issue becomes prominent which hinders policy learning of the camera agents. Concretely, multi-view 3D HPE as a team effort requires inputs from multiple cameras to generate an accurate reconstruction. Having more camera agents participate in a reconstruction certainly introduces more redundancy, which reduces the susceptibility to reconstruction failure caused by dynamic occlusions. However, it consequently weakens the association between individual performance and the reconstruction accuracy of the team, which leads to the “lazy agent” problem (Sunehag et al., 2017). In this work, we introduce a proactive multi-camera collaboration framework based on multi-agent reinforcement learning (MARL) for real-time distributive adjustments of multi-camera formation for 3D HPE in a human crowd. In our approach, multiple camera agents perform seamless collaboration for successful reconstructions of 3D human poses. Additionally, it is a decentralized framework that offers flexibility over the formation size and eliminates dependency on a control hierarchy or a centralized entity. Regarding the first challenge, we argue that the model’s ability to predict human movements and environmental changes is crucial. Thus, we incorporate World Dynamics Learning (WDL) to train a state representation with these properties, i.e., learning with five auxiliary tasks to predict the target’s position, pedestrians’ positions, self state, teammates’ states, and team reward. To tackle the second challenge, we further introduce the Collaborative Triangulation Contribution Reward (CTCR), which incentivizes each agent according to its characteristic contribution to a 3D reconstruction. Inspired by the Shapley Value (Rapoport, 1970), CTCR computes the average weighted marginal contribution to the 3D reconstruction for any given agent over all possible coalitions that contain it. This reward aims to directly associate agents’ levels of participation with their adjusted return, guiding their policy learning when the team reward alone is insufficient to produce such direct association. Moreover, CTCR penalizes occluded camera agents more efficiently than the shared reward, encouraging emergent occlusion avoidance behaviors. 
Empirical results show that CTCR can accelerate convergence and increase reconstruction accuracy. Furthermore, CTCR is a general approach that can benefit policy learning in active 3D HPE and serve as a new assessment metric for view selection in other multi-view reconstruction tasks. For the evaluations of the learned policies, we build photo-realistic environments (UnrealPose) using Unreal Engine 4 (UE4) and UnrealCV (Qiu et al., 2017). These environments can simulate realistic-behaving crowds with assurances of high fidelity and customizability. We train the agents on a Blank environment and validate their policies on three unseen scenarios with different landscapes, levels of illumination, human appearances, and various quantities of cameras and humans. The empirical results show that our method can achieve more accurate and stable 3D pose estimates than off-the-shelf passive- and active-camera baselines. To help facilitate more fruitful research on this topic, we release our environments with OpenAI Gym-API (Brockman et al., 2016) integration and together with a dedicated visualization tool. Here we summarize the key contributions of our work: • Formulating the active multi-camera 3D human pose estimation problem as a Dec-POMDP and
proposing a novel multi-camera collaboration framework based on MARL (with n ≥ 3). • Introducing five auxiliary tasks to enhance the model’s ability to learn the dynamics of highly
dynamic scenes. • Proposing CTCR to address the credit assignment problem in MARL and demonstrating notable
improvements in reconstruction accuracy compared to both passive and active baselines. • Contributing high-fidelity environments for simulating realistic-looking human crowds with au-
thentic behaviors, along with visualization software for frame-by-frame video analysis.
2 RELATED WORK
3D Human Pose Estimation (HPE) Recent research on 3D human pose estimation has shown significant progress in recovering poses from single monocular images (Ma et al., 2021; Pavlakos et al., 2017a; Martinez et al., 2017; Kanazawa et al., 2018; Pavlakos et al., 2018; Sun et al., 2018; Ci et al., 2019; Zeng et al., 2020; Ci et al., 2020) or monocular video (Mehta et al., 2017; Hossain & Little, 2018; Pavllo et al., 2019; Kocabas et al., 2020). Other approaches utilize multi-camera systems for triangulation to improve visibility and eliminate ambiguity (Qiu et al., 2019; Jafarian et al., 2019; Pavlakos et al., 2017b; Dong et al., 2019; Lin & Lee, 2021b; Tu et al., 2020; Iskakov et al., 2019). However, these methods are often limited to indoor laboratory environments with fixed cameras. In contrast, our work proposes an active camera system with multiple mobile cameras for outdoor scenes, providing greater flexibility and adaptability.
Proactive Motion Capture Few previous works have studied proactive motion capture with a single mobile camera (Zhou et al., 2018; Cheng et al., 2018; Kiciroglu et al., 2019). In comparison, more works have studied the control of a multi-camera team. Among them, many are based on optimization with various system designs, including marker-based (Nägeli et al., 2018), RGBD-based (Xu et al., 2017), two-stage system (Saini et al., 2019; Tallamraju et al., 2019), hierarchical system (Ho et al., 2021), etc. It is important to note that all the above methods deal with static occlusion sources or clean landscapes. Additionally, the majority of these works adopt hand-crafted optimization objectives and some forms of fixed camera formations. These factors result in poor adaptability to dynamic scenes that are saturated with uncertainties. Recently, RL-based methods have received more attention due to their potential for dynamic formation adjustments. These works have studied active 3D HPE in the Gazebo simulation (Tallamraju et al., 2020) or Panoptic dome (Joo et al., 2015; Pirinen et al., 2019; Gärtner et al., 2020) for active view selection. Among them, AirCapRL (Tallamraju et al., 2020) shares similarities with our work. However, it is restricted to coordinating between two cameras in clean landscapes without occlusions. We study collaborations between multiple cameras (n ≥ 3) and resolve the credit assignment issue with our novel reward design (CTCR). Meanwhile, we study a more challenging scenario with multiple distracting humans serving as sources of dynamic occlusions, which requires more sophisticated algorithms to handle.
Multi-Camera Collaboration Many works in computer vision have studied multi-camera collaboration and designed active camera systems accordingly. Earlier works (Collins et al., 2003; Qureshi & Terzopoulos, 2007; Matsuyama et al., 2012) focused on developing a network of pan-tile-zoom (PTZ) cameras. Owing to recent advances in MARL algorithms (Lowe et al., 2017; Sunehag et al., 2017; Rashid et al., 2018; Wang et al., 2020; Yu et al., 2021; Jin et al., 2022), many works have formulated multi-camera collaboration as a multi-agent learning problem and solved it using MARL algorithms accordingly (Li et al., 2020; Xu et al., 2020; Wenhong et al., 2022; Fang et al., 2022; Sharma et al., 2022; Pan et al., 2022). However, most works focus on the target tracking problem, whereas this work attempts to solve the task of 3D HPE. Compared with the tracking task, 3D HPE has stricter criteria for optimal view selections due to correlations across multiple views, which necessitates intelligent collaboration between cameras. To our best knowledge, this work is the first to experiment with various camera agents (n ≥ 3) to learn multi-camera collaboration strategies for active 3D HPE.
3 PROACTIVE MULTI-CAMERA COLLABORATION
This section will explain the formulation of multi-camera collaboration in 3D HPE as a Dec-POMDP. Then, we will describe our proposed solutions for modelling the virtual environment’s complex dynamics and strengthening credit assignment in the multi-camera collaboration task.
3.1 PROBLEM FORMULATION
We formulate the multi-camera 3D HPE problem as a Decentralized Partially-Observable Markov Decision Process (Dec-POMDP), where each camera is considered an agent that is controlled in a decentralized manner and has partial observability over the environment. Formally, a Dec-POMDP is defined as $\langle S, O, A^n, n, P, r, \gamma \rangle$, where $S$ denotes the global state space of the environment, including all human states and camera states in our problem. $o_i \in O$ denotes agent $i$'s local observation, i.e., the RGB image observed by camera $i$. $A$ denotes the action space of an agent and $A^n$ represents the joint action space of all $n$ agents. $P: S \times A^n \to S$ is the transition probability function $P(s^{t+1} \mid s^t, \mathbf{a}^t)$, in which $\mathbf{a}^t \in A^n$ is a joint action by all $n$ agents. At each timestep $t$, every agent obtains a local view $o_i^t$ from the environment $s^t$ and then preprocesses $o_i^t$ to form the $i$-th agent's local observation $\tilde{o}_i^t$. The agent performs action $a_i^t \sim \pi_i^t(\cdot \mid \tilde{o}_i^t)$ and receives its reward $r(s^t, \mathbf{a}^t)$. $\gamma \in (0, 1]$ is the discount factor used to calculate the cumulative discounted reward $G(t) = \sum_{t' \geq t} \gamma^{t'-t} r(t')$.

In a cooperative team, the objective is to learn a group of decentralized policies $\{\pi_i(a_i^t \mid \tilde{o}_i^t)\}_{i=1}^{n}$ that maximizes $\mathbb{E}_{(s,a)\sim\pi}[G(t)]$. For convenience, we denote $i$ as the agent index, $[\![n]\!] = \{1, \ldots, n\}$ as the set of all $n$ agents, and $-i = [\![n]\!] \setminus \{i\}$ as all $n$ agents except agent $i$.

Observation Camera agents have partial observability over the environment. The pre-processed observation $\tilde{o}_i = (p_i, \xi_i, \xi_{-i})$ of camera agent $i$ consists of: (1) $p_i$, the states of the humans visible to agent $i$, including the detected human bounding-box in the 2D local view and the 3D positions and orientations of all visible humans, measured in both the local coordinate frame of camera $i$ and world coordinates; (2) its own camera pose $\xi_i$, giving the camera's position and orientation in world coordinates; (3) the peer camera poses $\xi_{-i}$, giving their positions and orientations in world coordinates, obtained via multi-agent communication.

Action Space The action space of each camera agent consists of the velocity of 3D egocentric translation $(x, y, z)$ and the velocity of 2D pitch-yaw rotation $(\theta, \psi)$. To reduce the exploration space for state-action mapping, the agent's action space is discretized into three levels across all five dimensions. At each timestep, the camera agent can move its position by $[+\delta, 0, -\delta]$ in the $(x, y, z)$ directions and rotate about the pitch-yaw axes by $[+\eta, 0, -\eta]$ degrees. In our experiments, the camera's pitch-yaw angles are controlled by a rule-based system.
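For concreteness, the discretized action set described above can be enumerated as in the following minimal Python sketch. The step sizes `delta` (metres) and `eta` (degrees) are illustrative placeholders, not the values used in the paper.

```python
from itertools import product

def build_action_set(delta=0.2, eta=3.0):
    """Enumerate the discretized action space of Sec. 3.1: three levels per
    dimension over (x, y, z) translation velocity and (pitch, yaw) rotation
    velocity, i.e. 3**5 candidate actions."""
    trans = [+delta, 0.0, -delta]
    rot = [+eta, 0.0, -eta]
    return [
        {"vx": vx, "vy": vy, "vz": vz, "dpitch": dp, "dyaw": dyw}
        for vx, vy, vz, dp, dyw in product(trans, trans, trans, rot, rot)
    ]

# With pitch-yaw handled by the rule-based controller used in the experiments,
# the learned policy effectively chooses among the 3**3 = 27 translation actions.
```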
3.2 FRAMEWORK
This section will describe the technical framework that constitutes our camera agents, which contains a Perception Module and a Controller Module. The Perception Module maps the original RGB images taken by the camera to numerical observations. The Controller Module takes these numerical observations and produces corresponding control signals. Fig. 2 illustrates this framework.

Perception Module The perception module executes a procedure consisting of four sequential stages: (1) 2D HPE. The agent performs 2D human detection and pose estimation on the observed RGB image with the YOLOv3 (Redmon & Farhadi, 2018) detector and the HRNet-w32 (Sun et al., 2019) pose estimator, respectively. Both models are pre-trained on the COCO dataset (Lin et al., 2014), and their parameters are kept frozen during policy learning of camera agents to ensure cross-scene generalization. (2) Person ReID. A ReID model (Zhong et al., 2018c) is used to distinguish people in a scene. For simplicity, an appearance dictionary of all people who may appear is built in advance following (Gärtner et al., 2020). At test time, the ReID network computes features for all detected people and identifies different people by comparing features to the pre-built appearance dictionary. (3) Multi-agent Communication. The detected 2D human poses, IDs, and the camera's own pose are broadcast to other agents. (4) 3D HPE. The 3D human pose is reconstructed via local triangulation after receiving communications from other agents. The estimated position and orientation of a person can then be extracted from the corresponding reconstructed human pose. The communication process is illustrated in Appendix Fig. 9.
Controller Module The controller module consists of a state encoder E and an actor network A. The state encoder E takes $\tilde{o}_i^t$ as input, encoding the temporal dynamics of the environment via an LSTM. The future states of the target, pedestrians, and cameras are modelled using a Mixture Density Network (MDN) (Bishop, 1994) to account for uncertainty. During model inference, the encoder first predicts the target position, and the $(\phi, \mu, \sigma)$ parameters of the target-prediction MDN are then used as part of the inputs to the actor network to enhance feature encoding. Please refer to Section 3.4 for more details regarding training the MDN.
Feature Embedding: $z_i^t = \mathrm{MLP}(\tilde{o}_i^t)$, (1)

Temporal Modeling: $h_i^t = \mathrm{LSTM}(z_i^t, h_i^{t-1})$, (2)

Human Trajectory Prediction: $\hat{p}_{\mathrm{tgt/pd}}^{t+1} = \mathrm{MDN}(z_i^t, h_i^t, p_{\mathrm{tgt/pd}}^t)$, (3)

Final Embedding: $e_i^t = E(\tilde{o}_i^t, h_i^{t-1}) = \mathrm{Concat}(z_i^t, h_i^t, \{(\phi, \mu, \sigma)\}_{\mathrm{tgt}}^{\mathrm{MDN}})$, (4)

where $p_{\mathrm{tgt}}$ and $p_{\mathrm{pd}}$ refer to the state of the target and the pedestrian, respectively. The actor network $A$ consists of 2 fully-connected layers that output the action, $a_i^t = A(E(\tilde{o}_i^t, h_i^{t-1}))$.
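To make Eqs. (1)-(4) concrete, the sketch below shows one possible PyTorch realisation of the encoder and actor. All dimensions (observation size, hidden size, number of mixture components, number of discrete actions) and the exact way the MDN parameters are re-embedded are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ControllerEncoder(nn.Module):
    """Minimal sketch of the state encoder E and actor A of Eqs. (1)-(4)."""
    def __init__(self, obs_dim=135, tgt_dim=2, hidden=128, K=16, n_actions=27):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())   # Eq. (1)
        self.lstm = nn.LSTMCell(hidden, hidden)                             # Eq. (2)
        # MDN head: per component a weight plus a 2-D mean and std -> 5 * K values
        self.mdn = nn.Linear(2 * hidden + tgt_dim, 5 * K)                   # Eq. (3)
        self.mdn_embed = nn.Sequential(nn.Linear(5 * K, hidden), nn.ReLU())
        self.actor = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_actions))            # actor A

    def forward(self, obs, p_tgt, state):
        h_prev, c_prev = state
        z = self.embed(obs)                                    # z_i^t
        h, c = self.lstm(z, (h_prev, c_prev))                  # h_i^t
        mdn_params = self.mdn(torch.cat([z, h, p_tgt], dim=-1))  # (phi, mu, sigma)
        e = torch.cat([z, h, self.mdn_embed(mdn_params)], dim=-1)  # Eq. (4)
        return self.actor(e), (h, c), mdn_params
```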
3.3 REWARD STRUCTURE
To alleviate the credit assignment issue that arises in multi-camera collaboration, we propose the Collaborative Triangulation Contribution Reward (CTCR). We start by defining a base reward that reflects the reconstruction accuracy of the triangulated pose generated by the camera team. Then we explain how our CTCR is computed based on this base team reward.
Reconstruction Accuracy as a Team Reward To directly reflect the reconstruction accuracy, the reward function negatively correlates with the pose estimation error (Mean Per Joint Position Error, MPJPE) of the multi-camera triangulation. Formally,
$$r(X) = \begin{cases} 0, & |X| \leq 1, \\ 1 - G(\mathrm{MPJPE}(X)), & |X| \geq 2, \end{cases} \qquad (5)$$

where the set $X$ represents the participating cameras in triangulation and $G(\cdot)$ is the Geman-McClure smoothing function, $G(\cdot) = \frac{2(\cdot/c)^2}{(\cdot/c)^2 + 4}$, used to stabilize policy updates, with $c = 50\,\mathrm{mm}$ in our experiments. However, the shared team reward structure in our MAPPO baseline, where each camera in the entire camera team $X$ receives a common reward $r(X)$, presents a credit assignment challenge, especially when a camera is occluded, resulting in a reduced reward for all cameras. To address this issue, we propose a new approach called Collaborative Triangulation Contribution Reward (CTCR).
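As a small illustration, the base team reward of Eq. (5) can be written as follows. The helper `mpjpe_of` is an assumption: a callable that triangulates the target from the given camera subset and returns the MPJPE in millimetres.

```python
def geman_mcclure(x, c=50.0):
    """Geman-McClure smoothing of Eq. (5), with c = 50 mm as in the paper."""
    return 2.0 * (x / c) ** 2 / ((x / c) ** 2 + 4.0)

def team_reward(cameras, mpjpe_of):
    """Base team reward r(X) of Eq. (5) for a camera subset `cameras`."""
    if len(cameras) <= 1:
        return 0.0              # a single view cannot triangulate
    return 1.0 - geman_mcclure(mpjpe_of(cameras))
```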
Collaborative Triangulation Contribution Reward (CTCR) CTCR computes each agent’s individual reward based on its marginal contribution to the collaborative multi-view triangulation. Refer to Fig. 3 for a rundown of computing CTCR for a 3-cameras team. The contribution of agent i can be measured by:
$$\mathrm{CTCR}(i) = n \cdot \varphi_r(i), \qquad \varphi_r(i) = \sum_{S \subseteq [\![n]\!] \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \left[ r(S \cup \{i\}) - r(S) \right], \qquad (6)$$

where $n$ denotes the total number of agents, $S$ ranges over all subsets of $[\![n]\!]$ not containing agent $i$, $\frac{|S|!\,(n-|S|-1)!}{n!}$ is the normalization term, and $[r(S \cup \{i\}) - r(S)]$ is the marginal contribution of agent $i$. Note that $\sum_{i \in [\![n]\!]} \varphi_r(i) = r([\![n]\!])$. We additionally multiply by a constant $n$ to rescale the CTCR to the same scale as the team reward. In particular, in the 2-cameras case, the individual CTCR is equivalent to the team reward, i.e., $\mathrm{CTCR}(i=1) = \mathrm{CTCR}(i=2) = r(\{1, 2\})$. For more explanations on CTCR, please refer to Appendix Section G.
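A minimal sketch of Eq. (6) is given below, enumerating all sub-formations explicitly. The function and variable names are illustrative; `r` is assumed to map a frozenset of camera indices to the base reward of Eq. (5).

```python
from itertools import combinations
from math import factorial

def ctcr(r, n):
    """Shapley-style CTCR of Eq. (6) for all agents.  Complexity is O(2^n),
    which is cheap for the team sizes (n <= 5) studied here."""
    out = {}
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi += w * (r(s | {i}) - r(s))   # marginal contribution of i
        out[i] = n * phi                          # rescale to the team-reward scale
    return out
```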
3.4 LEARNING MULTI-CAMERA COLLABORATION VIA MARL
We employ the multi-agent learning variant of PPO (Schulman et al., 2017) called Multi-Agent PPO (MAPPO) (Yu et al., 2021) to learn the collaboration strategy. Alongside the RL loss, we jointly train the model with five auxiliary tasks that encourage comprehension of the world dynamics and the stochasticity in human behaviours. The pseudocode can be found in Appendix A.
World Dynamics Learning (WDL) We use the encoder's hidden states $(z_i^t, h_i^t)$ as the basis to model the world. Three WDL objectives correspond to modelling agent dynamics: (1) learning the forward dynamics of the camera $P_1(\xi_i^{t+1} \mid z_i^t, h_i^t, a_i^t)$, (2) prediction of the team reward $P_2(r^t \mid z_i^t, h_i^t, a_i^t)$, (3) prediction of the future positions of peer agents $P_3(\xi_{-i}^{t+1} \mid z_i^t, h_i^t, a_i^t)$. Two WDL objectives correspond to modelling human dynamics: (4) prediction of the future position of the target person $P_4(p_{\mathrm{tgt}}^{t+1} \mid z_i^t, h_i^t, p_{\mathrm{tgt}}^t)$, (5) prediction of the future positions of pedestrians $P_5(p_{\mathrm{pd}}^{t+1} \mid z_i^t, h_i^t, p_{\mathrm{pd}}^t)$. All the probability functions above are approximated using Mixture Density Networks (MDNs) (Bishop, 1994).
Total Training Objectives $L_{\mathrm{Train}} = L_{\mathrm{RL}} + \lambda_{\mathrm{WDL}} L_{\mathrm{WDL}}$. $L_{\mathrm{RL}}$ is the reinforcement learning loss, consisting of the PPO-Clipped loss and the centralized-critic network loss, similar to MAPPO (Yu et al., 2021). $L_{\mathrm{WDL}} = -\frac{1}{n} \sum_l \lambda_l \sum_i \mathbb{E}[\log P_l(\cdot \mid \tilde{o}_i^t, h_i^t, a_i^t)]$ is the world dynamics learning loss, consisting of MDN supervised losses on the five prediction tasks mentioned above.
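For concreteness, a sketch of one WDL term (the negative log-likelihood of a diagonal-Gaussian mixture) and of how the losses could be combined is given below. The tensor shapes, weight values, and helper names are assumptions for illustration rather than the paper's exact parameterisation.

```python
import torch

def mdn_nll(phi_logits, mu, log_sigma, target):
    """NLL of a diagonal-Gaussian mixture (one WDL prediction task).
    Shapes assumed: phi_logits (B, K), mu and log_sigma (B, K, D), target (B, D)."""
    log_phi = torch.log_softmax(phi_logits, dim=-1)              # mixture weights
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(target.unsqueeze(1)).sum(-1)        # (B, K)
    return -torch.logsumexp(log_phi + log_prob, dim=-1).mean()

def total_loss(rl_loss, wdl_losses, lambdas, lambda_wdl=0.1):
    """L_Train = L_RL + lambda_WDL * L_WDL; the weights here are placeholders."""
    wdl = sum(l * loss for l, loss in zip(lambdas, wdl_losses))
    return rl_loss + lambda_wdl * wdl
```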
4 EXPERIMENT
In this section, we first introduce our novel environment, UNREALPOSE, used for training and testing the learned policies. Then we compare our method with multi-passive-camera baselines and perform an ablation study on the effectiveness of the proposed CTCR and WDL objectives. Additionally, we evaluate the effectiveness of the learned policies by comparing them against other active multi-camera methods. Lastly, we test our method in four different scenarios to showcase its robustness.
4.1 UNREALPOSE: A VIRTUAL ENVIRONMENT FOR PROACTIVE HUMAN POSE ESTIMATION
We built four virtual environments for simulating active HPE in the wild using Unreal Engine 4 (UE4), which is a powerful 3D game engine that can provide real-time and photo-realistic renderings for making visually-stunning video games. The environments handle the interactions between realistic-behaving human crowds and camera agents. Here is a list of characteristics of UNREALPOSE that we would like to highlight: Realistic: diverse generation of human trajectories, built-in collision avoidance, and several scenarios with different human appearance, terrain, and level of illumination. Flexibility: extensive configuration of the numbers of humans, cameras, or their physical properties, with more than 100 MoCap action sequences incorporated. RL-Ready: integrated with the OpenAI Gym API, with the communication module of the UnrealCV (Qiu et al., 2017) plugin overhauled with an inter-process communication (IPC) mechanism. For more detailed descriptions, please refer to Appendix Section B.
4.2 EVALUATION METRICS
We use Mean Per Joint Position Error (MPJPE) as our primary evaluation metric, which measures the difference between the ground truth and the reconstructed 3D pose on a per-frame basis. However, using MPJPE alone may not provide a complete understanding of the robustness of a multi-camera collaboration policy for two reasons: firstly, camera adjustments to improve perception quality may take multiple frames to complete; secondly, high peaks in MPJPE may be masked by the mean aggregation. To address this, we introduce the “success rate” metric, which evaluates the smooth execution and robustness of the learned policies. The success rate is calculated as the ratio of frames in an episode with MPJPE no greater than τ. Formally, $\mathrm{SuccessRate}(\tau) = P(\mathrm{MPJPE} \leq \tau)$. This metric is a temporal measure that reflects the integrity of multi-view coordination. Poor coordination may cause partial occlusions or too many overlapping perceptions, leading to a significant increase in MPJPE and a subsequent decrease in the success rate.
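As a small illustration, the success rate can be computed from a per-frame MPJPE trace as follows; the 100 mm threshold is only an example value, not the one used in the paper.

```python
import numpy as np

def success_rate(mpjpe_per_frame, tau=100.0):
    """Fraction of frames in an episode with MPJPE at or below tau (mm)."""
    mpjpe_per_frame = np.asarray(mpjpe_per_frame, dtype=float)
    return float(np.mean(mpjpe_per_frame <= tau))
```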
4.3 RESULTS AND ANALYSIS
The learning-based control policies were trained in a total of 28 instances of the BlankEnv, where each instance uniformly contained 1 to 6 humans. Each training run consisted of 700,000 steps, which corresponds to 1,000 training iterations. To ensure a fair evaluation, we report the mean metrics based on the data from the latest 100 episodes, each comprising 500 steps. The experiments were conducted in a 10 m × 10 m area, where the cameras and humans interacted with each other.
Active vs. Passive To show the necessity of proactive camera control, we compare the active camera control methods with three passive methods, i.e., Fixed Camera, Fixed Camera (RANSAC), and Fixed Camera (PlaneSweepPose). “Fixed Cameras” denotes that the poses of the cameras are fixed, hanging 3 m above the ground with a −35° camera pitch angle. The placements of these fixed cameras are carefully determined with strong priors, e.g., right-angle, triangle, square, and pentagon formations for 2, 3, 4, and 5 cameras, respectively. “RANSAC” denotes the method that uses RANSAC (Fischler & Bolles, 1981) for enhanced triangulation. “PlaneSweepPose” represents the off-the-shelf learning-based method (Lin & Lee, 2021b) for multi-view 3D HPE. Please refer to Appendix E.2 for more implementation details. We show the MPJPE and Success Rate versus a
different number of cameras in Fig. 5. We observe that all passive baselines are outperformed by the active approaches due to their inability to adjust camera views against dynamic occlusions. The improvement of the active approaches is especially significant when the number of cameras is small, i.e., when the camera system has little or no redundancy against occlusions. Notably, the MPJPE attained by our 3-camera policy is even lower than the MPJPE from 5 Fixed cameras. This suggests that proactive strategies can help reduce the number of cameras necessary for deployment.
The Effectiveness of CTCR and WDL We also perform ablation studies on the two proposed modules (CTCR and WDL) to analyze their benefits to performance. We take “MAPPO” as the active-camera baseline for comparison, which is our method but trained instead by a shared global reconstruction reward and without world dynamics modelling. Fig. 5 shows a consistent performance gap between the “MAPPO” baseline and our methods (MAPPO + CTCR + WDL). The proposed CTCR mitigates the credit assignment issue by computing the weighted marginal contribution of each camera. Also, CTCR promotes faster convergence. Training curves are shown in Appendix Fig. 13. Training with WDL objectives further improves the MPJPE metric for our 2-Camera model. However, its supporting effect gradually weakens with the increasing number of cameras. We argue that this is caused by the more complex dynamics involved with more cameras simultaneously interacting in the same environment. Notably, we observe that the agents trained with WDL generalize better to unseen scenes, as shown in Fig. 6.
Versus Other Active Methods To show the effectiveness of the learned policies, we further compare our method with other active multi-camera formation control methods in the 3-camera BlankEnv. “MAPPO” (Yu et al., 2021) and AirCapRL (Tallamraju et al., 2020) are two learning-based methods based on PPO (Schulman et al., 2017). The main difference between these two methods is the reward shaping technique, i.e., AirCapRL additionally employs multiple carefully-designed rewards (Tallamraju et al., 2020) for learning. We also programmed a rule-based fixed formation control method (keeping an equilateral triangle, spread by 120°) to track the target person. Results are shown in Table 1. Interestingly, these three baselines achieve comparable performance. Our method outperforms them, indicating a more effective multi-camera collaboration strategy for 3D HPE. For example, our method learns a spatially spread-out formation while automatically adjusting to avoid impending occlusion.
Generalize to Various Scenarios We train the control policies in BlankEnv while testing them in three other realistic environments (SchoolGym, UrbanStreet, and Wilderness) to evaluate their generalizability to unseen scenarios. Fig. 6 shows that our method consistently outperforms baseline
methods with lower variance in MPJPE during the evaluations in three test environments. We report the results in the BlankEnv as a reference.
Qualitative Analysis In Figure 7, we show six examples of the emergent formations of cameras under the trained policies using the proposed methods (CTCR + WDL). The camera agents learn to spread out and ascend above humans to avoid occlusions and collisions. Their placements in an emergent formation are not assigned by other entities but rather determined by the decentralized control policies themselves based on local observations and agent-to-agent communication. For more vivid examples of emergent formations, please refer to the project website for the demo videos.1 For more analysis on the behaviour modes of our 3-, 4- and 5-camera models, please refer to Appendix Section H.
5 CONCLUSION AND DISCUSSION
To our knowledge, this paper presents the first proactive multi-camera system targeting 3D reconstruction in a dynamic crowd. It is also the first study of proactive 3D HPE that experiments with multi-camera collaboration at different scales and in different scenarios. We propose CTCR to alleviate the multi-agent credit assignment issue when the camera team scales up. We identified multiple auxiliary tasks that improve the representation learning of complex dynamics. As a final note, we release our virtual environments and the visualization tool to facilitate future research.
Limitations and Future Directions Admittedly, a couple of aspects of this work have room for improvement. Firstly, the camera agents receive the positions of their teammates via non-disruptive broadcasts of information, which may be prone to packet loss during deployment. One idea is to incorporate a specialized protocol for multi-agent communication into our pipeline, such as ToM2C (Wang et al., 2022). Secondly, intended initially to reduce the communication bandwidth (for example, to eliminate the need for image transmission between cameras), the current pipeline comprises a Human Re-Identification module requiring a pre-scanned appearance memory of all human subjects who may appear. The current ReID module may fail to recognize some out-of-distribution human appearances, though ACTIVE3DPOSE can accommodate a more sophisticated ReID module (Deng et al., 2018) to resolve this shortcoming. Thirdly, the camera control policy requires accurate camera poses, which may require developing a robust SLAM system (Schmuck & Chli, 2017; Zhong et al., 2018b) to work in dynamic environments with multiple cameras. Fourthly, the motion patterns of targets in the virtual environments are based on manually designed animations, which will lead to poor generalization of the agents to unseen motion patterns. In the future, we can enrich the diversity by incorporating a cooperative-competitive multi-agent game (Zhong et al., 2021) in training. Lastly, we assume near-perfect calibration for a group of mobile cameras, which might be complicated to sustain in practice. Fortunately, we are seeing rising interest in parameter-free pose estimation (Gordon et al., 2022; Ci et al., 2022), which does not require online camera calibration and may help to resolve this limitation.
1Project Website for demo videos: https://sites.google.com/view/active3dpose
6 ETHICS STATEMENT
Our research into active 3D HPE technologies has the potential to bring many benefits, such as biomechanical analysis in sports and automated video-assisted coaching (AVAC). However, we recognize that these technologies can also be misused for repressive surveillance, leading to privacy infringements and human rights violations. We firmly condemn such malicious acts and advocate for the fair and responsible use of our virtual environment, UNREALPOSE, and all other 3D HPE technologies.
ACKNOWLEDGEMENT
The authors would like to thank Yuanfei Wang for discussions on world models; Tingyun Yan for his technical support on the first prototype of UNREALPOSE. This research was supported by MOST-2022ZD0114900, NSFC-62061136001, China National Post-doctoral Program for Innovative Talents (Grant No. BX2021008) and Qualcomm University Research Grant.
A TRAINING ALGORITHM PSEUDOCODE
Algorithm 1 Learning Multi-Camera Collaboration (CTCR + WDL)
1: Initialize: $n$ agents with a tied-weights MAPPO policy $\pi$ and mixture density models (MDN) for the WDL prediction models $\{(P_{\mathrm{self}}, P_{\mathrm{reward}}, P_{\mathrm{peer}}, P_{\mathrm{tgt}}, P_{\mathrm{pd}})\}_\pi$, $E$ parallel environment rollouts
2: for Iteration = 1, 2, . . . , M do
3:   In each of the $E$ environment rollouts, each agent $i \in [\![n]\!]$ collects a trajectory of length $T$: $\tau = \left[ (\tilde{o}_i^t, \tilde{o}_i^{t+1}, a_i^t, r_i^t, h_i^{t-1}, s^t, \mathbf{a}^{t-1}, r^t) \right]_{t=1}^{T}$
4:   Substitute the individual reward of each agent $r^t$ with CTCR: $r_i^t \leftarrow \mathrm{CTCR}(i)$ (Equation 6)
5:   For each step in $\tau$, compute advantage estimates $\hat{A}_i^1, \ldots, \hat{A}_i^T$ with GAE (Schulman et al., 2015) for each agent $i \in [\![n]\!]$
6:   Yield a training batch $D$ of size $E \times T \times n$
7:   for Mini-batch SGD Epoch = 1, 2, . . . , K do
8:     Sample a stochastic mini-batch of size $B$ from $D$, where $B = |D| / K$
9:     Compute $z_i^t$ and $h_i^t$ using the encoder model $E_\pi$
10:    Compute the PPO-CLIP objective loss $L_{\mathrm{PPO}}$, the global critic value loss $L_{\mathrm{Value}}$, and the adaptive KL loss $L_{\mathrm{KL}}$
11:    Compute the objectives that constitute $L_{\mathrm{WDL}}$:
       • Self-State Prediction Loss $L_{\mathrm{self}} = -\mathbb{E}_\tau[\log P(\xi_i^{t+1} \mid z_i^t, h_i^t, a_i^t)]$
       • Reward Prediction Loss $L_{\mathrm{reward}} = -\mathbb{E}_\tau[\log P(r^t \mid z_i^t, h_i^t, a_i^t)]$
       • Peer-State Prediction Loss $L_{\mathrm{peer}} = -\mathbb{E}_\tau[\log P(\xi_{-i}^{t+1} \mid z_i^t, h_i^t, a_i^t)]$
       • Target Prediction Loss $L_{\mathrm{tgt}} = -\mathbb{E}_\tau[\log P(p_{\mathrm{tgt}}^{t+1} \mid z_i^t, h_i^t, p_{\mathrm{tgt}}^t)]$
       • Pedestrians Prediction Loss $L_{\mathrm{pd}} = -\mathbb{E}_\tau[\log P(p_{\mathrm{pd}}^{t+1} \mid z_i^t, h_i^t, p_{\mathrm{pd}}^t)]$
12:    $L_{\mathrm{Train}} = \lambda_{\mathrm{PPO}} L_{\mathrm{PPO}} + \lambda_{\mathrm{Value}} L_{\mathrm{Value}} + \beta_{\mathrm{KL}} L_{\mathrm{KL}} + \lambda_{\mathrm{WDL}} L_{\mathrm{WDL}}$
13:    Optimize $L_{\mathrm{Train}}$ w.r.t. the current policy parameters $\theta_\pi$
B UNREALPOSE: ACCESSORIES AND MISCELLANEOUS ITEMS
Our UnrealPose virtual environment supports different active vision tasks, such as active human pose estimation and active tracking. This environment also supports various settings ranging from single-target single-camera settings to multi-target multi-camera settings. Here we provide a more detailed description of the three key characteristics of UnrealPose:
Realistic The built-in navigation system governs the collision avoidance movements of virtual humans against dynamic obstacles. In the meantime, it also ensures diverse generation of walking trajectories. These features enable users to simulate a realistic-looking crowd exhibiting socially acceptable behaviors. We have also provided several pre-set scenes, e.g., school gym, wilderness, urban crossing, etc. These scenes have notable differences in illumination, terrain, and crowd appearance to reflect dramatically different-looking scenarios in real life.
Extensive Configuration The environment can be configured with different numbers of humans and cameras and swapped across other scenarios with ease, which we demonstrated in Fig 4(a-d). Besides simulating walking human crowds, the environment incorporates over 100 Mocap action sequences with smooth animation interpolations to enrich the data variety for other MoCap tasks.
RL-Ready We use the UnrealCV (Qiu et al., 2017) plugin as the medium to acquire images and annotations from the environment. The original UnrealCV plugin suffers from unstable data transfer and unexpected disconnections under high CPU workloads. To ensure fast and reliable data acquisition for large-scale MARL experiments, we overhauled the communication module in the UnrealCV plugin with inter-process communication (IPC) mechanism, which eliminates the aforementioned instabilities.
B.1 VISUALIZATION TOOL
We provide a visualization tool to facilitate per-frame analysis of the learned policy and reconstruction results (shown in Fig. 8). The main interface consists of four parts: (1) Live 2D views from all cameras. (2) 3D spatial view of camera positions and reconstructions. (3) Plot of statistics. (4) Frame control bar. This visualization tool supports different numbers of humans and cameras. Meanwhile, it is written in Python to support easy customization.
B.2 LICENSE
All assets used in the environment are commercially-available and obtained from the UE4 Marketplace. The environment and tools developed in this work are licensed under Apache License 2.0.
C OBSERVATION PROCESSING
Fig.9 shows the pipeline of the observation processing. Each camera observes an RGB image and detects the 2D human poses and IDs via the Perception Module described in the main paper. The camera pose, 2D human poses and IDs are then broadcast to other cameras for multi-view 3D triangulation. The human position is calculated as the median of the reconstructed joints. Human orientation is calculated from the cross product of the reconstructed shoulder and spine.
D IMPLEMENTATION DETAILS
D.1 TRAINING DETAILS
All control policies are trained in the BlankEnv scene. At the testing stage, we apply zero-shot transfer of the learned policies to three realistic scenes: SchoolGym, UrbanStreet, and Wilderness.
To simulate a dynamic human crowd that exhibits highly random behaviors, we sample arbitrary goals for each human and employ the built-in navigation system to generate collision-free trajectories. Each human walks at a random speed. To ensure generalization across different numbers of humans, we train our RL policy with a mixture of environments containing 1 to 6 humans. The learning rate is set to $5 \times 10^{-4}$ with scheduled decay during the training phase. The annealing schedule for the learning rate is detailed in Table 2. The maximum episode length is 500 steps, the discount factor γ is 0.99, and the GAE horizon is 25 steps. Each sampling iteration produces a training batch of 700 steps, then we perform 16 iterations on every training batch with 2 SGD mini-batch updates for each iteration (i.e., SGD batch size = 350).
Table 2 shows the common training hyper-parameters shared between the baseline models (MAPPO) and all of our methods. Table 4 shows the hyperparameters for the WDL module.
D.2 DIMENSIONS OF FEATURE TENSORS IN THE CONTROLLER MODULE
Table 3 serves as a complementary description for Eqns. 1-4. The table shows the dimensions of the feature tensors used in the controller module. “B” denotes the batch size. In the current model design, the dimension of the local observation is adjusted based on the maximum number of camera agents (N_cam_max) and the maximum number of observable humans (N_human_max) in an environment. In our experiments, N_human_max has been set to 7. The observation pre-processor will zero-pad each observation to a length equal to N_human_max × 18 if the current environment instance has fewer than N_human_max humans. “9” and “18” correspond to the feature lengths for a camera and a human, respectively. The MDN of the Target Prediction module has 16 Gaussian components, in which each component outputs $(\phi, \mu_x, \sigma_x, \mu_y, \sigma_y)$ and $\phi$ is the weight parameter of a component. The current implementation of the MDN only predicts the x-y location of a human, which is a simplification since the z coordinate of a simulated human barely changes across an episode compared to the x and y coordinates. The MDN output has a length of 80 and the exact prediction is produced by $\phi$-weighted averaging. In Eqn. 4, $\{(\phi, \mu, \sigma)\}_{\mathrm{tgt}}^{\mathrm{MDN}}$ is an encoded feature produced by passing the MDN output to a 2-layer MLP that has an output dimension of 128.
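A small sketch of the $\phi$-weighted averaging step is given below. The per-component parameter ordering and whether the weights arrive as logits or normalized probabilities are assumptions for illustration.

```python
import numpy as np

def mdn_point_estimate(mdn_out, K=16):
    """Collapse the 80-dim MDN output (16 components x (phi, mu_x, sigma_x,
    mu_y, sigma_y)) into a single (x, y) prediction by phi-weighted averaging."""
    params = np.asarray(mdn_out, dtype=float).reshape(K, 5)
    phi = np.exp(params[:, 0] - params[:, 0].max())   # softmax over weights
    phi = phi / phi.sum()
    mu_xy = params[:, [1, 3]]                         # (mu_x, mu_y) per component
    return (phi[:, None] * mu_xy).sum(axis=0)
```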
D.3 COMPUTATIONAL RESOURCES
We used 15 Ray workers for each experiment to ensure consistency in the training procedure. Each worker carries a Gym vectorized environment consisting of 4 actual environment instances. Each worker demands approximately 3.7GB of VRAM. We run each experiment with 8 NVIDIA RTX 2080 Ti GPUs. Depending on the number of camera agents, the total training hours required for an experiment to run 500k steps will vary between 4 to 12 hours.
D.4 TRAINING CURVE
Fig. 13 shows the training curves of our method and the baseline method. We can see that our methods converge faster and achieve better reconstruction accuracy than the baseline method.
D.5 TOTAL INFERENCE TIME
Our solution can run in real-time. Table 5 reports the inference time for each module of the proposed Active3DPose pipeline.
E ADDITIONAL EXPERIMENT RESULTS
E.1 ABLATION STUDY ON WDL OBJECTIVES
We perform a detailed ablation study regarding the effect of each WDL sub-task on the model performance. As shown in Fig. 11, we can observe that the MPJPE metric gradually decreases as we incorporate more WDL losses. This aligns with our assumptions that training the model with world dynamics learning objectives will promote the model’s ability to capture a better representation of future states, which in turn increases performance. Our method additionally demonstrates the importance of incorporating information regarding the target’s future state into the encoder’s output features. Predicting the target’s future states should not only be used as an auxiliary task but should also directly influence the inference process of the actor model.
E.2 BASELINE — FIXED-CAMERAS
In addition to the triangulation and RANSAC baselines introduced in the main text, we compare and elaborate on two more baselines that use fixed cameras: (1) fixed cameras with RANSAC-based triangulation and temporal smoothing (TS), and (2) an off-the-shelf 3D pose estimator, PlaneSweepPose (Lin & Lee, 2021a).
In the temporal smoothing baseline, we applied a low-pass filter (Casiez et al., 2012) and temporal fusion where the algorithm will fill in any missing key points in the current frame with the detected key points from the last frame.
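A minimal sketch of the temporal-fusion step is shown below; it omits the low-pass filter of Casiez et al. (2012), and the convention that missing keypoints are represented as None is an assumption.

```python
def temporal_fusion(curr_kpts, prev_kpts):
    """Fill missing 2D keypoints in the current frame with last frame's
    detections, as in the temporal-smoothing baseline."""
    return [curr if curr is not None else prev
            for curr, prev in zip(curr_kpts, prev_kpts)]
```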
In the PlaneSweep baseline, as per the official instructions, we train three separate models (3 cams to 5 cams) with the same camera setup as in our testing scenarios. We have tested the trained models in different scenarios and reported the MPJPE results in Tables 6, 7, 8 and 9. Note that this off-the-shelf pose estimator performs better than the Fixed-Camera Baseline (Triangulation) but still underperforms compared to our active method. Fig. 12 illustrates the formations of the fixed camera baselines.
Camera placements for all fixed-camera baselines are shown in Fig. 12. These formations are carefully designed not to disadvantage the fixed-camera baselines on purpose. Especially for the 5-cameras pentagon formation, which helps the fixed-camera baseline to obtain satisfactory performance on the 5-camera setting as shown in Tables 6, 7, 8 and 9.
E.3 OUR METHOD ENHANCED WITH RANSAC-BASED TRIANGULATION
RANSAC is a generic technique that can be used to improve triangulation performance. In this experiment, we also train and test our model with RANSAC. The final result (Table 11) shows a further improvement on our original triangulation version.
F GENERATING SAFE AND SMOOTH TRAJECTORIES
F.1 COLLISION AVOIDANCE
In order to generate safe trajectories, in this section, we introduce and evaluate two different ways to enforce collision avoidance between cameras and humans.
Obstacle Collision Avoidance (OCA) OCA resembles a feed-forward PID controller on the final control outputs before execution. Concretely, OCA adds a constant reverse quantity to the control output if it detects any surrounding objects within its safety range. This “detouring” mechanism safeguards the cameras from possible collisions and prevents them from making dangerous maneuvers.
Action-Masking (AM) AM also resembles a feed-forward controller but is instead embedded into the forward step of the deep-learning model. At each step, AM module first identifies the dangerous actions among all possible actions, then modifies the probabilities (output by the policy model) of choosing the hazardous actions to be zero so that the learning-based control policy will never pick them. Note that AM must be trained with the MARL policy model.
We propose the minimum camera-human distance (covering both the target and the pedestrians) as the safety metric. It measures the distance between the camera and the closest human at each timestep. Fig. 14 shows the histograms of the minimum camera-human distance sampled over five episodes of 500 steps.
F.2 TRAJECTORY SMOOTHING
Exponential Moving Average (EMA) To generate smooth trajectories, we introduce EMA to smooth the outputs of our learned policy model. EMA is a common technique used in smoothing time-series data. In our case, the smoothing operator is defined as :
$$\hat{a}_t = \hat{a}_{t-1} + \eta \cdot (a_t - \hat{a}_{t-1}),$$

where $a_t$ is the action (or control signal) output by the model at the current step, $\hat{a}_{t-1}$ is the smoothed action from the last step, and $\eta$ is the smoothing factor. A smaller $\eta$ results in greater smoothness. $\hat{a}_t$ is the smoothed action that the camera will execute.
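A minimal stateful implementation of this smoother might look as follows; the default smoothing factor is illustrative, and actions are assumed to be NumPy arrays or scalars.

```python
class EmaSmoother:
    """Exponential moving average over the policy's control outputs:
    a_hat_t = a_hat_{t-1} + eta * (a_t - a_hat_{t-1})."""
    def __init__(self, eta=0.5):
        self.eta = eta
        self.prev = None

    def __call__(self, action):
        if self.prev is None:
            self.prev = action                 # first step: no smoothing yet
        else:
            self.prev = self.prev + self.eta * (action - self.prev)
        return self.prev
```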
F.3 ROBUSTNESS OF THE LEARNED POLICY
In this section, we evaluate the robustness of our model on different external perturbations to the control signal. In conclusion, our model shows resilience against the delay effect and random noise.
Delay EMA also brings a delay effect while smoothing the generated trajectory. The level of smoothing positively correlates with a larger delay factor. Here, we evaluate our model’s robustness to the EMA simulated delay.
Random Action Noise Control devices in real-life are inevitably affected by errors. For example, the control signal of a controller may be over-damped or may overshoot. We simulated this type of random error by multiplying the output action by a uniformly-sampled noise.
In Table 12, we intend to observe the effects of EMA delay and random action noise on the reconstruction accuracy of our model, marked as “Vanilla”.
G MORE EXPLANATIONS ON CTCR
Figure 3 is an example of using Eq. 6 to compute CTCR for each of the three cameras. The CTCR is inspired by the Shapley Value. The main idea is that the overall optimality needs to also account for the optimality of every possible sub-formation. For a camera agent to receive the highest CTCR possible, its current position and view must be optimal both in terms of its current formation and any possible sub-formation.

Note: a group of collaborating players is often referred to as a “coalition” in the literature. Here we apply the same concept to a group of cameras, so we use the more intuitive term “formation” instead.
Eq. 6 can be further broken down as follows:

$$\varphi_r(i) = \sum_{S \subseteq [\![n]\!] \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \left[ r(S \cup \{i\}) - r(S) \right] = \frac{1}{n} \sum_{S \subseteq [\![n]\!] \setminus \{i\}} \frac{|S|!\,(n - 1 - |S|)!}{(n-1)!} \left[ r(S \cup \{i\}) - r(S) \right] = \frac{1}{n} \sum_{S \subseteq [\![n]\!] \setminus \{i\}} \binom{n-1}{|S|}^{-1} \left[ r(S \cup \{i\}) - r(S) \right],$$

where $[\![n]\!] = \{1, 2, \ldots, n\}$ denotes the set of all cameras and the binomial coefficient is $\binom{n}{k} = \frac{n!}{k!(n-k)!}$, $0 \leq k \leq n$. $S$ denotes a formation (a subset) without camera $i$. $\binom{n-1}{|S|}$ is the number of combinations of subsets of size $|S|$ (i.e., the binomial coefficient), which serves as a normalization term. $r(S)$ computes the reconstruction accuracy of formation $S$, and $[r(S \cup \{i\}) - r(S)]$ computes the marginal improvement after adding camera $i$ to sub-formation $S$. So this equation means we iterate over all possible $S$, compute the marginal contribution of camera $i$, and average over all possible combinations of $(S, i)$.
Suppose we have a 3-camera formation, as shown in Figure 3, so $n = 3$ is the number of cameras. Let us name these cameras (1, 2, 3), and say we only care about Camera 1 for now. Since we are computing the average marginal contribution for Camera 1, we are looking for those formations that do not contain Camera 1, because we want to see how much of an increase in performance results from adding Camera 1 to those formations. Among all possible formations drawn from the set $[\![n]\!]$, four formations satisfy this condition: $S \subseteq [\![n]\!] \setminus \{i\} \longrightarrow S \in \{\emptyset, \{2\}, \{3\}, \{2, 3\}\}$. The binomial coefficient $\binom{n-1}{|S|}$ for a 2-camera sub-formation in the 3-camera case is $\binom{2}{2} = 1$, which makes sense because there exists only one unique combination that does not contain Camera 1, namely sub-formation $\{2, 3\}$. $r(\{2, 3\})$ computes the reconstruction accuracy of the formation $\{2, 3\}$ and $r(\{2, 3\} \cup \{1\})$ computes the reconstruction accuracy after adding Camera 1 to sub-formation $\{2, 3\}$. Their difference gives us the marginal contribution of Camera 1. As we sum over all subsets $S$ of $[\![n]\!]$ not containing Camera 1, and then divide by $\binom{n-1}{|S|}$ and the number of cameras $n$, we obtain the average marginal contribution of Camera 1, $\varphi_r(1)$, to the collaborative triangulated reconstruction. Further multiplying this term by $n$ yields $\mathrm{CTCR}(1)$ as shown in Eq. 6.
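A short numeric sketch of this 3-camera walk-through is given below, reusing the ctcr() helper sketched after Eq. (6) in Sec. 3.3; the per-formation reward values are made up purely for illustration.

```python
# Hypothetical per-formation rewards for a 3-camera team.
r_table = {
    frozenset(): 0.0,
    frozenset({0}): 0.0, frozenset({1}): 0.0, frozenset({2}): 0.0,
    frozenset({0, 1}): 0.6, frozenset({0, 2}): 0.5, frozenset({1, 2}): 0.4,
    frozenset({0, 1, 2}): 0.9,
}
rewards = ctcr(lambda s: r_table[frozenset(s)], n=3)
# sum(rewards.values()) equals 3 * r_table[frozenset({0, 1, 2})] up to float error.
```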
H ANALYSIS ON MODES OF BEHAVIORS OF THE TRAINED AGENTS
Here in Figure 15 we provide statistics and analysis on the behaviour modes of the agents controlled by our 3-, 4- and 5-camera policies, respectively. We are interested in understanding the characteristics of the emergent formations learned by our model. We therefore propose three quantitative measures to understand the topology of the emergent formations: (1) the min-angle between the cameras' orientations and the per-frame mean of the min-camera angle, (2) the camera's pitch angle, and (3) the camera-human distance. Rigorous definitions are given as follows:
$$\text{min-camera angle}(i) = \min_{j \neq i} \langle \text{axis of camera } i, \text{axis of camera } j \rangle,$$

$$\text{per-frame-mean of min-camera angle} = \frac{1}{n} \sum_{i \in [n]} \text{min-camera angle}(i).$$
In simpler terms, min-camera angle(i) finds the minimum angle between camera i and any other camera j. “per-frame-mean of min-camera angle” is the mean of min-camera angle(i) for all camera i in one frame. A positive camera’s pitch angle means looking upward, and a negative camera’s pitch angle means looking downward. The camera-human distance measures the distance between the target human and the given camera.
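A minimal sketch of these statistics is shown below, under the assumption that the camera orientations are available as unit forward (optical-axis) vectors.

```python
import numpy as np

def min_camera_angles(axes):
    """Per-camera minimum pairwise angle (degrees) between optical axes,
    plus its per-frame mean.  `axes` is an (n, 3) array of unit vectors."""
    axes = np.asarray(axes, dtype=float)
    n = len(axes)
    mins = []
    for i in range(n):
        cos = axes[np.arange(n) != i] @ axes[i]          # dot with every other axis
        angles = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        mins.append(angles.min())
    return np.array(mins), float(np.mean(mins))
```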
Regarding the distance between cameras and humans, the camera agents actively adjust their position relative to the target human. The cameras learn to keep a camera-human distance between 2 m and 3 m. This is neither too distant from the target human nor so close that it violates the safety constraint. The camera agents surround the target at an appropriate distance to obtain better resolution in the 2D views. In the meantime, as the safe-distance constraint is enforced during training, the camera agents are prohibited from aggressively minimizing the camera-human distance.
The histograms for the camera's pitch angle suggest that the cameras mostly maintain negative pitch angles. Their preferred strategy is to hover over the humans and capture images at a higher altitude. This is likely because of emergent occlusion avoidance. The cameras also learn to fly level with the humans (where the pitch angle approximately equals 0) to capture more accurate 2D poses of the target human. This propensity is apparent from the peaks at x = 0 in the histograms. The relatively wide distribution of the pitch angle histogram suggests that the camera formation is spread out in 3D space and that the camera agents dynamically adjust their flying heights and pitch angles.
For the average angle between the cameras’ orientations, this statistic shows that the cameras in various formations will maintain reasonable non-zero spatial angles between each other. Therefore, their camera views are less likely to coincide and provide more diverse perspectives to generate a more reliable 3D triangulation. | 1. What is the focus and contribution of the paper on 3D human pose estimation?
2. What are the strengths of the proposed approach, particularly in terms of multi-agent reinforcement learning and collaboration?
3. What are the weaknesses of the paper, especially regarding the reliance on human recognition modules and practicality?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper introduces a multi-agent reinforcement learning approach with a Collaborative Triangulation Contribution Reward (CTCR), which addresses credit assignment by rewarding each camera with its weighted average marginal contribution, for 3D human pose estimation in dynamic human crowds. The model is trained with multiple world dynamics learning tasks. The method is evaluated in four photo-realistic UE4 environments. The results show the proposed method outperforms the fixed and active baselines in different scenarios with various numbers of cameras and humans.
Strengths And Weaknesses
Strength:
The proposed method is novel in using multiple (>3) active cameras to perform 3D HPE in a human crowd.
The proposed method introduces a decentralized framework via multi-agent reinforcement learning for multi-camera collaboration at different scales and scenarios (in a simulated environment).
Weakness:
The proposed approach relies on a human re-identification (ReID) module to distinguish people in a scene. For some out-of-distribution human appearances, the current ReID module may not work, which will then hinder the HPE task.
The practicality of the algorithm seems limited. To perform 3D human pose estimation, dynamic cameras seem quite difficult to deploy in real time.
Clarity, Quality, Novelty And Reproducibility
Clarity:
The paper is well written.
Quality:
The paper is technically sound, and the effectiveness of the method is empirically demonstrated on various scenarios.
Novelty:
The proposed method is the first one to use multiple dynamic cameras for HPE via a multi-agent reinforcement learning framework.
Reproducibility:
The source code of the environments used in the experiments is provided.
ICLR | Title
Proactive Multi-Camera Collaboration for 3D Human Pose Estimation
Abstract
This paper presents a multi-agent reinforcement learning (MARL) scheme for proactive Multi-Camera Collaboration in 3D Human Pose Estimation in dynamic human crowds. Traditional fixed-viewpoint multi-camera solutions for human motion capture (MoCap) are limited in capture space and susceptible to dynamic occlusions. Active camera approaches proactively control camera poses to find optimal viewpoints for 3D reconstruction. However, current methods still face challenges with credit assignment and environment dynamics. To address these issues, our proposed method introduces a novel Collaborative Triangulation Contribution Reward (CTCR) that improves convergence and alleviates multi-agent credit assignment issues resulting from using 3D reconstruction accuracy as the shared reward. Additionally, we jointly train our model with multiple world dynamics learning tasks to better capture environment dynamics and encourage anticipatory behaviors for occlusion avoidance. We evaluate our proposed method in four photo-realistic UE4 environments to ensure validity and generalizability. Empirical results show that our method outperforms fixed and active baselines in various scenarios with different numbers of cameras and humans.
Figure 1: Left: Two critical challenges in fixed camera approaches: (a) dynamic occlusions lead to failed reconstruction; (b) constrained MoCap area. Right: Active MoCap in the wild; three active cameras collaborate to best reconstruct the 3D pose of the target (marked in ).
1 INTRODUCTION
Marker-less motion capture (MoCap) has broad applications in many areas such as cinematography, medical research, virtual reality (VR), and sports. These successes can be partly attributed to recent developments in 3D human pose estimation (HPE) techniques (Tu et al., 2020; Iskakov et al., 2019; Jafarian et al., 2019; Pavlakos et al., 2017b; Lin & Lee, 2021b). A straightforward way to solve multi-view 3D HPE is to use fixed cameras. Although convenient, this solution is less effective against dynamic occlusions. Moreover, fixed camera solutions confine tracking targets within a constrained space and are therefore less applicable to outdoor MoCap. On the contrary, active cameras (Luo et al., 2018; 2019; Zhong et al., 2018a; 2019), such as ones mounted on drones, can maneuver proactively against incoming occlusions. Owing to their remarkable flexibility, active approaches have attracted overwhelming interest (Tallamraju et al., 2020; Ho et al., 2021; Xu et al., 2017; Kiciroglu et al., 2019; Saini et al., 2022; Cheng et al., 2018; Zhang et al., 2021).
∗Equal Contribution. BCorresponding author. Project Website: https://sites.google.com/view/active3dpose
Previous works have demonstrated the effectiveness of using active cameras for 3D HPE on a single target in indoor settings (Kiciroglu et al., 2019; Cheng et al., 2018), clean landscapes (Tallamraju et al., 2020; Nägeli et al., 2018; Zhou et al., 2018; Saini et al., 2022), or landscapes with scattered static obstacles (Ho et al., 2021). However, to the best of our knowledge, we have not seen any existing work that experimented with multiple (n > 3) active cameras to conduct 3D HPE in a human crowd. There are two key challenges: First, frequent human-to-human interactions lead to random dynamic occlusions. Unlike previous works that only consider clean landscapes or static obstacles, dynamic scenes require frequent adjustments of cameras’ viewpoints for occlusion avoidance while keeping a good overall team formation to ensure accurate multi-view reconstruction. Therefore, achieving optimality in dynamic scenes by implementing a fixed camera formation or a hand-crafted control policy is challenging. In addition, the complex behavioural pattern of a human crowd makes the occlusion patterns less comprehensible and predictable, further increasing the difficulty in control. Second, as the team size grows larger, the multi-agent credit assignment issue becomes prominent, which hinders policy learning of the camera agents. Concretely, multi-view 3D HPE as a team effort requires inputs from multiple cameras to generate an accurate reconstruction. Having more camera agents participate in a reconstruction certainly introduces more redundancy, which reduces the susceptibility to reconstruction failure caused by dynamic occlusions. However, it consequently weakens the association between individual performance and the reconstruction accuracy of the team, which leads to the “lazy agent” problem (Sunehag et al., 2017). In this work, we introduce a proactive multi-camera collaboration framework based on multi-agent reinforcement learning (MARL) for real-time distributive adjustments of multi-camera formation for 3D HPE in a human crowd. In our approach, multiple camera agents perform seamless collaboration for successful reconstructions of 3D human poses. Additionally, it is a decentralized framework that offers flexibility over the formation size and eliminates dependency on a control hierarchy or a centralized entity. Regarding the first challenge, we argue that the model’s ability to predict human movements and environmental changes is crucial. Thus, we incorporate World Dynamics Learning (WDL) to train a state representation with these properties, i.e., learning with five auxiliary tasks to predict the target’s position, pedestrians’ positions, self state, teammates’ states, and team reward. To tackle the second challenge, we further introduce the Collaborative Triangulation Contribution Reward (CTCR), which incentivizes each agent according to its characteristic contribution to a 3D reconstruction. Inspired by the Shapley Value (Rapoport, 1970), CTCR computes the average weighted marginal contribution to the 3D reconstruction for any given agent over all possible coalitions that contain it. This reward aims to directly associate agents’ levels of participation with their adjusted return, guiding their policy learning when the team reward alone is insufficient to produce such a direct association. Moreover, CTCR penalizes occluded camera agents more efficiently than the shared reward, encouraging emergent occlusion avoidance behaviors.
Empirical results show that CTCR can accelerate convergence and increase reconstruction accuracy. Furthermore, CTCR is a general approach that can benefit policy learning in active 3D HPE and serve as a new assessment metric for view selection in other multi-view reconstruction tasks. For the evaluations of the learned policies, we build photo-realistic environments (UnrealPose) using Unreal Engine 4 (UE4) and UnrealCV (Qiu et al., 2017). These environments can simulate realistic-behaving crowds with assurances of high fidelity and customizability. We train the agents on a Blank environment and validate their policies on three unseen scenarios with different landscapes, levels of illumination, human appearances, and various quantities of cameras and humans. The empirical results show that our method can achieve more accurate and stable 3D pose estimates than off-the-shelf passive- and active-camera baselines. To help facilitate more fruitful research on this topic, we release our environments with OpenAI Gym-API (Brockman et al., 2016) integration and together with a dedicated visualization tool. Here we summarize the key contributions of our work: • Formulating the active multi-camera 3D human pose estimation problem as a Dec-POMDP and
proposing a novel multi-camera collaboration framework based on MARL (with n ≥ 3). • Introducing five auxiliary tasks to enhance the model’s ability to learn the dynamics of highly
dynamic scenes. • Proposing CTCR to address the credit assignment problem in MARL and demonstrating notable
improvements in reconstruction accuracy compared to both passive and active baselines. • Contributing high-fidelity environments for simulating realistic-looking human crowds with au-
thentic behaviors, along with visualization software for frame-by-frame video analysis.
2 RELATED WORK
3D Human Pose Estimation (HPE) Recent research on 3D human pose estimation has shown significant progress in recovering poses from single monocular images (Ma et al., 2021; Pavlakos et al., 2017a; Martinez et al., 2017; Kanazawa et al., 2018; Pavlakos et al., 2018; Sun et al., 2018; Ci et al., 2019; Zeng et al., 2020; Ci et al., 2020) or monocular video (Mehta et al., 2017; Hossain & Little, 2018; Pavllo et al., 2019; Kocabas et al., 2020). Other approaches utilize multi-camera systems for triangulation to improve visibility and eliminate ambiguity (Qiu et al., 2019; Jafarian et al., 2019; Pavlakos et al., 2017b; Dong et al., 2019; Lin & Lee, 2021b; Tu et al., 2020; Iskakov et al., 2019). However, these methods are often limited to indoor laboratory environments with fixed cameras. In contrast, our work proposes an active camera system with multiple mobile cameras for outdoor scenes, providing greater flexibility and adaptability.
Proactive Motion Capture Few previous works have studied proactive motion capture with a single mobile camera (Zhou et al., 2018; Cheng et al., 2018; Kiciroglu et al., 2019). In comparison, more works have studied the control of a multi-camera team. Among them, many are based on optimization with various system designs, including marker-based (Nägeli et al., 2018), RGBD-based (Xu et al., 2017), two-stage system (Saini et al., 2019; Tallamraju et al., 2019), hierarchical system (Ho et al., 2021), etc. It is important to note that all the above methods deal with static occlusion sources or clean landscapes. Additionally, the majority of these works adopt hand-crafted optimization objectives and some forms of fixed camera formations. These factors result in poor adaptability to dynamic scenes that are saturated with uncertainties. Recently, RL-based methods have received more attention due to their potential for dynamic formation adjustments. These works have studied active 3D HPE in the Gazebo simulation (Tallamraju et al., 2020) or Panoptic dome (Joo et al., 2015; Pirinen et al., 2019; Gärtner et al., 2020) for active view selection. Among them, AirCapRL (Tallamraju et al., 2020) shares similarities with our work. However, it is restricted to coordinating between two cameras in clean landscapes without occlusions. We study collaborations between multiple cameras (n ≥ 3) and resolve the credit assignment issue with our novel reward design (CTCR). Meanwhile, we study a more challenging scenario with multiple distracting humans serving as sources of dynamic occlusions, which requires more sophisticated algorithms to handle.
Multi-Camera Collaboration Many works in computer vision have studied multi-camera collaboration and designed active camera systems accordingly. Earlier works (Collins et al., 2003; Qureshi & Terzopoulos, 2007; Matsuyama et al., 2012) focused on developing a network of pan-tilt-zoom (PTZ) cameras. Owing to recent advances in MARL algorithms (Lowe et al., 2017; Sunehag et al., 2017; Rashid et al., 2018; Wang et al., 2020; Yu et al., 2021; Jin et al., 2022), many works have formulated multi-camera collaboration as a multi-agent learning problem and solved it using MARL algorithms accordingly (Li et al., 2020; Xu et al., 2020; Wenhong et al., 2022; Fang et al., 2022; Sharma et al., 2022; Pan et al., 2022). However, most works focus on the target tracking problem, whereas this work attempts to solve the task of 3D HPE. Compared with the tracking task, 3D HPE has stricter criteria for optimal view selections due to correlations across multiple views, which necessitates intelligent collaboration between cameras. To the best of our knowledge, this work is the first to experiment with various camera agents (n ≥ 3) to learn multi-camera collaboration strategies for active 3D HPE.
3 PROACTIVE MULTI-CAMERA COLLABORATION
This section will explain the formulation of multi-camera collaboration in 3D HPE as a Dec-POMDP. Then, we will describe our proposed solutions for modelling the virtual environment’s complex dynamics and strengthening credit assignment in the multi-camera collaboration task.
3.1 PROBLEM FORMULATION
We formulate the multi-camera 3D HPE problem as a Decentralized Partially-Observable Markov Decision Process (Dec-POMDP), where each camera is considered an agent that is controlled in a decentralized manner and has partial observability over the environment. Formally, a Dec-POMDP is defined as $\langle S, O, A^n, n, P, r, \gamma \rangle$, where $S$ denotes the global state space of the environment, including all human states and camera states in our problem. $o_i \in O$ denotes agent $i$'s local observation, i.e., the RGB image observed by camera $i$. $A$ denotes the action space of an agent and $A^n$ represents the joint action space of all $n$ agents. $P: S \times A^n \to S$ is the transition probability function $P(s^{t+1} \mid s^t, \mathbf{a}^t)$, in which $\mathbf{a}^t \in A^n$ is a joint action by all $n$ agents. At each timestep $t$, every agent obtains a local view $o_i^t$ from the environment $s^t$ and then preprocesses $o_i^t$ to form the $i$-th agent's local observation $\tilde{o}_i^t$. The agent performs action $a_i^t \sim \pi_i^t(\cdot \mid \tilde{o}_i^t)$ and receives its reward $r(s^t, \mathbf{a}^t)$. $\gamma \in (0, 1]$ is the discount factor used to calculate the cumulative discounted reward $G(t) = \sum_{t' \geq t} \gamma^{t'-t} r(t')$.

In a cooperative team, the objective is to learn a group of decentralized policies $\{\pi_i(a_i^t \mid \tilde{o}_i^t)\}_{i=1}^{n}$ that maximizes $\mathbb{E}_{(s,a)\sim\pi}[G(t)]$. For convenience, we denote $i$ as the agent index, $[\![n]\!] = \{1, \ldots, n\}$ as the set of all $n$ agents, and $-i = [\![n]\!] \setminus \{i\}$ as all $n$ agents except agent $i$.

Observation Camera agents have partial observability over the environment. The pre-processed observation $\tilde{o}_i = (p_i, \xi_i, \xi_{-i})$ of camera agent $i$ consists of: (1) $p_i$, the states of the humans visible to agent $i$, including the detected human bounding-box in the 2D local view and the 3D positions and orientations of all visible humans, measured in both the local coordinate frame of camera $i$ and world coordinates; (2) its own camera pose $\xi_i$, giving the camera's position and orientation in world coordinates; (3) the peer camera poses $\xi_{-i}$, giving their positions and orientations in world coordinates, obtained via multi-agent communication.

Action Space The action space of each camera agent consists of the velocity of 3D egocentric translation $(x, y, z)$ and the velocity of 2D pitch-yaw rotation $(\theta, \psi)$. To reduce the exploration space for state-action mapping, the agent's action space is discretized into three levels across all five dimensions. At each timestep, the camera agent can move its position by $[+\delta, 0, -\delta]$ in the $(x, y, z)$ directions and rotate about the pitch-yaw axes by $[+\eta, 0, -\eta]$ degrees. In our experiments, the camera's pitch-yaw angles are controlled by a rule-based system.
3.2 FRAMEWORK
This section will describe the technical framework that constitutes our camera agents, which contains a Perception Module and a Controller Module. The Perception Module maps the original RGB images taken by the camera to numerical observations. The Controller Module takes these numerical observations and produces corresponding control signals. Fig. 2 illustrates this framework.

Perception Module The perception module executes a procedure consisting of four sequential stages: (1) 2D HPE. The agent performs 2D human detection and pose estimation on the observed RGB image with the YOLOv3 (Redmon & Farhadi, 2018) detector and the HRNet-w32 (Sun et al., 2019) pose estimator, respectively. Both models are pre-trained on the COCO dataset (Lin et al., 2014), and their parameters are kept frozen during policy learning of camera agents to ensure cross-scene generalization. (2) Person ReID. A ReID model (Zhong et al., 2018c) is used to distinguish people in a scene. For simplicity, an appearance dictionary of all people who may appear is built in advance following (Gärtner et al., 2020). At test time, the ReID network computes features for all detected people and identifies different people by comparing features to the pre-built appearance dictionary. (3) Multi-agent Communication. The detected 2D human poses, IDs, and the camera's own pose are broadcast to other agents. (4) 3D HPE. The 3D human pose is reconstructed via local triangulation after receiving communications from other agents. The estimated position and orientation of a person can then be extracted from the corresponding reconstructed human pose. The communication process is illustrated in Appendix Fig. 9.
Controller Module The controller module consists of a state encoder $E$ and an actor network $A$. The state encoder $E$ takes $\tilde{o}_i^t$ as input and encodes the temporal dynamics of the environment via an LSTM. The future states of the target, pedestrians, and cameras are modelled using Mixture Density Networks (MDN) (Bishop, 1994) to account for uncertainty. During model inference, the encoder predicts the target's position, and the $(\phi, \mu, \sigma)$ parameters of the target-prediction MDN are used as part of the inputs to the actor network to enhance feature encoding. Please refer to Section 3.4 for more details regarding training the MDN.
Feature Embedding: $z_i^t = \mathrm{MLP}(\tilde{o}_i^t)$, (1)
Temporal Modeling: $h_i^t = \mathrm{LSTM}(z_i^t, h_i^{t-1})$, (2)
Human Trajectory Prediction: $\hat{p}_{\mathrm{tgt/pd}}^{t+1} = \mathrm{MDN}(z_i^t, h_i^t, p_{\mathrm{tgt/pd}}^t)$, (3)
Final Embedding: $e_i^t = E(\tilde{o}_i^t, h_i^{t-1}) = \mathrm{Concat}\big(z_i^t, h_i^t, \{(\phi, \mu, \sigma)\}_{\mathrm{tgt}}^{\mathrm{MDN}}\big)$, (4)
where $p_{\mathrm{tgt}}$ and $p_{\mathrm{pd}}$ refer to the states of the target and the pedestrians, respectively. The actor network $A$ consists of 2 fully-connected layers that output the action, $a_i^t = A(E(\tilde{o}_i^t, h_i^{t-1}))$.
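To make Eqs. 1-4 concrete, here is a minimal PyTorch sketch of the state encoder; the layer widths, the number of mixture components, and the omission of the explicit target-state input to the MDN head are simplifications for illustration, not the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    """Sketch of Eqs. 1-4: MLP feature embedding, LSTM temporal modelling,
    and an MDN head whose parameters are concatenated into the final embedding."""

    def __init__(self, obs_dim, feat_dim=128, hidden_dim=128, n_components=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.lstm = nn.LSTMCell(feat_dim, hidden_dim)
        # Per mixture component: (phi, mu_x, mu_y, sigma_x, sigma_y) for the target's 2D position.
        self.mdn = nn.Linear(feat_dim + hidden_dim, n_components * 5)

    def forward(self, obs, state=None):
        z = self.mlp(obs)                                   # Eq. 1
        h, c = self.lstm(z, state)                          # Eq. 2 (state defaults to zeros)
        mdn_params = self.mdn(torch.cat([z, h], dim=-1))    # Eq. 3 (MDN parameters)
        e = torch.cat([z, h, mdn_params], dim=-1)           # Eq. 4
        return e, (h, c)
```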
3.3 REWARD STRUCTURE
To alleviate the credit assignment issue that arises in multi-camera collaboration, we propose the Collaborative Triangulation Contribution Reward (CTCR). We start by defining a base reward that reflects the reconstruction accuracy of the triangulated pose generated by the camera team. Then we explain how our CTCR is computed based on this base team reward.
Reconstruction Accuracy as a Team Reward To directly reflect the reconstruction accuracy, the reward function negatively correlates with the pose estimation error (Mean Per Joint Position Error, MPJPE) of the multi-camera triangulation. Formally,
$$r(X) = \begin{cases} 0, & |X| \le 1, \\ 1 - \mathrm{Geman}\big(\mathrm{MPJPE}(X)\big), & |X| \ge 2, \end{cases} \quad (5)$$
where the set $X$ represents the cameras participating in the triangulation and $\mathrm{Geman}(\cdot)$ is the Geman-McClure smoothing function, $\mathrm{Geman}(x) = \frac{2(x/c)^2}{(x/c)^2 + 4}$, used to stabilize policy updates, with $c = 50\,\mathrm{mm}$ in our experiments. However, the shared team reward structure in our MAPPO baseline, where each camera in the entire camera team $X$ receives a common reward $r(X)$, presents a credit assignment challenge, especially when a camera is occluded, resulting in a reduced reward for all cameras. To address this issue, we propose a new approach called Collaborative Triangulation Contribution Reward (CTCR).
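For concreteness, a minimal sketch of the team reward in Eq. 5 is given below. It assumes a helper `mpjpe(X)` that returns the per-frame reconstruction error (in mm) of the pose triangulated from the camera subset `X`; the helper name and interface are illustrative, not the released implementation.

```python
def geman_mcclure(x, c=50.0):
    """Geman-McClure smoothing used in Eq. 5 (c = 50 mm in our experiments)."""
    return 2.0 * (x / c) ** 2 / ((x / c) ** 2 + 4.0)

def team_reward(cameras, mpjpe):
    """Team reward r(X): zero for fewer than two cameras, else 1 - Geman(MPJPE(X)).

    `cameras` is a set of camera indices; `mpjpe` maps a frozenset of camera
    indices to the MPJPE of their triangulated pose (hypothetical helper)."""
    if len(cameras) <= 1:
        return 0.0
    return 1.0 - geman_mcclure(mpjpe(frozenset(cameras)))
```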
Collaborative Triangulation Contribution Reward (CTCR) CTCR computes each agent's individual reward based on its marginal contribution to the collaborative multi-view triangulation. Refer to Fig. 3 for a rundown of computing CTCR for a 3-camera team. The contribution of agent $i$ is measured by:
$$\mathrm{CTCR}(i) = n \cdot \varphi_r(i), \qquad \varphi_r(i) = \sum_{S \subseteq \llbracket n\rrbracket \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!}\,\big[r(S \cup \{i\}) - r(S)\big], \quad (6)$$
where $n$ denotes the total number of agents, $S$ ranges over all subsets of $\llbracket n\rrbracket$ not containing agent $i$, $\frac{|S|!\,(n-|S|-1)!}{n!}$ is the normalization term, and $[r(S \cup \{i\}) - r(S)]$ is the marginal contribution of agent $i$. Note that $\sum_{i \in \llbracket n\rrbracket} \varphi_r(i) = r(\llbracket n\rrbracket)$. We additionally multiply by the constant $n$ to rescale the CTCR to the same scale as the team reward. In particular, in the 2-camera case, the individual CTCR is equivalent to the team reward, i.e., $\mathrm{CTCR}(1) = \mathrm{CTCR}(2) = r(\{1, 2\})$. For more explanations on CTCR, please refer to Appendix Section G.
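Since our camera teams are small, the Shapley-style sum in Eq. 6 can simply be evaluated by enumerating subsets. The sketch below does exactly that, reusing a `team_reward`-style set function like the hypothetical one above; it is a reference computation, not the training-time implementation.

```python
from itertools import combinations
from math import factorial

def ctcr(i, n, reward):
    """CTCR(i) = n * Shapley value of camera i under the set function `reward` (Eq. 6).

    `reward` maps a frozenset of camera indices (a formation) to r(S) from Eq. 5.
    Enumerating all subsets is O(2^n), which is cheap for small camera teams."""
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            S = frozenset(subset)
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += weight * (reward(S | {i}) - reward(S))
    return n * phi
```

By construction, the per-agent values $\varphi_r(i)$ returned by this enumeration sum to the full-team reward $r(\llbracket n\rrbracket)$, which is the efficiency property noted above.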
3.4 LEARNING MULTI-CAMERA COLLABORATION VIA MARL
We employ the multi-agent learning variant of PPO (Schulman et al., 2017) called Multi-Agent PPO (MAPPO) (Yu et al., 2021) to learn the collaboration strategy. Alongside the RL loss, we jointly train the model with five auxiliary tasks that encourage comprehension of the world dynamics and the stochasticity in human behaviours. The pseudocode can be found in Appendix A.
World Dynamics Learning (WDL) We use the encoder's hidden states $(z_i^t, h_i^t)$ as the basis to model the world. Three WDL objectives correspond to modelling agent dynamics: (1) learning the forward dynamics of the camera, $P_1(\xi_i^{t+1} \mid z_i^t, h_i^t, a_i^t)$; (2) prediction of the team reward, $P_2(r^t \mid z_i^t, h_i^t, a_i^t)$; (3) prediction of the future positions of peer agents, $P_3(\xi_{-i}^{t+1} \mid z_i^t, h_i^t, a_i^t)$. Two WDL objectives correspond to modelling human dynamics: (4) prediction of the future position of the target person, $P_4(p_{\mathrm{tgt}}^{t+1} \mid z_i^t, h_i^t, p_{\mathrm{tgt}}^t)$; (5) prediction of the future positions of pedestrians, $P_5(p_{\mathrm{pd}}^{t+1} \mid z_i^t, h_i^t, p_{\mathrm{pd}}^t)$. All the probability functions above are approximated using Mixture Density Networks (MDNs) (Bishop, 1994).
Total Training Objectives $L_{\mathrm{Train}} = L_{\mathrm{RL}} + \lambda_{\mathrm{WDL}}\, L_{\mathrm{WDL}}$. $L_{\mathrm{RL}}$ is the reinforcement learning loss consisting of the PPO-Clipped loss and the centralized-critic network loss, similar to MAPPO (Yu et al., 2021). $L_{\mathrm{WDL}} = -\frac{1}{n} \sum_{l} \lambda_l \sum_{i} \mathbb{E}[\log P_l(\cdot \mid \tilde{o}_i^t, h_i^t, a_i^t)]$ is the world dynamics learning loss, consisting of MDN supervised losses on the five prediction tasks mentioned above.
4 EXPERIMENT
In this section, we first introduce our novel environment, UNREALPOSE, used for training and testing the learned policies. Then we compare our method with multi-passive-camera baselines and perform an ablation study on the effectiveness of the proposed CTCR and WDL objectives. Additionally, we evaluate the effectiveness of the learned policies by comparing them against other active multi-camera methods. Lastly, we test our method in four different scenarios to showcase its robustness.
4.1 UNREALPOSE: A VIRTUAL ENVIRONMENT FOR PROACTIVE HUMAN POSE ESTIMATION
We built four virtual environments for simulating active HPE in the wild using Unreal Engine 4 (UE4), a powerful 3D game engine that provides real-time, photo-realistic rendering for visually stunning video games. The environments handle the interactions between realistic-behaving human crowds and camera agents. Here is a list of characteristics of UNREALPOSE that we would like to highlight. Realistic: diverse generation of human trajectories, built-in collision avoidance, and several scenarios with different human appearances, terrains, and levels of illumination. Flexibility: extensive configuration of the numbers of humans and cameras and of their physical properties, with more than 100 MoCap action sequences incorporated. RL-Ready: integrated with the OpenAI Gym API, with the communication module of the UnrealCV (Qiu et al., 2017) plugin overhauled using an inter-process communication (IPC) mechanism. For more detailed descriptions, please refer to Appendix Section B.
4.2 EVALUATION METRICS
We use Mean Per Joint Position Error (MPJPE) as our primary evaluation metric, which measures the difference between the ground truth and the reconstructed 3D pose on a per-frame basis. However, using MPJPE alone may not provide a complete understanding of the robustness of a multi-camera collaboration policy, for two reasons: first, camera adjustments to improve perception quality may take multiple frames to complete, and second, high peaks in MPJPE may be missed by the mean aggregation. To address this, we introduce the "success rate" metric, which evaluates the smooth execution and robustness of the learned policies. The success rate is calculated as the ratio of frames in an episode with MPJPE lower than a threshold $\tau$. Formally, $\mathrm{SuccessRate}(\tau) = P(\mathrm{MPJPE} \le \tau)$. This metric is a temporal measure that reflects the integrity of multi-view coordination. Poor coordination may cause partial occlusions or too many overlapping perceptions, leading to a significant increase in MPJPE and a subsequent decrease in the success rate.
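Both metrics reduce to a few lines given per-frame joint predictions and ground truth; the array shapes and the millimetre units below are assumptions made for this sketch.

```python
import numpy as np

def per_frame_mpjpe(pred, gt):
    """Per-frame MPJPE. pred, gt: arrays of shape (frames, joints, 3), in millimetres."""
    return np.linalg.norm(pred - gt, axis=-1).mean(axis=-1)

def success_rate(pred, gt, tau=100.0):
    """Fraction of frames in an episode whose MPJPE is at most the threshold tau (mm)."""
    return float((per_frame_mpjpe(pred, gt) <= tau).mean())
```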
4.3 RESULTS AND ANALYSIS
The learning-based control policies were trained in a total of 28 instances of the BlankEnv, where each instance uniformly contained 1 to 6 humans. Each training run consisted of 700,000 steps, which corresponds to 1,000 training iterations. To ensure a fair evaluation, we report the mean metrics based on the data from the latest 100 episodes, each comprising 500 steps. The experiments were conducted in a 10 m × 10 m area, where the cameras and humans interacted with each other.
Active vs. Passive To show the necessity of proactive camera control, we compare the active camera control methods with three passive methods, i.e., Fixed Camera, Fixed Camera (RANSAC), and Fixed Camera (PlaneSweepPose). "Fixed Cameras" denotes that the poses of the cameras are fixed, hanging 3 m above the ground with a −35° camera pitch angle. The placements of these fixed cameras are carefully determined with strong priors, e.g., right-angle, triangle, square, and pentagon formations for 2, 3, 4, and 5 cameras, respectively. "RANSAC" denotes the method that uses RANSAC (Fischler & Bolles, 1981) for enhanced triangulation. "PlaneSweepPose" represents the off-the-shelf learning-based method (Lin & Lee, 2021b) for multi-view 3D HPE. Please refer to Appendix E.2 for more implementation details. We show the MPJPE and Success Rate versus a
different number of cameras in Fig. 5. We observe that all passive baselines are being outperformed by the active approaches due to their inability to adjust camera views against dynamic occlusions. The improvement of active approaches is especially significant when the number of cameras is less, i.e. when the camera system has little or no redundancy against occlusions. Notably, the MPJPE attained by our 3-camera policy is even lower than the MPJPE from 5 Fixed cameras. This suggests that proactive strategies can help reduce the number of cameras necessary for deployment.
The Effectiveness of CTCR and WDL We also perform ablation studies on the two proposed modules (CTCR and WDL) to analyze their benefits to performance. We take "MAPPO" as the active-camera baseline for comparison, which is our method but trained with a shared global reconstruction reward and without world dynamics modelling. Fig. 5 shows a consistent performance gap between the "MAPPO" baseline and our method (MAPPO + CTCR + WDL). The proposed CTCR mitigates the credit assignment issue by computing the weighted marginal contribution of each camera. CTCR also promotes faster convergence; training curves are shown in Appendix Fig. 13. Training with the WDL objectives further improves the MPJPE metric for our 2-camera model. However, its supporting effect gradually weakens as the number of cameras increases. We argue that this is caused by the more complex dynamics involved when more cameras simultaneously interact in the same environment. Notably, we observe that the agents trained with WDL generalize better to unseen scenes, as shown in Fig. 6.
Versus Other Active Methods To show the effectiveness of the learned policies, we further compare our method with other active multi-camera formation control methods in the 3-camera BlankEnv. "MAPPO" (Yu et al., 2021) and AirCapRL (Tallamraju et al., 2020) are two learning-based methods based on PPO (Schulman et al., 2017). The main difference between these two methods is the reward shaping technique, i.e., AirCapRL additionally employs multiple carefully-designed rewards (Tallamraju et al., 2020) for learning. We also programmed a rule-based fixed formation control method (keeping an equilateral triangle, spread by 120°) to track the target person. Results are shown in Table 1. Interestingly, these three baselines achieve comparable performance. Our method outperforms them, indicating a more effective multi-camera collaboration strategy for 3D HPE. For example, our method learns a spatially spread-out formation while automatically adjusting to avoid impending occlusion.
Generalize to Various Scenarios We train the control policies in BlankEnv while testing them in three other realistic environments (SchoolGym, UrbanStreet, and Wilderness) to evaluate their generalizability to unseen scenarios. Fig. 6 shows that our method consistently outperforms baseline
methods with lower variance in MPJPE during the evaluations in three test environments. We report the results in the BlankEnv as a reference.
Qualitative Analysis In Figure 7, we show six examples of the emergent formations of cameras under the policies trained with the proposed methods (CTCR + WDL). The camera agents learn to spread out and ascend above the humans to avoid occlusions and collisions. Their placements in an emergent formation are not assigned by other entities but rather determined by the decentralized control policies themselves, based on local observations and agent-to-agent communication. For more vivid examples of emergent formations, please refer to the project website for the demo videos.1 For more analysis on the behaviour modes of our 3-, 4- and 5-camera models, please refer to Appendix Section H.
5 CONCLUSION AND DISCUSSION
To our knowledge, this paper presents the first proactive multi-camera system targeting the 3D reconstruction in a dynamic crowd. It is also the first study regarding proactive 3D HPE that promptly experimented with multi-camera collaborations at different scales and scenarios. We propose CTCR to alleviate the multi-agent credit assignment issue when the camera team scales up. We identified multiple auxiliary tasks that improve the representation learning of complex dynamics. As a final note, we release our virtual environments and the visualization tool to facilitate future research.
Limitations and Future Directions Admittedly, several aspects of this work have room for improvement. First, the camera agents receive the positions of their teammates via non-disrupting broadcasts of information, which may be prone to packet losses during deployment. One idea is to incorporate a specialized protocol for multi-agent communication into our pipeline, such as ToM2C (Wang et al., 2022). Second, to reduce communication bandwidth (for example, to eliminate the need for image transmission between cameras), the current pipeline uses a Human Re-Identification module that requires a pre-scanned appearance memory of all human subjects who will appear; it may fail to recognize out-of-distribution human appearances, although ACTIVE3DPOSE can accommodate a more sophisticated ReID module (Deng et al., 2018) to resolve this shortcoming. Third, the camera control policy requires accurate camera poses, which may require developing a robust SLAM system (Schmuck & Chli, 2017; Zhong et al., 2018b) that works in dynamic environments with multiple cameras. Fourth, the motion patterns of targets in the virtual environments are based on manually designed animations, which can lead to poor generalization of the agents to unseen motion patterns; in the future, we can enrich the diversity by incorporating a cooperative-competitive multi-agent game (Zhong et al., 2021) into training. Lastly, we assume near-perfect calibration for a group of mobile cameras, which might be complicated to sustain in practice. Fortunately, we are seeing rising interest in parameter-free pose estimation (Gordon et al., 2022; Ci et al., 2022), which does not require online camera calibration and may help to resolve this limitation.
1Project Website for demo videos: https://sites.google.com/view/active3dpose
6 ETHICS STATEMENT
Our research into active 3D HPE technologies has the potential to bring many benefits, such as biomechanical analysis in sports and automated video-assisted coaching (AVAC). However, we recognize that these technologies can also be misused for repressive surveillance, leading to privacy infringements and human rights violations. We firmly condemn such malicious acts and advocate for the fair and responsible use of our virtual environment, UNREALPOSE, and all other 3D HPE technologies.
ACKNOWLEDGEMENT
The authors would like to thank Yuanfei Wang for discussions on world models; Tingyun Yan for his technical support on the first prototype of UNREALPOSE. This research was supported by MOST-2022ZD0114900, NSFC-62061136001, China National Post-doctoral Program for Innovative Talents (Grant No. BX2021008) and Qualcomm University Research Grant.
A TRAINING ALGORITHM PSUEDOCODE
Algorithm 1 Learning Multi-Camera Collaboration (CTCR + WDL)
1: Initialize: $n$ agents with a tied-weights MAPPO policy $\pi$, mixture density models (MDNs) for the WDL prediction models $\{(P_{\mathrm{self}}, P_{\mathrm{reward}}, P_{\mathrm{peer}}, P_{\mathrm{tgt}}, P_{\mathrm{pd}})\}_{\pi}$, and $E$ parallel environment rollouts
2: for Iteration $= 1, 2, \ldots, M$ do
3:   In each of the $E$ environment rollouts, each agent $i \in \llbracket n\rrbracket$ collects a trajectory of length $T$: $\tau = \big[(\tilde{o}_i^t, \tilde{o}_i^{t+1}, a_i^t, r_i^t, h_i^{t-1}, s^t, \mathbf{a}^{t-1}, r^t)\big]_{t=1}^{T}$
4:   Substitute the individual reward of each agent $r^t$ with CTCR: $r_i^t \leftarrow \mathrm{CTCR}(i)$ (Eq. 6)
5:   For each step in $\tau$, compute advantage estimates $\hat{A}_i^1, \ldots, \hat{A}_i^T$ with GAE (Schulman et al., 2015) for each agent $i \in \llbracket n\rrbracket$
6:   Yield a training batch $D$ of size $E \times T \times n$
7:   for Mini-batch SGD Epoch $= 1, 2, \ldots, K$ do
8:     Sample a stochastic mini-batch of size $B$ from $D$, where $B = |D|/K$
9:     Compute $z_i^t$ and $h_i^t$ using the encoder model $E_{\pi}$
10:    Compute the PPO-Clip objective loss $L_{\mathrm{PPO}}$, the global critic value loss $L_{\mathrm{Value}}$, and the adaptive KL loss $L_{\mathrm{KL}}$
11:    Compute the objectives that constitute $L_{\mathrm{WDL}}$:
       • Self-state prediction loss $L_{\mathrm{self}} = -\mathbb{E}_{\tau}[\log P(\xi_i^{t+1} \mid z_i^t, h_i^t, a_i^t)]$
       • Reward prediction loss $L_{\mathrm{reward}} = -\mathbb{E}_{\tau}[\log P(r^t \mid z_i^t, h_i^t, a_i^t)]$
       • Peer-state prediction loss $L_{\mathrm{peer}} = -\mathbb{E}_{\tau}[\log P(\xi_{-i}^{t+1} \mid z_i^t, h_i^t, a_i^t)]$
       • Target prediction loss $L_{\mathrm{tgt}} = -\mathbb{E}_{\tau}[\log P(p_{\mathrm{tgt}}^{t+1} \mid z_i^t, h_i^t, p_{\mathrm{tgt}}^t)]$
       • Pedestrian prediction loss $L_{\mathrm{pd}} = -\mathbb{E}_{\tau}[\log P(p_{\mathrm{pd}}^{t+1} \mid z_i^t, h_i^t, p_{\mathrm{pd}}^t)]$
12:    $L_{\mathrm{Train}} = \lambda_{\mathrm{PPO}} L_{\mathrm{PPO}} + \lambda_{\mathrm{Value}} L_{\mathrm{Value}} + \beta_{\mathrm{KL}} L_{\mathrm{KL}} + \lambda_{\mathrm{WDL}} L_{\mathrm{WDL}}$
13:    Optimize $L_{\mathrm{Train}}$ w.r.t. the current policy parameters $\theta_{\pi}$
B UNREALPOSE: ACCESSORIES AND MISCELLANEOUS ITEMS
Our UnrealPose virtual environment supports different active vision tasks, such as active human pose estimation and active tracking. This environment also supports various settings ranging from single-target single-camera settings to multi-target multi-camera settings. Here we provide a more detailed description of the three key characteristics of UnrealPose:
Realistic The built-in navigation system governs the collision avoidance movements of virtual humans against dynamic obstacles. In the meantime, it also ensures diverse generation of walking trajectories. These features enable users to simulate a realistic-looking crowd exhibiting socially acceptable behaviors. We have also provided several pre-set scenes, e.g., school gym, wilderness, urban crossing, etc. These scenes have notable differences in illumination, terrain, and crowd appearance to reflect dramatically different-looking scenarios in real life.
Extensive Configuration The environment can be configured with different numbers of humans and cameras and swapped across other scenarios with ease, which we demonstrated in Fig 4(a-d). Besides simulating walking human crowds, the environment incorporates over 100 Mocap action sequences with smooth animation interpolations to enrich the data variety for other MoCap tasks.
RL-Ready We use the UnrealCV (Qiu et al., 2017) plugin as the medium to acquire images and annotations from the environment. The original UnrealCV plugin suffers from unstable data transfer and unexpected disconnections under high CPU workloads. To ensure fast and reliable data acquisition for large-scale MARL experiments, we overhauled the communication module in the UnrealCV plugin with inter-process communication (IPC) mechanism, which eliminates the aforementioned instabilities.
B.1 VISUALIZATION TOOL
We provide a visualization tool to facilitate per-frame analysis of the learned policy and reconstruction results (shown in Fig. 8). The main interface consists of four parts: (1) Live 2D views from all cameras. (2) 3D spatial view of camera positions and reconstructions. (3) Plot of statistics. (4) Frame control bar. This visualization tool supports different numbers of humans and cameras. Meanwhile, it is written in Python to support easy customization.
B.2 LICENSE
All assets used in the environment are commercially-available and obtained from the UE4 Marketplace. The environment and tools developed in this work are licensed under Apache License 2.0.
C OBSERVATION PROCESSING
Fig. 9 shows the pipeline of the observation processing. Each camera observes an RGB image and detects the 2D human poses and IDs via the Perception Module described in the main paper. The camera pose, 2D human poses, and IDs are then broadcast to other cameras for multi-view 3D triangulation. The human position is calculated as the median of the reconstructed joints. The human orientation is calculated from the cross product of the reconstructed shoulder and spine directions.
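As a small illustration of this post-processing, the position and orientation of a person could be extracted from a reconstructed skeleton as follows; the joint indices are placeholders that depend on the skeleton convention in use.

```python
import numpy as np

def human_position(joints3d):
    """Robust 3D position: per-axis median over the reconstructed joints.
    joints3d: array of shape (num_joints, 3)."""
    return np.median(joints3d, axis=0)

def human_orientation(joints3d, l_shoulder=5, r_shoulder=6, pelvis=0, neck=1):
    """Facing direction from the cross product of the shoulder axis and the spine axis.
    The joint indices are hypothetical and depend on the skeleton definition."""
    shoulder_axis = joints3d[l_shoulder] - joints3d[r_shoulder]
    spine_axis = joints3d[neck] - joints3d[pelvis]
    facing = np.cross(shoulder_axis, spine_axis)
    return facing / (np.linalg.norm(facing) + 1e-8)
```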
D IMPLEMENTATION DETAILS
D.1 TRAINING DETAILS
All control policies are trained in the BlankEnv scene. At the testing stage, we apply zero-shot transfer of the learned policies to three realistic scenes: SchoolGym, UrbanStreet, and Wilderness.
To simulate a dynamic human crowd that exhibits highly random behaviors, we sample arbitrary goals for each human and employ the built-in navigation system to generate collision-free trajectories. Each human walks at a random speed. To ensure generalization across different numbers of humans, we train our RL policy with a mixture of environments containing 1 to 6 humans. The learning rate is set to $5 \times 10^{-4}$ with scheduled decay during the training phase. The annealing schedule for the learning rate is detailed in Table 2. The maximum episode length is 500 steps, the discount factor $\gamma$ is 0.99, and the GAE horizon is 25 steps. Each sampling iteration produces a training batch of 700 steps, on which we perform 16 iterations with 2 SGD mini-batch updates per iteration (i.e., SGD batch size = 350).
Table 2 shows the common training hyper-parameters shared between the baseline model (MAPPO) and all of our methods. Table 4 shows the hyper-parameters for the WDL module.
D.2 DIMENSIONS OF FEATURE TENSORS IN THE CONTROLLER MODULE
Table 3 serves as a complementary description for Eqs. 1-4. The table shows the dimensions of the feature tensors used in the controller module. "B" denotes the batch size. In the current model design, the dimension of the local observation is adjusted based on the maximum number of camera agents ($N_{\mathrm{cam}}^{\max}$) and the maximum number of observable humans ($N_{\mathrm{human}}^{\max}$) in an environment. In our experiments, $N_{\mathrm{human}}^{\max}$ has been set to 7. The observation pre-processor zero-pads each observation to a length equal to $N_{\mathrm{human}}^{\max} \times 18$ if the current environment instance has fewer than $N_{\mathrm{human}}^{\max}$ humans. "9" and "18" correspond to the feature lengths for a camera and a human, respectively. The MDN of the Target Prediction module has 16 Gaussian components, each of which outputs $(\phi, \mu_x, \sigma_x, \mu_y, \sigma_y)$, where $\phi$ is the weight parameter of a component. The current implementation of the MDN only predicts the x-y location of a human, which is a simplification since the z coordinate of a simulated human barely changes across an episode compared to the x and y coordinates. The MDN output therefore has a length of 80, and the exact prediction is produced by $\phi$-weighted averaging. In Eq. 4, $\{(\phi, \mu, \sigma)\}_{\mathrm{tgt}}^{\mathrm{MDN}}$ is an encoded feature produced by passing the MDN output through a 2-layer MLP that has an output dimension of 128.
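For reference, the $\phi$-weighted point prediction described above can be collapsed from the raw MDN output as in the sketch below; the tensor layout is an assumption for illustration.

```python
import torch

def mdn_point_prediction(phi_logits, mu):
    """Collapse an MDN into a single x-y prediction by phi-weighted averaging.

    phi_logits: (batch, K) unnormalized mixture weights;
    mu:         (batch, K, 2) per-component means of the target's x-y position."""
    phi = torch.softmax(phi_logits, dim=-1)        # normalized mixture weights
    return (phi.unsqueeze(-1) * mu).sum(dim=1)     # (batch, 2)
```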
D.3 COMPUTATIONAL RESOURCES
We used 15 Ray workers for each experiment to ensure consistency in the training procedure. Each worker carries a Gym vectorized environment consisting of 4 actual environment instances. Each worker demands approximately 3.7GB of VRAM. We run each experiment with 8 NVIDIA RTX 2080 Ti GPUs. Depending on the number of camera agents, the total training hours required for an experiment to run 500k steps will vary between 4 to 12 hours.
D.4 TRAINING CURVE
Fig. 13 shows the training curves of our method and the baseline method. Our method converges faster and achieves better reconstruction accuracy than the baseline.
D.5 TOTAL INFERENCE TIME
Our solution can run in real-time. Table 5 reports the inference time for each module of the proposed Active3DPose pipeline.
E ADDITIONAL EXPERIMENT RESULTS
E.1 ABLATION STUDY ON WDL OBJECTIVES
We perform a detailed ablation study regarding the effect of each WDL sub-task on the model performance. As shown in Fig. 11, we can observe that the MPJPE metric gradually decreases as we incorporate more WDL losses. This aligns with our assumptions that training the model with world dynamics learning objectives will promote the model’s ability to capture a better representation of future states, which in turn increases performance. Our method additionally demonstrates the importance of incorporating information regarding the target’s future state into the encoder’s output features. Predicting the target’s future states should not only be used as an auxiliary task but should also directly influence the inference process of the actor model.
E.2 BASELINE — FIXED-CAMERAS
In addition to the triangulation and RANSAC baselines introduced in the main text, we compare and elaborate on two more baselines that use fixed cameras: (1) fixed cameras with RANSAC-based triangulation and temporal smoothing (TS), and (2) an off-the-shelf 3D pose estimator, PlaneSweepPose (Lin & Lee, 2021a).
In the temporal smoothing baseline, we applied a low-pass filter (Casiez et al., 2012) and temporal fusion where the algorithm will fill in any missing key points in the current frame with the detected key points from the last frame.
In the PlaneSweepPose baseline, following the official instructions, we train three separate models (3 to 5 cameras) with the same camera setups as in our testing scenarios. We have tested the trained models in different scenarios and report the MPJPE results in Tables 6, 7, 8 and 9. Note that this off-the-shelf pose estimator performs better than the Fixed-Camera baseline (triangulation) but still underperforms compared to our active method. Fig. 12 illustrates the formations of the fixed-camera baselines.
Camera placements for all fixed-camera baselines are shown in Fig. 12. These formations are carefully designed so as not to disadvantage the fixed-camera baselines. In particular, the 5-camera pentagon formation helps the fixed-camera baseline obtain satisfactory performance in the 5-camera setting, as shown in Tables 6, 7, 8 and 9.
E.3 OUR METHOD ENHANCED WITH RANSAC-BASED TRIANGULATION
RANSAC is a generic technique that can be used to improve triangulation performance. In this experiment, we also train and test our model with RANSAC. The final result (Table 11) shows a further improvement on our original triangulation version.
F GENERATING SAFE AND SMOOTH TRAJECTORIES
F.1 COLLISION AVOIDANCE
In order to generate safe trajectories, in this section, we introduce and evaluate two different ways to enforce collision avoidance between cameras and humans.
Obstacle Collision Avoidance (OCA) OCA resembles a feed-forward PID controller on the final control outputs before execution. Concretely, OCA adds a constant reverse quantity to the control output if it detects any surrounding objects within its safety range. This “detouring” mechanism safeguards the cameras from possible collisions and prevents them from making dangerous maneuvers.
Action-Masking (AM) AM also resembles a feed-forward controller but is instead embedded into the forward step of the deep-learning model. At each step, AM module first identifies the dangerous actions among all possible actions, then modifies the probabilities (output by the policy model) of choosing the hazardous actions to be zero so that the learning-based control policy will never pick them. Note that AM must be trained with the MARL policy model.
We propose the minimum camera-human distance (where the scope includes both the target and the pedestrians) as the safety metric. It measures the distance between the closest human and the camera at each timestep. Fig. 14 shows the histograms of the minimum camera-human distance sampled over five episodes of 500 steps.
F.2 TRAJECTORY SMOOTHING
Exponential Moving Average (EMA) To generate smooth trajectories, we introduce EMA to smooth the outputs of our learned policy model. EMA is a common technique for smoothing time-series data. In our case, the smoothing operator is defined as
$$\hat{a}_t = \hat{a}_{t-1} + \eta \cdot (a_t - \hat{a}_{t-1}),$$
where $a_t$ is the action (or control signal) output by the model at the current step, $\hat{a}_{t-1}$ is the smoothed action from the last step, and $\eta$ is the smoothing factor. A smaller $\eta$ results in greater smoothness. $\hat{a}_t$ is the smoothed action that the camera executes.
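A minimal sketch of this action smoother, assuming actions are real-valued vectors, is given below; the default $\eta$ is a placeholder.

```python
import numpy as np

class ActionSmoother:
    """EMA over policy outputs: a_hat_t = a_hat_{t-1} + eta * (a_t - a_hat_{t-1})."""

    def __init__(self, eta=0.5):
        self.eta = eta
        self.prev = None

    def __call__(self, action):
        action = np.asarray(action, dtype=float)
        if self.prev is None:
            self.prev = action.copy()   # first step: nothing to smooth against yet
        else:
            self.prev = self.prev + self.eta * (action - self.prev)
        return self.prev
```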
F.3 ROBUSTNESS OF THE LEARNED POLICY
In this section, we evaluate the robustness of our model on different external perturbations to the control signal. In conclusion, our model shows resilience against the delay effect and random noise.
Delay EMA also introduces a delay while smoothing the generated trajectory: a higher level of smoothing corresponds to a larger delay. Here, we evaluate our model's robustness to the EMA-induced delay.
Random Action Noise Control devices in real life are inevitably affected by errors; for example, the control signal of a controller may be over-damped or may overshoot. We simulate this type of random error by multiplying the output action by uniformly sampled noise.
Table 12 shows the effects of EMA delay and random action noise on the reconstruction accuracy of our model; the unperturbed setting is marked as "Vanilla".
G MORE EXPLANATIONS ON CTCR
Figure 3 is an example of using Eq. 6 to compute the CTCR for each of the three cameras. The CTCR is inspired by the Shapley Value. The main idea is that the overall optimality also needs to account for the optimality of every possible sub-formation. For a camera agent to receive the highest CTCR possible, its current position and view must be optimal both in terms of its current formation and of any possible sub-formation.
Note: a group of collaborating players is often referred to as a "coalition" in the literature. Here we apply the same concept to a group of cameras, so we use the more intuitive term "formation" instead.
Eq. 6 can be broken down further as follows:
$$\varphi_r(i) = \sum_{S \subseteq \llbracket n\rrbracket \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!}\,[r(S \cup \{i\}) - r(S)] = \frac{1}{n} \sum_{S \subseteq \llbracket n\rrbracket \setminus \{i\}} \frac{|S|!\,(n - 1 - |S|)!}{(n-1)!}\,[r(S \cup \{i\}) - r(S)] = \frac{1}{n} \sum_{S \subseteq \llbracket n\rrbracket \setminus \{i\}} \binom{n-1}{|S|}^{-1} [r(S \cup \{i\}) - r(S)],$$
where $\llbracket n\rrbracket = \{1, 2, \ldots, n\}$ denotes the set of all cameras and the binomial coefficient is $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$, $0 \le k \le n$. $S$ denotes a formation (a subset) without camera $i$. $\binom{n-1}{|S|}$ is the number of subsets of size $|S|$ (i.e., the binomial coefficient), which serves as a normalization term. $r(S)$ computes the reconstruction accuracy of the formation $S$, and $[r(S \cup \{i\}) - r(S)]$ computes the marginal improvement after adding camera $i$ to the sub-formation $S$. So this equation means we iterate over all possible $S$, compute the marginal contribution of camera $i$, and average over all possible combinations of $(S, i)$.
Suppose we have a 3-camera formation, similar to the one shown in Figure 3, so $n = 3$. Let us name these cameras $(1, 2, 3)$ and consider only Camera 1 for now. Since we are computing the average marginal contribution of Camera 1, we look at the formations that do not contain Camera 1, because we want to see how much performance increases when Camera 1 is added to them. Among all possible formations over $\llbracket n\rrbracket$, four satisfy this condition: $S \subseteq \llbracket n\rrbracket \setminus \{1\} \;\Longrightarrow\; S \in \{\emptyset, \{2\}, \{3\}, \{2, 3\}\}$. The binomial coefficient $\binom{n-1}{|S|}$ for a 2-camera sub-formation in the 3-camera case is $\binom{2}{2} = 1$, which makes sense because there exists only one unique combination that does not contain Camera 1, namely the sub-formation $\{2, 3\}$. $r(\{2, 3\})$ computes the reconstruction accuracy of the formation $\{2, 3\}$, and $r(\{2, 3\} \cup \{1\})$ computes the reconstruction accuracy after adding Camera 1 to the sub-formation $\{2, 3\}$; their difference gives the marginal contribution of Camera 1. Summing over all subsets $S$ of $\llbracket n\rrbracket$ not containing Camera 1, and dividing by $\binom{n-1}{|S|}$ and the number of cameras $n$, we obtain the average marginal contribution of Camera 1, $\varphi_r(1)$, to the collaborative triangulated reconstruction. Multiplying this term by $n$ yields $\mathrm{CTCR}(1)$, as shown in Eq. 6.
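This subset enumeration is exactly what the illustrative `ctcr` sketch in Section 3.3 performs. The snippet below runs it on a 3-camera example with made-up reconstruction accuracies (cameras are 0-indexed here), purely to show the bookkeeping; it assumes the `ctcr` function from that earlier sketch is in scope.

```python
# Hypothetical reconstruction accuracies r(S) for illustration only (cameras 0, 1, 2).
r = {frozenset(): 0.0, frozenset({0}): 0.0, frozenset({1}): 0.0, frozenset({2}): 0.0,
     frozenset({0, 1}): 0.60, frozenset({0, 2}): 0.58, frozenset({1, 2}): 0.55,
     frozenset({0, 1, 2}): 0.80}

reward = lambda S: r[frozenset(S)]
print([round(ctcr(i, 3, reward), 3) for i in range(3)])  # per-camera CTCR values
```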
H ANALYSIS ON MODES OF BEHAVIORS OF THE TRAINED AGENTS
Here in Figure 15 we provide statistics and analysis on the behaviour modes of the agents controlled by our 3-, 4- and 5-camera policies, respectively. We are interested in understanding the characteristics of the emergent formations learned by our model, and therefore propose three quantitative measures to describe their topology: (1) the min-camera angle between the cameras' orientations and its per-frame mean, (2) the camera's pitch angle, and (3) the camera-human distance. Their definitions are as follows:
$$\text{min-camera angle}(i) = \min_{j \neq i} \,\langle \text{axis of camera } i,\ \text{axis of camera } j \rangle,$$
$$\text{per-frame mean of min-camera angle} = \frac{1}{n} \sum_{i \in [n]} \text{min-camera angle}(i).$$
In simpler terms, min-camera angle$(i)$ is the minimum angle between camera $i$ and any other camera $j$, and the "per-frame mean of min-camera angle" is the mean of min-camera angle$(i)$ over all cameras $i$ in one frame. A positive camera pitch angle means looking upward, and a negative camera pitch angle means looking downward. The camera-human distance measures the distance between the target human and the given camera.
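These formation statistics can be computed per frame as in the sketch below, which assumes each camera axis is given as a 3D viewing-direction vector.

```python
import numpy as np

def min_camera_angles(axes):
    """Per-camera minimum pairwise angle (degrees) between camera viewing directions.
    axes: array of shape (n_cameras, 3); rows need not be pre-normalized."""
    axes = axes / np.linalg.norm(axes, axis=-1, keepdims=True)
    cos = np.clip(axes @ axes.T, -1.0, 1.0)
    np.fill_diagonal(cos, -1.0)            # exclude each camera's angle with itself
    angles = np.degrees(np.arccos(cos))
    return angles.min(axis=1)              # min-camera angle(i)

def per_frame_mean_min_angle(axes):
    """Per-frame mean of min-camera angle over all cameras."""
    return float(min_camera_angles(axes).mean())
```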
Regarding the distance between cameras and humans, the camera agents actively adjust their positions relative to the target human. The cameras learn to keep a camera-human distance between 2 m and 3 m: neither too distant from the target human nor so close as to violate the safety constraint. The camera agents surround the target at an appropriate distance to obtain a better resolution in the 2D views. In the meantime, as the safe-distance constraint is enforced during training, the camera agents are prohibited from aggressively minimizing the camera-human distance.
The histograms of the camera pitch angle suggest that the cameras mostly maintain negative pitch angles. Their preferred strategy is to hover over the humans and capture images from a higher altitude, likely because of emergent occlusion avoidance. The cameras also learn to fly level with the humans (where the pitch angle approximately equals 0) to capture more accurate 2D poses of the target human; this propensity is apparent from the peaks at 0° in the histograms. The relatively wide distribution of the pitch-angle histogram suggests that the camera formation is spread out in 3D space and that the camera agents dynamically adjust their flying heights and pitch angles.
For the average angle between the cameras' orientations, this statistic shows that the cameras in various formations will maintain reasonable non-zero spatial angles between each other. Therefore, their camera views are less likely to coincide and provide more diverse perspectives to generate a more reliable 3D triangulation.
Summary Of The Paper
The paper develops an active vision scheme where multiple non-stationary cameras reconfigure themselves to achieve high-quality 3D human pose estimation. The camera control problem is cast within the multi-agent reinforcement learning framework. The paper also presents a new reward formulation that incentivizes the cameras according to their weighted marginal contribution to the 3D human pose reconstruction quality. This reward seems to address the multi-agent credit assignment issue. The proposed model is trained within a 3D environment populated with lifelike pedestrians. The model is jointly trained with world-dynamics-learning task; therefore, it exhibits anticipatory occlusion avoidance, improving the reconstruction accuracy.
Strengths And Weaknesses
This is a well-written paper that studies an interesting problem. The paper claims four "key" contributions; however, not all of these contributions have the same scientific or technological impact. I suggest dividing this list into primary and secondary lists. The key contributions, I feel, are: 1) the problem setup and 2) the new reward formulation. The engineering effort in setting up the 3D world that others can use for their own work seems to me a secondary contribution.
When speaking of active camera collaboration schemes for scene analysis and pedestrian tracking and the use of 3D worlds to study such multicamera systems, the following work comes to mind
Smart Camera Networks in Virtual Reality. Qureshi, F.Z.; and Terzopoulos, D. Proceedings of the IEEE (Special Issue on "Smart Cameras"), 96(10): 1640–1656. October 2008.
This is highly relevant to the work presented in this paper, and it should be cited in Multicamera Collaboration Section. This work studies multi-camera control and uses a 3D world with autonomous pedestrians to develop and evaluate the camera coordination on the task of pedestrian tracking.
It may be useful to provide a more detailed caption for Figure 2. Specifically, please define the variables not defined anywhere else.
It will be beneficial to provide the number of dimensions for each variable used in equations 1 to 4. I feel that this will increase the clarity of the proposed approach.
In Section 3.4 what is the difference between the future position of target person and the future position of pedestrians?
I tried but failed to wrap my head around Eq. 6. Perhaps it is possible to revise this discussion. One possibility is to construct Eq. 6 term by term. This equation deals with the reward for each camera, which, along with the problem setup, I feel is one of the primary scientific contributions of this work.
I was pleased to see that the paper includes a discussion about the limitations of the proposed method. Perhaps this discussion should be expanded. For example, it seems that this work assumes near-perfect camera calibration, which is not always possible to achieve in practice. It is especially hard to maintain the calibration of a group of cameras that move around.
I was also pleased to see the paragraph about the ethical considerations of using such systems. I ask that this discussion be moved to the main body of the paper. It is too important to be relegated to the appendix.
Please use a different "bibstyle." As it stands it is tedious to connect a citation to its reference.
The paper asserts that the group of cameras learn occlusion-avoiding anticipatory behavior. Actually this was one of the reasons why the model was also trained on world-dynamics-learning-tasks. Perhaps there is a way to provide some results that support this claim.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written. The work is novel in the sense that it proposes a new formulation for the control and coordination of a group of non-stationary cameras for the task of 3D human pose estimation. The work suggests a large engineering effort---from setting up a 3D environment to training a large, non-trivial reinforcement learning model. I like the fact that the paper promises to open-source the virtual environment and the visualization software. |
ICLR | Title
Proactive Multi-Camera Collaboration for 3D Human Pose Estimation
Abstract
This paper presents a multi-agent reinforcement learning (MARL) scheme for proactive Multi-Camera Collaboration in 3D Human Pose Estimation in dynamic human crowds. Traditional fixed-viewpoint multi-camera solutions for human motion capture (MoCap) are limited in capture space and susceptible to dynamic occlusions. Active camera approaches proactively control camera poses to find optimal viewpoints for 3D reconstruction. However, current methods still face challenges with credit assignment and environment dynamics. To address these issues, our proposed method introduces a novel Collaborative Triangulation Contribution Reward (CTCR) that improves convergence and alleviates multi-agent credit assignment issues resulting from using 3D reconstruction accuracy as the shared reward. Additionally, we jointly train our model with multiple world dynamics learning tasks to better capture environment dynamics and encourage anticipatory behaviors for occlusion avoidance. We evaluate our proposed method in four photo-realistic UE4 environments to ensure validity and generalizability. Empirical results show that our method outperforms fixed and active baselines in various scenarios with different numbers of cameras and humans. (a) Dynamic occlusions lead to failed reconstruction (b) Constrained MoCap area Active MoCap in the wild Figure 1: Left: Two critical challenges in fixed camera approaches. Right: Three active cameras collaborate to best reconstruct the 3D pose of the target (marked in ).
1 INTRODUCTION
Marker-less motion capture (MoCap) has broad applications in many areas such as cinematography, medical research, virtual reality (VR), sports, and etc. Their successes can be partly attributed to recent developments in 3D Human pose estimation (HPE) techniques (Tu et al., 2020; Iskakov et al., 2019; Jafarian et al., 2019; Pavlakos et al., 2017b; Lin & Lee, 2021b). A straightforward implementation to solve multi-views 3D HPE is to use fixed cameras. Although being a convenient solution, it is less effective against dynamic occlusions. Moreover, fixed camera solutions confine tracking targets within a constrained space, therefore less applicable to outdoor MoCap. On the contrary, active cameras (Luo et al., 2018; 2019; Zhong et al., 2018a; 2019) such as ones mounted on drones can maneuver proactively against incoming occlusions. Owing to its remarkable flexibility, the active approach has thus attracted overwhelming interest (Tallamraju et al., 2020; Ho et al., 2021; Xu et al., 2017; Kiciroglu et al., 2019; Saini et al., 2022; Cheng et al., 2018; Zhang et al., 2021).
∗Equal Contribution. BCorresponding author. Project Website: https://sites.google.com/view/active3dpose
Previous works have demonstrated the effectiveness of using active cameras for 3D HPE on a single target in indoor (Kiciroglu et al., 2019; Cheng et al., 2018), clean landscapes (Tallamraju et al., 2020; Nägeli et al., 2018; Zhou et al., 2018; Saini et al., 2022) or landscapes with scattered static obstacles (Ho et al., 2021). However, to the best of our knowledge, we have not seen any existing work that experimented with multiple (n > 3) active cameras to conduct 3D HPE in human crowd. There are two key challenges : First, frequent human-to-human interactions lead to random dynamic occlusions. Unlike previous works that only consider clean landscapes or static obstacles, dynamic scenes require frequent adjustments of cameras’ viewpoints for occlusion avoidance while keeping a good overall team formation to ensure accurate multi-view reconstruction. Therefore, achieving optimality in dynamic scenes by implementing a fixed camera formation or a hand-crafted control policy is challenging. In addition, the complex behavioural pattern of a human crowd makes the occlusion patterns less comprehensible and predictable, further increasing the difficulty in control. Second, as the team size grows larger, the multi-agent credit assignment issue becomes prominent which hinders policy learning of the camera agents. Concretely, multi-view 3D HPE as a team effort requires inputs from multiple cameras to generate an accurate reconstruction. Having more camera agents participate in a reconstruction certainly introduces more redundancy, which reduces the susceptibility to reconstruction failure caused by dynamic occlusions. However, it consequently weakens the association between individual performance and the reconstruction accuracy of the team, which leads to the “lazy agent” problem (Sunehag et al., 2017). In this work, we introduce a proactive multi-camera collaboration framework based on multi-agent reinforcement learning (MARL) for real-time distributive adjustments of multi-camera formation for 3D HPE in a human crowd. In our approach, multiple camera agents perform seamless collaboration for successful reconstructions of 3D human poses. Additionally, it is a decentralized framework that offers flexibility over the formation size and eliminates dependency on a control hierarchy or a centralized entity. Regarding the first challenge, we argue that the model’s ability to predict human movements and environmental changes is crucial. Thus, we incorporate World Dynamics Learning (WDL) to train a state representation with these properties, i.e., learning with five auxiliary tasks to predict the target’s position, pedestrians’ positions, self state, teammates’ states, and team reward. To tackle the second challenge, we further introduce the Collaborative Triangulation Contribution Reward (CTCR), which incentivizes each agent according to its characteristic contribution to a 3D reconstruction. Inspired by the Shapley Value (Rapoport, 1970), CTCR computes the average weighted marginal contribution to the 3D reconstruction for any given agent over all possible coalitions that contain it. This reward aims to directly associate agents’ levels of participation with their adjusted return, guiding their policy learning when the team reward alone is insufficient to produce such direct association. Moreover, CTCR penalizes occluded camera agents more efficiently than the shared reward, encouraging emergent occlusion avoidance behaviors. 
Empirical results show that CTCR can accelerate convergence and increase reconstruction accuracy. Furthermore, CTCR is a general approach that can benefit policy learning in active 3D HPE and serve as a new assessment metric for view selection in other multi-view reconstruction tasks. For the evaluations of the learned policies, we build photo-realistic environments (UnrealPose) using Unreal Engine 4 (UE4) and UnrealCV (Qiu et al., 2017). These environments can simulate realistic-behaving crowds with assurances of high fidelity and customizability. We train the agents on a Blank environment and validate their policies on three unseen scenarios with different landscapes, levels of illumination, human appearances, and various quantities of cameras and humans. The empirical results show that our method can achieve more accurate and stable 3D pose estimates than off-the-shelf passive- and active-camera baselines. To help facilitate more fruitful research on this topic, we release our environments with OpenAI Gym-API (Brockman et al., 2016) integration and together with a dedicated visualization tool. Here we summarize the key contributions of our work: • Formulating the active multi-camera 3D human pose estimation problem as a Dec-POMDP and
proposing a novel multi-camera collaboration framework based on MARL (with n ≥ 3). • Introducing five auxiliary tasks to enhance the model’s ability to learn the dynamics of highly
dynamic scenes. • Proposing CTCR to address the credit assignment problem in MARL and demonstrating notable
improvements in reconstruction accuracy compared to both passive and active baselines. • Contributing high-fidelity environments for simulating realistic-looking human crowds with au-
thentic behaviors, along with visualization software for frame-by-frame video analysis.
2 RELATED WORK
3D Human Pose Estimation (HPE) Recent research on 3D human pose estimation has shown significant progress in recovering poses from single monocular images (Ma et al., 2021; Pavlakos et al., 2017a; Martinez et al., 2017; Kanazawa et al., 2018; Pavlakos et al., 2018; Sun et al., 2018; Ci et al., 2019; Zeng et al., 2020; Ci et al., 2020) or monocular video (Mehta et al., 2017; Hossain & Little, 2018; Pavllo et al., 2019; Kocabas et al., 2020). Other approaches utilize multi-camera systems for triangulation to improve visibility and eliminate ambiguity (Qiu et al., 2019; Jafarian et al., 2019; Pavlakos et al., 2017b; Dong et al., 2019; Lin & Lee, 2021b; Tu et al., 2020; Iskakov et al., 2019). However, these methods are often limited to indoor laboratory environments with fixed cameras. In contrast, our work proposes an active camera system with multiple mobile cameras for outdoor scenes, providing greater flexibility and adaptability.
Proactive Motion Capture Few previous works have studied proactive motion capture with a single mobile camera (Zhou et al., 2018; Cheng et al., 2018; Kiciroglu et al., 2019). In comparison, more works have studied the control of a multi-camera team. Among them, many are based on optimization with various system designs, including marker-based (Nägeli et al., 2018), RGBD-based (Xu et al., 2017), two-stage system (Saini et al., 2019; Tallamraju et al., 2019), hierarchical system (Ho et al., 2021), etc. It is important to note that all the above methods deal with static occlusion sources or clean landscapes. Additionally, the majority of these works adopt hand-crafted optimization objectives and some forms of fixed camera formations. These factors result in poor adaptability to dynamic scenes that are saturated with uncertainties. Recently, RL-based methods have received more attention due to their potential for dynamic formation adjustments. These works have studied active 3D HPE in the Gazebo simulation (Tallamraju et al., 2020) or Panoptic dome (Joo et al., 2015; Pirinen et al., 2019; Gärtner et al., 2020) for active view selection. Among them, AirCapRL (Tallamraju et al., 2020) shares similarities with our work. However, it is restricted to coordinating between two cameras in clean landscapes without occlusions. We study collaborations between multiple cameras (n ≥ 3) and resolve the credit assignment issue with our novel reward design (CTCR). Meanwhile, we study a more challenging scenario with multiple distracting humans serving as sources of dynamic occlusions, which requires more sophisticated algorithms to handle.
Multi-Camera Collaboration Many works in computer vision have studied multi-camera collaboration and designed active camera systems accordingly. Earlier works (Collins et al., 2003; Qureshi & Terzopoulos, 2007; Matsuyama et al., 2012) focused on developing a network of pan-tile-zoom (PTZ) cameras. Owing to recent advances in MARL algorithms (Lowe et al., 2017; Sunehag et al., 2017; Rashid et al., 2018; Wang et al., 2020; Yu et al., 2021; Jin et al., 2022), many works have formulated multi-camera collaboration as a multi-agent learning problem and solved it using MARL algorithms accordingly (Li et al., 2020; Xu et al., 2020; Wenhong et al., 2022; Fang et al., 2022; Sharma et al., 2022; Pan et al., 2022). However, most works focus on the target tracking problem, whereas this work attempts to solve the task of 3D HPE. Compared with the tracking task, 3D HPE has stricter criteria for optimal view selections due to correlations across multiple views, which necessitates intelligent collaboration between cameras. To our best knowledge, this work is the first to experiment with various camera agents (n ≥ 3) to learn multi-camera collaboration strategies for active 3D HPE.
3 PROACTIVE MULTI-CAMERA COLLABORATION
This section will explain the formulation of multi-camera collaboration in 3D HPE as a Dec-POMDP. Then, we will describe our proposed solutions for modelling the virtual environment’s complex dynamics and strengthening credit assignment in the multi-camera collaboration task.
3.1 PROBLEM FORMULATION
We formulate the multi-camera 3D HPE problem as a Decentralized Partially-Observable Markov Decision Process (Dec-POMDP), where each camera is considered as an agent which is decentralizedcontrolled and has partial observability over the environment. Formally, a Dec-POMDP is defined as ⟨S,O,An, n, P, r, γ⟩, where S denotes the global state space of the environment, including all human states and camera states in our problem. oi ∈ O denotes the agent i’s local observation, i.e., the RGB image observed by camera i. A denotes the action space of an agent and An represents the joint action space of all n agents. P : S × An → S is the transition probability function P (st+1|st,at), in which at ∈ An is a joint action by all n agents. At each timestep t, every agent obtains a local view oti from the environment s t and then preprocess oti to form i-th agent’s local
observation õti. The agent performs action a t i ∼ πti(·|õti) and receives its reward r(st,at). γ ∈ (0, 1] is the discount factor used to calculate the cumulative discounted reward G(t) = ∑
t′≥t γ t′−t r(t′).
In a cooperative team, the objective is to learn a group of decentralized policies {πi(ati|õti)} n i=1 that maximizes E(s,a)∼π[G(t)]. For convenience, we denote i as the agent index, JnK = {1, . . . , n} is the set of all n agents, and −i = JnK \ {i} are all n agents except agent i. Observation Camera agents have partial observability over the environment. The pre-processed observation õi = (pi, ξi, ξ−i) of the camera agent i consists of: (1) pi, a set of states of visible humans to agent i, containing information, including the detected human bounding-box in the 2D local view, the 3D positions and orientations of all visible humans measured in both local coordinate frame of camera i and world coordinates; (2) own camera pose ξi showing the camera’s position and orientation in world coordinates; (3) peer cameras poses ξ−i showing their positions and orientations in world coordinates and are obtained via multi-agent communication. Action Space The action space of each camera agent consists of the velocity of 3D egocentric translation (x, y, z) and the velocity of 2D pitch-yaw rotation (θ, ψ). To reduce the exploration space for state-action mapping, the agent’s action space is discretized into three levels across all five dimensions. At each timestep, the camera agent can move its position by [+δ, 0,−δ] in (x, y, z) directions and rotate about pitch-yaw axes by [+η, 0,−η] degrees. In our experiments, the camera’s pitch-yaw angles are controlled by a rule-based system.
3.2 FRAMEWORK
This section will describe the technical framework that constitutes our camera agents, which contains a Perception Module and a Controller Module. The Perception Module maps the original RGB images taken by the camera to numerical observations. The Controller Module takes these numerical observations and produces corresponding control signals. Fig. 2 illustrates this framework. Perception Module The perception module executes a procedure consisting of four sequential stages: (1) 2D HPE. The agent performs 2D human detection and pose estimation on the observed RGB image with the YOLOv3 (Redmon & Farhadi, 2018) detector and the HRNet-w32 (Sun et al., 2019) pose estimator, respectively. Both models are pre-trained on the COCO dataset, (Lin et al., 2014) and their parameters are kept frozen during policy learning of camera agents to ensure crossscene generalization. (2) Person ReID. A ReID model (Zhong et al., 2018c) is used to distinguish people in a scene. For simplicity, an appearance dictionary of all to-be-appeared people is built in advance following (Gärtner et al., 2020). At test time, the ReID network computes features for all detected people and identifies different people by comparing features to the pre-built appearance dictionary. (3) Multi-agent Communication. Detected 2D human pose, IDs, and own camera pose are broadcasted to other agents. (4) 3D HPE. 3D human pose is reconstructed via local triangulation after receiving communications from other agents. The estimated position and orientation of a person can then be extracted from the corresponding reconstructed human pose. The communication process is illustrated in Appendix Fig. 9.
Controller Module The controller module consists of a state encoder E and an actor network A. The state encoder E takes õti as input, encoding the temporal dynamics of the environment via LSTM. The future states of the target, pedestrians, and cameras are modelled using Mixture Density Network (MDN) (Bishop, 1994) to account for uncertainty. During model inference, it computes target position prediction, and then the (ϕ, µ, σ) parameters of target prediction MDN are used as a part of the inputs to the actor network to enhance feature encoding. Please refer to Section 3.4 for more details regarding training the MDN.
Feature Embedding: z_i^t = MLP(õ_i^t), (1)
Temporal Modeling: h_i^t = LSTM(z_i^t, h_i^{t−1}), (2)
Human Trajectory Prediction: p̂_{tgt/pd}^{t+1} = MDN(z_i^t, h_i^t, p_{tgt/pd}^t), (3)
Final Embedding: e_i^t = E(õ_i^t, h_i^{t−1}) = Concat(z_i^t, h_i^t, {(ϕ, µ, σ)}_{MDN_tgt}), (4)
where p_tgt and p_pd refer to the states of the target and the pedestrians, respectively. The actor network A consists of 2 fully-connected layers that output the action, a_i^t = A(E(õ_i^t, h_i^{t−1})).
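To make Eqs. 1-4 concrete, the following is a minimal sketch of the encoder's forward pass. It is not our exact implementation: the layer widths, the use of an LSTMCell, the 3-dimensional target-state input, and the small MLP that embeds the raw MDN parameters are illustrative assumptions.
import torch
import torch.nn as nn

class ControllerEncoder(nn.Module):
    """Sketch of the state encoder E (Eqs. 1-4); sizes are illustrative."""
    def __init__(self, obs_dim, tgt_dim=3, feat_dim=128, hidden_dim=128, n_comp=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU())   # Eq. 1
        self.lstm = nn.LSTMCell(feat_dim, hidden_dim)                       # Eq. 2
        # Eq. 3: MDN over the target's future x-y position; each of the n_comp
        # components outputs (phi, mu_x, sigma_x, mu_y, sigma_y).
        self.mdn_head = nn.Linear(feat_dim + hidden_dim + tgt_dim, n_comp * 5)
        # Embeds the raw MDN parameters before concatenation (Eq. 4).
        self.mdn_embed = nn.Sequential(nn.Linear(n_comp * 5, 128), nn.ReLU())

    def forward(self, obs, tgt_state, h, c):
        z = self.mlp(obs)                                          # z_i^t
        h, c = self.lstm(z, (h, c))                                # h_i^t
        mdn_params = self.mdn_head(torch.cat([z, h, tgt_state], -1))
        e = torch.cat([z, h, self.mdn_embed(mdn_params)], -1)      # e_i^t
        return e, (h, c), mdn_params
The actor network A then maps e_i^t to action logits with two fully-connected layers, as described above.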
3.3 REWARD STRUCTURE
To alleviate the credit assignment issue that arises in multi-camera collaboration, we propose the Collaborative Triangulation Contribution Reward (CTCR). We start by defining a base reward that reflects the reconstruction accuracy of the triangulated pose generated by the camera team. Then we explain how our CTCR is computed based on this base team reward.
Reconstruction Accuracy as a Team Reward To directly reflect the reconstruction accuracy, the reward function negatively correlates with the pose estimation error (Mean Per Joint Position Error, MPJPE) of the multi-camera triangulation. Formally,
r(X) = 0 if |X| ≤ 1, and r(X) = 1 − GM(MPJPE(X)) if |X| ≥ 2, (5)
where the set X represents the cameras participating in the triangulation and GM(·) is the Geman-McClure smoothing function, GM(e) = 2(e/c)^2 / ((e/c)^2 + 4), used to stabilize policy updates, with c = 50mm in our experiments. However, the shared team reward structure in our MAPPO baseline, where each camera in the entire camera team X receives the common reward r(X), presents a credit assignment challenge, especially when a camera is occluded, which reduces the reward for all cameras. To address this issue, we propose a new approach called Collaborative Triangulation Contribution Reward (CTCR).
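Before detailing CTCR, a minimal sketch of the base team reward in Eq. 5 is given below. The mpjpe callable is a hypothetical routine that triangulates the pose with the camera subset X and returns the error in millimetres; only the reward computation itself follows Eq. 5.
def geman_mcclure(e, c=50.0):
    # Geman-McClure smoothing: 2*(e/c)^2 / ((e/c)^2 + 4), bounded in [0, 2).
    r = (e / c) ** 2
    return 2.0 * r / (r + 4.0)

def team_reward(cameras, mpjpe):
    # Eq. 5: zero reward when fewer than two cameras participate in triangulation,
    # otherwise one minus the smoothed reconstruction error.
    if len(cameras) <= 1:
        return 0.0
    return 1.0 - geman_mcclure(mpjpe(cameras))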
Collaborative Triangulation Contribution Reward (CTCR) CTCR computes each agent’s individual reward based on its marginal contribution to the collaborative multi-view triangulation. Refer to Fig. 3 for a rundown of computing CTCR for a 3-cameras team. The contribution of agent i can be measured by:
CTCR(i) = n · φ_r(i),  φ_r(i) = Σ_{S ⊆ [n]\{i}} [ |S|! (n − |S| − 1)! / n! ] · [ r(S ∪ {i}) − r(S) ], (6)
where n denotes the total number of agents, S ranges over all subsets of [n] not containing agent i, |S|!(n − |S| − 1)!/n! is the normalization term, and [r(S ∪ {i}) − r(S)] is the marginal contribution of agent i. Note that Σ_{i∈[n]} φ_r(i) = r([n]). We additionally multiply by the constant n to rescale CTCR to the same scale as the team reward. In particular, in the 2-camera case the individual CTCR is equivalent to the team reward, i.e., CTCR(i = 1) = CTCR(i = 2) = r({1, 2}). For more explanations on CTCR, please refer to Appendix Section G.
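A minimal sketch of Eq. 6 is given below; r is the team reward of Eq. 5 evaluated on a camera subset, and the exhaustive subset enumeration is tractable here because the teams contain at most five cameras.
from itertools import combinations
from math import factorial

def ctcr(i, agents, r):
    # Shapley-style marginal contribution of camera i (Eq. 6), rescaled by n.
    n = len(agents)
    others = [j for j in agents if j != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += weight * (r(set(S) | {i}) - r(set(S)))
    return n * phi
As a sanity check, with n = 2 and r(∅) = r({1}) = r({2}) = 0 this reduces to CTCR(1) = CTCR(2) = r({1, 2}), matching the 2-camera special case noted above.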
3.4 LEARNING MULTI-CAMERA COLLABORATION VIA MARL
We employ the multi-agent learning variant of PPO (Schulman et al., 2017) called Multi-Agent PPO (MAPPO) (Yu et al., 2021) to learn the collaboration strategy. Alongside the RL loss, we jointly train the model with five auxiliary tasks that encourage comprehension of the world dynamics and the stochasticity in human behaviours. The pseudocode can be found in Appendix A.
World Dynamics Learning (WDL) We use the encoder's hidden states (z_i^t, h_i^t) as the basis for modelling the world. Three WDL objectives correspond to modelling agent dynamics: (1) learning the forward dynamics of the camera, P_1(ξ_i^{t+1} | z_i^t, h_i^t, a_i^t); (2) prediction of the team reward, P_2(r^t | z_i^t, h_i^t, a_i^t); (3) prediction of the future positions of peer agents, P_3(ξ_{−i}^{t+1} | z_i^t, h_i^t, a_i^t). Two WDL objectives correspond to modelling human dynamics: (4) prediction of the future position of the target person, P_4(p_tgt^{t+1} | z_i^t, h_i^t, p_tgt^t); (5) prediction of the future positions of pedestrians, P_5(p_pd^{t+1} | z_i^t, h_i^t, p_pd^t). All the probability functions above are approximated using Mixture Density Networks (MDNs) (Bishop, 1994).
Total Training Objectives L_Train = L_RL + λ_WDL · L_WDL. L_RL is the reinforcement learning loss consisting of the PPO-Clipped loss and the centralized-critic network loss, similar to MAPPO (Yu et al., 2021). L_WDL = −(1/n) Σ_l λ_l Σ_i E[log P_l(· | õ_i^t, h_i^t, a_i^t)] is the world dynamics learning loss consisting of the MDN supervised losses on the five prediction tasks mentioned above.
4 EXPERIMENT
In this section, we first introduce our novel environment, UNREALPOSE, used for training and testing the learned policies. Then we compare our method with multi-passive-camera baselines and perform an ablation study on the effectiveness of the proposed CTCR and WDL objectives. Additionally, we evaluate the effectiveness of the learned policies by comparing them against other active multi-camera methods. Lastly, we test our method in four different scenarios to showcase its robustness.
4.1 UNREALPOSE: A VIRTUAL ENVIRONMENT FOR PROACTIVE HUMAN POSE ESTIMATION
We built four virtual environments for simulating active HPE in the wild using Unreal Engine 4 (UE4), a powerful 3D game engine that provides real-time, photo-realistic rendering. The environments handle the interactions between realistically behaving human crowds and camera agents. We highlight three characteristics of UNREALPOSE: Realistic: diverse generation of human trajectories, built-in collision avoidance, and several scenarios with different human appearances, terrains, and levels of illumination. Flexibility: extensive configuration of the numbers of humans and cameras and of their physical properties, with more than 100 MoCap action sequences incorporated. RL-Ready: integrated with the OpenAI Gym API; we overhauled the communication module in the UnrealCV (Qiu et al., 2017) plugin with an inter-process communication (IPC) mechanism. For more detailed descriptions, please refer to Appendix Section B.
4.2 EVALUATION METRICS
We use Mean Per Joint Position Error (MPJPE) as our primary evaluation metric, which measures the difference between the ground-truth and reconstructed 3D poses on a per-frame basis. However, MPJPE alone may not give a complete picture of the robustness of a multi-camera collaboration policy, for two reasons: first, it may take the cameras multiple frames to adjust their perception quality; second, high peaks in MPJPE may be masked by the mean aggregation. To address this, we introduce the "success rate" metric, which evaluates the smooth execution and robustness of the learned policies. The success rate is calculated as the ratio of frames in an episode with MPJPE lower than a threshold τ: formally, SuccessRate(τ) = P(MPJPE ≤ τ). This metric is a temporal measure that reflects the integrity of multi-view coordination. Poor coordination may cause partial occlusions or too many overlapping perceptions, leading to a significant increase in MPJPE and a subsequent decrease in the success rate.
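A minimal sketch of the two metrics is given below, assuming pred and gt are arrays of per-frame 3D joints with shape (T, J, 3) in millimetres; the default threshold value is illustrative.
import numpy as np

def mpjpe_per_frame(pred, gt):
    # Mean per-joint position error for each frame: the Euclidean distance between
    # predicted and ground-truth joints, averaged over the J joints.
    return np.linalg.norm(pred - gt, axis=-1).mean(axis=-1)   # shape (T,)

def success_rate(pred, gt, tau=50.0):
    # Fraction of frames in the episode whose MPJPE stays below the threshold tau (mm).
    return float((mpjpe_per_frame(pred, gt) <= tau).mean())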
4.3 RESULTS AND ANALYSIS
The learning-based control policies were trained in a total of 28 instances of the BlankEnv, where each instance uniformly contained 1 to 6 humans. Each training run consisted of 700,000 steps, which corresponds to 1,000 training iterations. To ensure a fair evaluation, we report the mean metrics based on the data from the latest 100 episodes, each comprising 500 steps. The experiments were conducted in a 10m× 10m area, where the cameras and humans interacted with each other.
Active vs. Passive To show the necessity of proactive camera control, we compare the active camera control methods with three passive methods, i.e., Fixed Camera, Fixed Camera (RANSAC), and Fixed Camera (PlaneSweepPose). “Fixed Cameras” denotes that the poses of the cameras are fixed, hanging 3m above ground and −35◦ camera pitch angles. The placements of these fixed cameras are carefully determined with strong priors, e.g., right-angle, triangle, square, and pentagon formations for 2, 3, 4, and 5 cameras, respectively. “RANSAC” denotes the method that uses RANSAC (Fischler & Bolles, 1981) for enhanced triangulation. “PlaneSweepPose” represents the off-the-shelf learning-based method (Lin & Lee, 2021b) for multi-view 3D HPE. Please refer to Appendix E.2 for more implementation details. We show the MPJPE and Success Rate versus a
different number of cameras in Fig. 5. We observe that all passive baselines are outperformed by the active approaches due to their inability to adjust camera views against dynamic occlusions. The improvement of the active approaches is especially significant when fewer cameras are used, i.e., when the camera system has little or no redundancy against occlusions. Notably, the MPJPE attained by our 3-camera policy is even lower than the MPJPE from 5 fixed cameras. This suggests that proactive strategies can help reduce the number of cameras necessary for deployment.
The Effectiveness of CTCR and WDL We also perform ablation studies on the two proposed modules (CTCR and WDL) to analyze their contributions to performance. We take "MAPPO" as the active-camera baseline for comparison, which is our method trained instead with a shared global reconstruction reward and without world dynamics modelling. Fig. 5 shows a consistent performance gap between the "MAPPO" baseline and our method (MAPPO + CTCR + WDL). The proposed CTCR mitigates the credit assignment issue by computing the weighted marginal contribution of each camera. CTCR also promotes faster convergence; training curves are shown in Appendix Fig. 13. Training with the WDL objectives further improves the MPJPE metric for our 2-camera model. However, its supporting effect gradually weakens as the number of cameras increases. We argue that this is caused by the more complex dynamics involved when more cameras simultaneously interact in the same environment. Notably, we observe that agents trained with WDL generalize better to unseen scenes, as shown in Fig. 6.
Versus Other Active Methods To show the effectiveness of the learned policies, we further compare our method with other active multi-camera formation control methods in 3-cameras BlankEnv. “MAPPO” (Yu et al., 2021) and AirCapRL (Tallamraju et al., 2020) are two learning-based methods based on PPO (Schulman et al., 2017). The main difference between these two methods is the reward shaping technique, i.e., AirCapRL additionally employs multiple carefully-designed rewards (Tallamraju et al., 2020) for learning. We also programmed a rule-based fixed formation control method (keeping an equilateral triangle, spread by 120◦) to track the target person. Results are shown in Table 1. Interestingly, these three baselines achieve comparable performance. Our method outperforms them, indicating a more effective multi-camera collaboration strategy for 3D HPE. For example, our method learns a spatially spread-out formation while automatically adjusting to avoid impending occlusion.
Generalize to Various Scenarios We train the control policies in BlankEnv while testing them in three other realistic environments (SchoolGym, UrbanStreet, and Wilderness) to evaluate their generalizability to unseen scenarios. Fig. 6 shows that our method consistently outperforms baseline
methods with lower variance in MPJPE during the evaluations in three test environments. We report the results in the BlankEnv as a reference.
Qualitative Analysis In Figure 7, we show six examples of the emergent camera formations under policies trained with the proposed methods (CTCR + WDL). The camera agents learn to spread out and ascend above the humans to avoid occlusions and collisions. Their placements in an emergent formation are not assigned by any other entity but rather determined by the decentralized control policies themselves, based on local observations and agent-to-agent communication. For more vivid examples of emergent formations, please refer to the project website for demo videos.1 For more analysis on the behaviour modes of our 3-, 4- and 5-camera models, please refer to Appendix Section H.
5 CONCLUSION AND DISCUSSION
To our knowledge, this paper presents the first proactive multi-camera system targeting 3D human pose reconstruction in dynamic crowds. It is also the first study of proactive 3D HPE that experiments with multi-camera collaboration at different scales and in different scenarios. We propose CTCR to alleviate the multi-agent credit assignment issue when the camera team scales up, and we identify multiple auxiliary tasks that improve the representation learning of complex dynamics. As a final note, we release our virtual environments and the visualization tool to facilitate future research.
Limitations and Future Directions Admittedly, a couple of aspects of this work have room for improvement. Firstly, the camera agents receive their teammates' poses via non-disrupting broadcasts, which may be prone to packet losses during deployment. One idea is to incorporate a specialized protocol for multi-agent communication into our pipeline, such as ToM2C (Wang et al., 2022). Secondly, to reduce communication bandwidth (for example, to eliminate the need for image transmission between cameras), the current pipeline includes a human re-identification module that requires a pre-scanned appearance memory of all human subjects who may appear; the current ReID module may fail to recognize out-of-distribution appearances, although ACTIVE3DPOSE can accommodate a more sophisticated ReID module (Deng et al., 2018) to resolve this shortcoming. Thirdly, the camera control policy requires accurate camera poses, which may require a robust SLAM system (Schmuck & Chli, 2017; Zhong et al., 2018b) that works in dynamic environments with multiple cameras. Fourthly, the motion patterns of targets in the virtual environments are based on manually designed animations, which may lead to poor generalization of the agents to unseen motion patterns; in the future, we can enrich the diversity by incorporating a cooperative-competitive multi-agent game (Zhong et al., 2021) into training. Lastly, we assume near-perfect calibration for a group of mobile cameras, which might be complicated to sustain in practice. Fortunately, there is rising interest in parameter-free pose estimation (Gordon et al., 2022; Ci et al., 2022), which does not require online camera calibration and may help resolve this limitation.
1Project Website for demo videos: https://sites.google.com/view/active3dpose
6 ETHICS STATEMENT
Our research into active 3D HPE technologies has the potential to bring many benefits, such as biomechanical analysis in sports and automated video-assisted coaching (AVAC). However, we recognize that these technologies can also be misused for repressive surveillance, leading to privacy infringements and human rights violations. We firmly condemn such malicious acts and advocate for the fair and responsible use of our virtual environment, UNREALPOSE, and all other 3D HPE technologies.
ACKNOWLEDGEMENT
The authors would like to thank Yuanfei Wang for discussions on world models; Tingyun Yan for his technical support on the first prototype of UNREALPOSE. This research was supported by MOST-2022ZD0114900, NSFC-62061136001, China National Post-doctoral Program for Innovative Talents (Grant No. BX2021008) and Qualcomm University Research Grant.
A TRAINING ALGORITHM PSEUDOCODE
Algorithm 1 Learning Multi-Camera Collaboration (CTCR + WDL)
1: Initialize: n agents with a tied-weights MAPPO policy π, mixture density models (MDNs) for the WDL prediction models {(P_self, P_reward, P_peer, P_tgt, P_pd)}_π, and E parallel environment rollouts
2: for Iteration = 1, 2, . . . , M do
3:   In each of the E environment rollouts, each agent i ∈ [n] collects a trajectory of length T: τ = [(õ_i^t, õ_i^{t+1}, a_i^t, r_i^t, h_i^{t−1}, s^t, a^{t−1}, r^t)]_{t=1}^T
4:   Substitute the individual reward of each agent r^t with CTCR: r_i^t ← CTCR(i) (Eq. 6)
5:   For each step in τ, compute advantage estimates Â_i^1, . . . , Â_i^T with GAE (Schulman et al., 2015) for each agent i ∈ [n]
6:   Yield a training batch D of size E × T × n
7:   for Mini-batch SGD Epoch = 1, 2, . . . , K do
8:     Sample a stochastic mini-batch of size B from D, where B = |D|/K
9:     Compute z_i^t and h_i^t using the encoder model E_π
10:    Compute the PPO-CLIP objective loss L_PPO, the global critic value loss L_Value, and the adaptive KL loss L_KL
11:    Compute the objectives that constitute L_WDL:
       • Self-State Prediction Loss L_self = −E_τ[log P(ξ_i^{t+1} | z_i^t, h_i^t, a_i^t)]
       • Reward Prediction Loss L_reward = −E_τ[log P(r^t | z_i^t, h_i^t, a_i^t)]
       • Peer-State Prediction Loss L_peer = −E_τ[log P(ξ_{−i}^{t+1} | z_i^t, h_i^t, a_i^t)]
       • Target Prediction Loss L_tgt = −E_τ[log P(p_tgt^{t+1} | z_i^t, h_i^t, p_tgt^t)]
       • Pedestrians Prediction Loss L_pd = −E_τ[log P(p_pd^{t+1} | z_i^t, h_i^t, p_pd^t)]
12:    L_Train = λ_PPO L_PPO + λ_Value L_Value + β_KL L_KL + λ_WDL L_WDL
13:    Optimize L_Train w.r.t. the current policy parameters θ_π
B UNREALPOSE: ACCESSORIES AND MISCELLANEOUS ITEMS
Our UnrealPose virtual environment supports different active vision tasks, such as active human pose estimation and active tracking. This environment also supports various settings ranging from single-target single-camera settings to multi-target multi-camera settings. Here we provide a more detailed description of the three key characteristics of UnrealPose:
Realistic The built-in navigation system governs the collision avoidance movements of virtual humans against dynamic obstacles. In the meantime, it also ensures diverse generations of walking trajectories. These features enable the users to simulate a realistic-looking crowd exhibiting socially acceptable behaviors. We have also provided several pre-set scenes, e.g., school gym, wilderness, urban crossing, and etc. These scenes have notable differences in illuminations, terrains, and crowd appearances to reflect the dramatically different-looking scenarios in real-life.
Extensive Configuration The environment can be configured with different numbers of humans and cameras and swapped across other scenarios with ease, which we demonstrated in Fig 4(a-d). Besides simulating walking human crowds, the environment incorporates over 100 Mocap action sequences with smooth animation interpolations to enrich the data variety for other MoCap tasks.
RL-Ready We use the UnrealCV (Qiu et al., 2017) plugin as the medium to acquire images and annotations from the environment. The original UnrealCV plugin suffers from unstable data transfer and unexpected disconnections under high CPU workloads. To ensure fast and reliable data acquisition for large-scale MARL experiments, we overhauled the communication module in the UnrealCV plugin with inter-process communication (IPC) mechanism, which eliminates the aforementioned instabilities.
B.1 VISUALIZATION TOOL
We provide a visualization tool to facilitate per-frame analysis of the learned policy and reconstruction results (shown in Fig. 8). The main interface consists of four parts: (1) Live 2D views from all cameras. (2) 3D spatial view of camera positions and reconstructions. (3) Plot of statistics. (4) Frame control bar. This visualization tool supports different numbers of humans and cameras. Meanwhile, it is written in Python to support easy customization.
B.2 LICENSE
All assets used in the environment are commercially-available and obtained from the UE4 Marketplace. The environment and tools developed in this work are licensed under Apache License 2.0.
C OBSERVATION PROCESSING
Fig.9 shows the pipeline of the observation processing. Each camera observes an RGB image and detects the 2D human poses and IDs via the Perception Module described in the main paper. The camera pose, 2D human poses and IDs are then broadcast to other cameras for multi-view 3D triangulation. The human position is calculated as the median of the reconstructed joints. Human orientation is calculated from the cross product of the reconstructed shoulder and spine.
D IMPLEMENTATION DETAILS
D.1 TRAINING DETAILS
All control policies are trained in the BlankEnv scene. At the testing stage, we apply zero-shot transfer of the learned policies to three realistic scenes: SchoolGym, UrbanStreet, and Wilderness.
To simulate a dynamic human crowd exhibiting highly random behaviors, we sample arbitrary goals for each human and employ the built-in navigation system to generate collision-free trajectories. Each human walks at a random speed. To ensure generalization across different numbers of humans, we train our RL policy with a mixture of environments containing 1 to 6 humans. The learning rate is set to 5 × 10^−4 with scheduled decay during the training phase. The annealing schedule for the learning rate is detailed in Table 2. The maximum episode length is 500 steps, the discount factor γ is 0.99, and the GAE horizon is 25 steps. Each sampling iteration produces a training batch of 700 steps; we then perform 16 iterations on every training batch with 2 SGD mini-batch updates per iteration (i.e., SGD batch size = 350).
Table 2 shows the common training hyper-parameters shared between the baseline models (MAPPO) and all of our methods. Table. 4 shows the hyperparameters for the WDL module.
D.2 DIMENSIONS OF FEATURE TENSORS IN THE CONTROLLER MODULE
Table 3 serves as a complementary description for Eqns. 1-4. The table shows the dimensions of the feature tensors used in the controller module. "B" denotes the batch size. In the current model design, the dimension of the local observation is adjusted based on the maximum number of camera agents (N_cammax) and the maximum number of observable humans (N_humanmax) in an environment. In our experiments, N_humanmax has been set to 7. The observation pre-processor zero-pads each observation to a length equal to N_humanmax × 18 if the current environment instance has fewer than N_humanmax humans. "9" and "18" correspond to the feature lengths for a camera and a human, respectively. The MDN of the Target Prediction module has 16 Gaussian components, each of which outputs (ϕ, µx, σx, µy, σy), where ϕ is the weight parameter of the component. The current implementation of the MDN only predicts the x-y location of a human, which is a simplification justified by the fact that the z coordinate of a simulated human barely changes across an episode compared to the x and y coordinates. The dimension of an MDN output is therefore 80, and the point prediction is produced by ϕ-weighted averaging. In Eqn. 4, {(ϕ, µ, σ)}_MDN_tgt is an encoded feature produced by passing the MDN output through a 2-layer MLP with an output dimension of 128.
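A minimal sketch of how a point prediction is read out from the 16-component MDN output by ϕ-weighted averaging is given below; the (ϕ, µx, σx, µy, σy) parameter layout per component follows the description above, while the softmax normalisation of ϕ is an assumption of this sketch.
import torch

def mdn_point_prediction(mdn_out):
    # mdn_out: tensor of shape (B, 16 * 5), one (phi, mu_x, sigma_x, mu_y, sigma_y)
    # tuple per Gaussian component.
    params = mdn_out.view(-1, 16, 5)
    phi = torch.softmax(params[..., 0], dim=-1)        # component weights
    mu_xy = params[..., [1, 3]]                        # (B, 16, 2) component means
    return (phi.unsqueeze(-1) * mu_xy).sum(dim=1)      # phi-weighted mean, shape (B, 2)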
D.3 COMPUTATIONAL RESOURCES
We used 15 Ray workers for each experiment to ensure consistency in the training procedure. Each worker carries a Gym vectorized environment consisting of 4 actual environment instances. Each worker demands approximately 3.7GB of VRAM. We run each experiment with 8 NVIDIA RTX 2080 Ti GPUs. Depending on the number of camera agents, the total training hours required for an experiment to run 500k steps will vary between 4 to 12 hours.
D.4 TRAINING CURVE
Fig. 13 shows the training curves of our method and the baseline method. We can find that our methods converge faster and improve reconstruction accuracy than the baseline method.
D.5 TOTAL INFERENCE TIME
Our solution can run in real-time. Table 5 reports the inference time for each module of the proposed Active3DPose pipeline.
E ADDITIONAL EXPERIMENT RESULTS
E.1 ABLATION STUDY ON WDL OBJECTIVES
We perform a detailed ablation study regarding the effect of each WDL sub-task on the model performance. As shown in Fig. 11, we can observe that the MPJPE metric gradually decreases as we incorporate more WDL losses. This aligns with our assumptions that training the model with world dynamics learning objectives will promote the model’s ability to capture a better representation of future states, which in turn increases performance. Our method additionally demonstrates the importance of incorporating information regarding the target’s future state into the encoder’s output features. Predicting the target’s future states should not only be used as an auxiliary task but should also directly influence the inference process of the actor model.
E.2 BASELINE — FIXED-CAMERAS
In addition to the triangulation and RANSAC baselines introduced in the main text, we compare and elaborate on two more baselines that use fixed cameras: (1) fixed cameras with RANSAC-based triangulation and temporal smoothing (TS) (2) an off-shelf 3D pose estimator PlaneSweepPose (Lin & Lee, 2021a).
In the temporal smoothing baseline, we applied a low-pass filter (Casiez et al., 2012) and temporal fusion where the algorithm will fill in any missing key points in the current frame with the detected key points from the last frame.
In the PlaneSweep baseline, as per the official instructions, we train three separate models (3 cams to 5 cams) with the same camera setup as in our testing scenarios. We have tested the trained models in different scenarios and reported the MPJPE results in Table 6, 7, 8 and 9. Note that, this off-shelf pose estimator performs better than the Fixed-Camera Baseline (Triangulation) but still underperforms compared to our active method. Fig. 12 illustrates the formations of the fixed camera baselines.
Camera placements for all fixed-camera baselines are shown in Fig. 12. These formations are carefully designed so as not to disadvantage the fixed-camera baselines. In particular, the 5-camera pentagon formation helps the fixed-camera baseline obtain satisfactory performance in the 5-camera setting, as shown in Tables 6, 7, 8 and 9.
E.3 OUR METHOD ENHANCED WITH RANSAC-BASED TRIANGULATION
RANSAC is a generic technique that can be used to improve triangulation performance. In this experiment, we also train and test our model with RANSAC. The final result (Table 11) shows a further improvement on our original triangulation version.
F GENERATING SAFE AND SMOOTH TRAJECTORIES
F.1 COLLISION AVOIDANCE
In order to generate safe trajectories, in this section, we introduce and evaluate two different ways to enforce collision avoidance between cameras and humans.
Obstacle Collision Avoidance (OCA) OCA resembles a feed-forward PID controller on the final control outputs before execution. Concretely, OCA adds a constant reverse quantity to the control output if it detects any surrounding objects within its safety range. This “detouring” mechanism safeguards the cameras from possible collisions and prevents them from making dangerous maneuvers.
Action-Masking (AM) AM also resembles a feed-forward controller but is instead embedded into the forward step of the deep-learning model. At each step, AM module first identifies the dangerous actions among all possible actions, then modifies the probabilities (output by the policy model) of choosing the hazardous actions to be zero so that the learning-based control policy will never pick them. Note that AM must be trained with the MARL policy model.
We propose the minimum camera-human distance (computed over both the target and the pedestrians) as the safety metric. It measures the distance between the camera and the closest human at each timestep. Fig. 14 shows the histograms of the minimum camera-human distance sampled over five episodes of 500 steps each.
F.2 TRAJECTORY SMOOTHING
Exponential Moving Average (EMA) To generate smooth trajectories, we apply EMA to the outputs of the learned policy model. EMA is a common technique for smoothing time-series data. In our case, the smoothing operator is defined as:
â_t = â_{t−1} + η · (a_t − â_{t−1}),
where a_t is the action (or control signal) output by the model at the current step, â_{t−1} is the smoothed action from the previous step, and η is the smoothing factor; a smaller η results in greater smoothness. â_t is the smoothed action that the camera executes.
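A minimal sketch of this smoother is given below; the default η value and the stateful-class design are illustrative assumptions, and the action is assumed to be a numeric array supporting elementwise arithmetic.
class ActionEMA:
    # Exponential moving average over successive control outputs; smaller eta -> smoother.
    def __init__(self, eta=0.3):
        self.eta = eta
        self.prev = None

    def __call__(self, action):
        if self.prev is None:
            self.prev = action                                  # first step: no smoothing
        else:
            self.prev = self.prev + self.eta * (action - self.prev)
        return self.prev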
F.3 ROBUSTNESS OF THE LEARNED POLICY
In this section, we evaluate the robustness of our model on different external perturbations to the control signal. In conclusion, our model shows resilience against the delay effect and random noise.
Delay EMA also brings a delay effect while smoothing the generated trajectory. The level of smoothing positively correlates with a larger delay factor. Here, we evaluate our model’s robustness to the EMA simulated delay.
Random Action Noise Control devices in real-life are inevitably affected by errors. For example, the control signal of a controller may be over-damped or may overshoot. We simulated this type of random error by multiplying the output action by a uniformly-sampled noise.
In Table 12, we report the effects of EMA delay and random action noise on the reconstruction accuracy of our model (marked as "Vanilla").
G MORE EXPLANATIONS ON CTCR
Figure 3 is an example of using Eq. 6 to compute CTCR for each of three cameras. CTCR is inspired by the Shapley value. The main idea is that overall optimality also needs to account for the optimality of every possible sub-formation. For a camera agent to receive the highest CTCR possible, its current position and view must be optimal both for the current formation and for every possible sub-formation.
Note: a group of collaborating players is often referred to as a "coalition" in other literature. Here we apply the same concept to a group of cameras, so we use the more intuitive term "formation" instead.
Eq. 6 can be further broken down as follows:
φ_r(i) = Σ_{S ⊆ [n]\{i}} [ |S|! (n − |S| − 1)! / n! ] · [ r(S ∪ {i}) − r(S) ]
       = (1/n) Σ_{S ⊆ [n]\{i}} [ |S|! (n − 1 − |S|)! / (n − 1)! ] · [ r(S ∪ {i}) − r(S) ]
       = (1/n) Σ_{S ⊆ [n]\{i}} C(n − 1, |S|)^{−1} · [ r(S ∪ {i}) − r(S) ],
where [n] = {1, 2, . . . , n} denotes the set of all cameras and the binomial coefficient C(n, k) = n! / (k!(n − k)!), 0 ≤ k ≤ n. S denotes a formation (a subset) that does not contain camera i. C(n − 1, |S|) is the number of such subsets of size |S|, which serves as a normalization term. r(S) computes the reconstruction accuracy of formation S, and [r(S ∪ {i}) − r(S)] computes the marginal improvement after adding camera i to sub-formation S. This equation therefore iterates over all possible S, computes the marginal contribution of camera i, and averages over all possible combinations of (S, i).
Suppose we have a 3-camera formation, as shown in Figure 3, so n = 3. Let us name the cameras (1, 2, 3) and focus on Camera 1. Since we are computing the average marginal contribution of Camera 1, we look at formations that do not contain Camera 1, because we want to measure the increase in performance that results from adding Camera 1 to those formations. Four formations satisfy this condition: S ⊆ [n] \ {1} → S ∈ (∅, {2}, {3}, {2, 3}). The binomial coefficient C(n − 1, |S|) for a 2-camera sub-formation in the 3-camera case is C(2, 2) = 1, which makes sense because there exists only one 2-camera combination that does not contain Camera 1, namely {2, 3}. r({2, 3}) computes the reconstruction accuracy of formation {2, 3}, and r({2, 3} ∪ {1}) computes the reconstruction accuracy after adding Camera 1 to sub-formation {2, 3}; their difference gives the marginal contribution of Camera 1. Summing over all subsets S of [n] not containing Camera 1 and dividing by C(n − 1, |S|) and by the number of cameras n yields the average marginal contribution φ_r({1}) of Camera 1 to the collaborative triangulated reconstruction. Multiplying this term by n gives CTCR(1), as in Eq. 6.
H ANALYSIS ON MODES OF BEHAVIORS OF THE TRAINED AGENTS
Here in Figure 15 we provide statistics and analysis on the behaviour modes of the agents controlled by our 3-, 4- and 5-camera policies, respectively. We are interested in understanding the characteristics of the emergent formations learned by our model. We therefore propose three quantitative measures to understand the topology of the emergent formations: (1) the min-camera angle between camera orientations and its per-frame mean, (2) the camera pitch angle, and (3) the camera-human distance. Their rigorous definitions are as follows:
min-camera angle(i) = min_{j ≠ i} ⟨axis of camera i, axis of camera j⟩,
per-frame mean of min-camera angle = (1/n) Σ_{i∈[n]} min-camera angle(i).
In simpler terms, min-camera angle(i) finds the minimum angle between camera i and any other camera j, and the "per-frame mean of min-camera angle" is the mean of min-camera angle(i) over all cameras i in one frame. A positive camera pitch angle means looking upward, and a negative pitch angle means looking downward. The camera-human distance measures the distance between the target human and the given camera.
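A minimal sketch of how these per-frame statistics can be computed is given below, assuming axes is an (n, 3) array of camera-orientation vectors for a single frame; the degree conversion is an assumption made for readability of the histograms.
import numpy as np

def min_camera_angles(axes):
    # Pairwise angles between camera orientation axes; for each camera keep the
    # smallest angle to any other camera, then average over cameras for the frame.
    axes = axes / np.linalg.norm(axes, axis=1, keepdims=True)
    cos = np.clip(axes @ axes.T, -1.0, 1.0)
    ang = np.degrees(np.arccos(cos))
    np.fill_diagonal(ang, np.inf)                 # ignore the self-angle of each camera
    per_cam_min = ang.min(axis=1)                 # min-camera angle(i)
    return per_cam_min, per_cam_min.mean()        # per-frame mean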
Regarding the distance between cameras and humans, the camera agents actively adjust their positions relative to the target human. The cameras learn to keep a camera-human distance between 2m and 3m, which is neither too distant from the target human nor so close as to violate the safety constraint. The camera agents surround the target at an appropriate distance to obtain better resolution in the 2D views. In the meantime, as the safe-distance constraint is enforced during training, the camera agents are prohibited from aggressively minimizing the camera-human distance.
The histograms for the camera pitch angle suggest that the cameras mostly maintain negative pitch angles. Their preferred strategy is to hover over the humans and capture images from a higher altitude, likely because of emergent occlusion avoidance. The cameras also learn to fly level with the humans (where the pitch angle is approximately 0) to capture more accurate 2D poses of the target human; this propensity is apparent from the peaks at 0 degrees in the histograms. The relatively wide pitch-angle distributions suggest that the camera formation is spread out in 3D space and that the camera agents dynamically adjust their flying heights and pitch angles.
For the average angle between the cameras' orientations, this statistic shows that the cameras in various formations maintain reasonable non-zero spatial angles between one another. Their camera views are therefore less likely to coincide and provide more diverse perspectives for a more reliable 3D triangulation. | 1. What is the focus and contribution of the paper regarding 3D multi-person human pose estimation?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its performance and safety concerns?
3. Do you have any concerns or questions regarding the simulation environment and real-world applicability of the method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a proactive multi-camera collaboration method for 3D multi-person human pose estimation with unmanned aerial vehicles (UAVs). The method is based on multi-agent reinforcement learning that treats cameras as agents and utilizes a world dynamics model to improve the performance of the model. Specifically, by additionally predicting the dynamics of camera agents and humans, the paper is able to extract more useful features for the control of the camera agents. Inspired by prior work, the paper also proposes a collaborative triangulation contribution reward (CTCR) that uses the shapley value to improve the credit assignment of reconstruction quality to agents. The paper builds its own RL environment based on Unreal Engine 4 (UE4) where human crowds are simulated with collision avoidance in various scenes. Experiments in this environment demonstrate the superiority of the proposed approach over the baselines. Extensive ablation studies are also carried out to validate the design of the method.
Strengths And Weaknesses
Strength:
The problem the paper aims to solve is interesting and challenging, i.e. reconstructing dynamic human motion in the wild where occlusions often happen and static camera formations are not sufficient to address them.
Extensive experiments and ablation studies are done in the simulated environment, which shows the method outperforms the baselines and also validates the usefulness of the proposed techniques.
The paper also plans to open-source the UE4-based RL environment used in the method, which would be beneficial to future research on proactive multi-camera human pose estimation.
Weakness:
The method is not evaluated with safety metrics such as collision or interference with humans, which is rather important given the trajectory produced by the method could potentially harm the subjects.
The simulation does not consider the dynamics of real-world UAVs and many of the jittery trajectories produced by the method might not be realizable by UAVs.
A major concern of mine is that all the experiments are performed in simulation without real-world validation of the proposed approach. Given the method is potentially unsafe and the trajectories might be unrealizable, it is important to demonstrate its real-world applicability as prior work (e.g., AirCapRL) does.
Clarity, Quality, Novelty And Reproducibility
The paper is generally well-written and easy to read.
The paper has good conceptual and technical novelty since it addresses a challenging new task with some new techniques such as CTCR and world dynamics learning.
The paper would be hard to reproduce without a full release of the training code and pre-trained models, given the difficulty of reproducing multi-agent RL experiments.
ICLR | Title
Understanding Catastrophic Overfitting in Fast Adversarial Training From a Non-robust Feature Perspective
Abstract
To make adversarial training (AT) computationally efficient, FGSM AT has attracted significant attention. The fast speed, however, is achieved at the cost of catastrophic overfitting (CO), whose cause remains unclear. Prior works mainly study the significant drop in PGD accuracy (Acc) to understand CO while paying less attention to FGSM Acc. We highlight an intriguing CO phenomenon that FGSM Acc is higher than the accuracy on clean samples and attempt to apply the non-robust feature (NRF) framework to understand it. Our investigation of CO, which extends the existing NRF framework into a fine-grained categorization, suggests that there exists a certain type of NRF whose usefulness is increased after FGSM attack, and that CO in FGSM AT can be seen as a dynamic process of learning such NRFs. Therefore, the key to preventing CO lies in reducing their usefulness under FGSM AT, which sheds new light on understanding the success of a SOTA technique for mitigating CO.
1 INTRODUCTION
Despite impressive performance, deep neural networks (DNNs) (LeCun et al., 2015; He et al., 2016; Huang et al., 2017; Zhang et al., 2019a; 2021) are widely recognized to be vulnerable to adversarial examples (Szegedy et al., 2013; Biggio et al., 2013; Akhtar & Mian, 2018). Without giving a false sense of robustness against adversarial attacks (Carlini & Wagner, 2017; Athalye et al., 2018; Croce & Hein, 2020), adversarial training (AT) (Madry et al., 2018; Zhang et al., 2019c) has become the de facto standard approach for obtaining an adversarially robust model via solving a min-max problem in two-step manner. Specifically, it first generates adversarial examples by maximizing the loss, then trains the model on the generated adversarial examples by minimizing the loss. PGD-N AT (Madry et al., 2018; Zhang et al., 2019c) is a classical AT method, where N is the iteration steps when generating the adversarial samples in inner maximization. Notably, PGD-N AT is N times slower than its counterpart standard training with clean samples. A straightforward approach to make AT faster is to set N to 1, i.e reducing the attack in the inner maximization from multi-step PGD to single-step FGSM (Goodfellow et al., 2015). For simplicity, PGD-based AT and FGSM-based fast AT are termed PGD AT and FGSM AT, respectively.
FGSM AT often fails with a sudden robustness drop against PGD attack while maintaining its robustness against FGSM attack, which is called catastrophic overfitting (CO) (Wong et al., 2020). With Standard Acc denoting the accuracy on clean samples while FGSM Acc and PGD Acc indicating the accuracy under FGSM and PGD attack, we emphasize that a CO model is characterized by two main phenomena as follows.
• Phenomenon 1: The PGD Acc drops to a value close to zero when CO happens (Wong et al., 2020; Andriushchenko & Flammarion, 2020).
• Phenomenon 2: FGSM Acc is higher than Standard Acc for a CO model (Kim et al., 2020; Andriushchenko & Flammarion, 2020).
Multiple works (Wong et al., 2020; Kim et al., 2020; Andriushchenko & Flammarion, 2020) have focused on understanding CO by explaining the drop of PGD Acc in Phenomenon 1; however, they pay less attention to Phenomenon 2 regarding FGSM Acc. Specifically for Phenomenon 1, FGSM-RS (Wong et al., 2020) attributes it to the lack of perturbation diversity in FGSM AT, which
is refuted by a follow-up GradAlign (Andriushchenko & Flammarion, 2020) by demonstrating a co-occurrence of local non-linearity and the PGD Acc drop. However, these understandings cannot explain why FGSM Acc is higher than Standard Acc for a CO model in Phenomenon 2.
In the context of adversarial learning, numerous works (Goodfellow et al., 2015; Tabacof & Valle, 2016; Tanay & Griffin, 2016; Koh & Liang, 2017; Nakkiran, 2019; Athalye et al., 2018; Zhang et al., 2020) have attempted to explain why adversarial examples exist from different angles, among which non-robust feature (NRF) (Ilyas et al., 2019) is a popular one which also aligns well with all other explanations (Goodfellow et al., 2015; Tabacof & Valle, 2016; Tanay & Griffin, 2016; Koh & Liang, 2017; Nakkiran, 2019; Athalye et al., 2018). Such compatibility suggests that the NRF perspective constitutes an essential tool for understanding adversarial vulnerability, to which CO is also directly related. Specifically, the authors of (Ilyas et al., 2019) define the positive-correlation between features and true labels as feature usefulness (see Section 3.1 for more detailed definitions). Therefore, the adversarial vulnerability of DNNs is attributed to the existence of non-robust features (NRFs), which can be made anti-correlated with the true label under adversary. This understanding of NRFs in (Ilyas et al., 2019) well aligns with the fact that a CO model achieves close to zero robustness against PGD attack, and thus motivates us to believe that the NRF perspective might be an auspicious direction for understanding CO in FGSM AT.
The NRF in (Ilyas et al., 2019) is defined with respect to PGD attack, which we follow in this work; however, we extend their NRF framework by additionally considering FGSM attack for a fine-grained categorization. Considering the difference in adversarial attack strength between FGSM and PGD attack, GradAlign (Andriushchenko & Flammarion, 2020) explains Phenomenon 1 by demonstrating how well the attack variant (FGSM or PGD) can solve the inner maximization problem in AT. We start our investigation by providing an alternative interpretation of this strength difference between the two attack variants within the NRF framework (Ilyas et al., 2019), named strength-based NRF categorization. Despite aligning well with Phenomenon 1, we find that this strength-based categorization cannot explain Phenomenon 2, since the usefulness of these NRFs is decreased under FGSM attack, which leads to a decrease (instead of the increase in Phenomenon 2) of classification accuracy on FGSM adversarial examples relative to clean samples.
To understand Phenomenon 2 in CO from the NRF perspective, we conjecture that there exists a type of NRF whose usefulness is increased under FGSM attack, and which can thus lead to a higher FGSM Acc than Standard Acc (Phenomenon 2). In other words, if such a type of NRF (NRF2 in the following categorization) exists, Phenomenon 2 can be justified. Considering whether the usefulness is decreased or increased under FGSM attack, we propose a direction-based NRF categorization where NRF2 (NRF1) leads to an increase (decrease) of classification accuracy under FGSM attack. To prove the existence of NRF2, we follow the procedure for verifying the existence of NRFs in (Ilyas et al., 2019). Moreover, we show that NRF2 can cause a significant PGD Acc drop, which also helps justify Phenomenon 1 in CO.
Overall, towards understanding CO in FGSM AT, our contributions are summarized as follows:
• Our work shifts the previous focus from PGD Acc in Phenomenon 1 to FGSM Acc in Phenomenon 2 for understanding CO. Given that NRF is a popular perspective on adversarial vulnerability, we are the first to attempt to apply it to explain Phenomenon 2.
• We extend the existing NRF framework under PGD attack (Ilyas et al., 2019) to more fine-grained NRF categorization by FGSM attack. We verify the existence of NRF2 and show that its existence well justifies Phenomenon 2 (as well as Phenomenon 1).
• Very recent works show that adding noise to the image input achieves SOTA performance for FGSM AT. However, the mechanism by which such a simple technique prevents CO remains not fully clear; our NRF2 perspective sheds new light on its success.
2 PROBLEM OVERVIEW AND RELATED WORK
2.1 FGSM AT AND EXPERIMENTAL SETUPS
Let D denote a data distribution with (x, y) pairs and f(·, θ) parameterized by θ denote a deep model. For standard training, the model f(·, θ) is trained on D by minimizing E(x,y)∼D[l(f(x, θ), y)], where l indicates a cross-entropy loss for a typical multi-class classification task. Adversarial training
(AT) (Madry et al., 2018) for obtaining a robust model is formalized as a min-max optimization problem:
argmin_θ E_{(x,y)∼D} [ max_{δ∈S} l(f(x + δ; θ), y) ], (1)
where S is a perturbation constraint (an l∞ ball of radius ϵ in this work). The outer minimization problem in AT is often the same as in standard training; however, AT has a unique inner maximization problem that seeks a perturbation inside S that maximizes the loss. PGD AT and FGSM AT are two typical adversarial training methods, with PGD attack and FGSM attack solving the inner maximization problem, respectively.
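A minimal sketch of one FGSM AT update for Eq. 1 is given below; model, images and labels are assumed PyTorch objects with pixel values in [0, 1], the zero initialization of the perturbation corresponds to plain FGSM AT (the random initialization of FGSM-RS is omitted), and ϵ = 8/255 follows the setting used in this work.
import torch
import torch.nn.functional as F

def fgsm_at_step(model, optimizer, images, labels, eps=8/255):
    # Inner maximization: one FGSM step under the l_inf budget eps.
    delta = torch.zeros_like(images, requires_grad=True)
    loss = F.cross_entropy(model(images + delta), labels)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (eps * grad.sign()).detach()
    adv = torch.clamp(images + delta, 0.0, 1.0)
    # Outer minimization: standard training step on the adversarial examples.
    optimizer.zero_grad()
    F.cross_entropy(model(adv), labels).backward()
    optimizer.step()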
Experimental setups. Unless specified, we follow the settings in GradAlign (Andriushchenko & Flammarion, 2020) during training and evaluation. The experiments are conducted on CIFAR10 with PreAct ResNet-18, trained for 30 epochs with cyclic learning rates and half-precision training. We adopt SGD optimizer with weight decay 5 × 10−4, and the maximum learning rate is set to 0.2. ℓ∞ attack with perturbation constraint ϵ=8/255 is applied in both training and evaluation. Following (Wong et al., 2020; Andriushchenko & Flammarion, 2020), we calculate the Standard accuracy (Standard Acc) on clean samples, FGSM accuracy (FGSM Acc), and PGD accuracy (PGD Acc) under PGD-50-10 attack (performing PGD-50 attack with ten restarts and step size α = ϵ/4) for evaluation.
2.2 CATASTROPHIC OVERFITTING IN FGSM AT
What are the CO Phenomena? Notably, a model trained only on adversarial examples generated by FGSM attack in FGSM AT still has robustness against PGD attack. In practice, this robustness is only slightly lower than that of much more computationally expensive PGD AT. However, this robustness level can often not be maintained till the end of training as a classical PGD AT. Specifically, as the FGSM AT evolves, the model robustness against PGD attack first increases but then enters a phase where the robustness quickly drops to and stays at zero. Following (Wong et al., 2020), this phase is termed catastrophic overfitting (CO). Another intriguing phenomenon related to CO is that for a model at the phase of CO, it achieves a higher FGSM Acc than Standard Acc (Kim et al., 2020; Andriushchenko & Flammarion, 2020). We term these two phenomena regarding CO as Phenomenon 1 and Phenomenon 2 respectively, as in Section 1.
How to explain the CO phenomena? With the finding that random initialization of perturbation helps alleviate CO (Wong et al., 2020), a tempting explanation suggests that the CO in FGSM AT lies in the lack of perturbation diversity, which has been refuted by (Andriushchenko & Flammarion, 2020). Instead, it attributes the reason for the PGD Acc drop to local non-linearity, which is quantified by the gradient alignment: cos(∇xℓ(x, y; θ),∇xℓ(x+η, y; θ)). The local non-linearity (low gradient alignment) indicates a low linear approximation quality of FGSM perturbations to PGD perturbations. In other words, local non-linearity means that the inner maximization problem in Eq 1 cannot be solved accurately by FGSM. It is demonstrated in (Andriushchenko & Flammarion, 2020) that local linearity decreases significantly when CO happens in FGSM AT. Their perspective is mainly dependent on the co-occurrence between non-linearity and the drop of PGD Acc. In other words, the non-linearity perspective exclusively focuses on explaining Phenomenon 1, for which this work provides an alternative NRF explanation (see Section 3). More importantly, our work fills the gap to explain Phenomenon 2 from a NRF perspective (see Section 4).
How to prevent CO? With the focus on Phenomenon 1, numerous works have attempted to prevent CO. Fast AT (Wong et al., 2020) is the first to show FGSM AT can achieve comparable robustness as PGD AT of “free" variants (Shafahi et al., 2019; Zhang et al., 2019b). A follow-up work (Andriushchenko & Flammarion, 2020) shows that CO still occurs in (Wong et al., 2020) when the step size increases and introduces a regularization loss (GradAlign) for maximizing local linearity to avoid CO. Other successful attempts for avoiding CO include adaptive perturbation size (Kim et al., 2020), dynamic dropout scheduling (Vivek & Babu, 2020) and detection-based alternating strategy (Li et al., 2020). Intriguingly, very recent works (Zhang et al., 2022; de Jorge et al., 2022) have shown that adding noise on the image input is sufficient for preventing collapse and achieves SOTA performance. However, the reason for its success remains not fully clear, for which our NRF perspective with direction-based categorization provides an explanation (see Section 5).
3 NON-ROBUST FEATURE PERSPECTIVE ON ADVERSARIAL TRAINING
Before investigating CO from the NRF perspective, we first revisit the definition and methodology of robust and non-robust features defined in (Ilyas et al., 2019) (Fig. 1(a)). Considering the difference of attack strength between FGSM attack and PGD attack, we extend the non-robust features defined in (Ilyas et al., 2019) to a fine-grained categorization under FGSM attack (strength-based categorization in Fig. 1(b)) and discuss its relationship with CO phenomena.
3.1 BACKGROUND ON FEATURE USEFULNESS AND ROBUSTNESS
Here, we revisit the definitions and methodology of DNN features introduced in (Ilyas et al., 2019). According to (Ilyas et al., 2019), a feature is defined as a function mapping from the input space X to the real numbers, i.e., f : X → R, where R can be the label space in a classification task. Therefore, a DNN classifier can be perceived as a function utilizing a set of useful features for label prediction (Ilyas et al., 2019), where useful features in (Ilyas et al., 2019) are characterized by their positive correlation with the true label, defined as:
• ρ-useful features: A feature f is ρ-useful (ρ > 0) if it is correlated with the true label in expectation, shown as follows:
E_{(x,y)∼D}[y · f(x)] ≥ ρ. (2)
To understand adversarial vulnerability, (Ilyas et al., 2019) further proposes to dichotomize the above useful features into robust features (RFs) and non-robust features (NRFs), defined as follows:
• Robust feature (RFs): a useful feature f is robust if there exists a γ > 0 for it to be γ-robustly useful under some specified set of valid perturbations ∆, shown as follows:
E_{(x,y)∼D}[ inf_{δ∈∆(x)} y · f(x + δ) ] ≥ γ. (3)
• Non-robust feature (NRFs): a useful feature f is non-robust if γ > 0 does not exist.
Adversarial vulnerability can be attributed to the existence of NRFs (Ilyas et al., 2019). As discussed in (Ilyas et al., 2019), adversarial vulnerability is caused by the presence of NRFs, which are useful and predictive. According to (Ilyas et al., 2019), "in the presence of an adversary, any useful but non-robust features can be made anti-correlated with the true label, leading to adversarial vulnerability". Therefore, adversarial training obtains a robust model by discouraging it from learning NRFs. In practice, finding a worst-case perturbation under a certain budget for Eq. 3 is not feasible since it is often an NP-hard problem (Katz et al., 2017; Weng et al., 2018), and thus (Ilyas et al., 2019) uses multi-step PGD attack to approximate such a worst-case solution when investigating NRFs.
Fig. 1 (a) summarizes the feature definition in (Ilyas et al., 2019). Specifically, the plus sign (+) indicates the useful features which has positive correlation with true labels, while the minus sign (−) indicates anti-correlated features under PGD attack.
Verifying the existence of NRFs (Ilyas et al., 2019). The procedure for verifying the existence of NRFs in (Ilyas et al., 2019) is summarized in Fig. 2(a) in three steps. At Step 1, a model M1 is trained with standard training on the original training set (X_train, y), where X_train and y denote the training samples and their true labels, respectively. At Step 2, a random label y_rand is first picked for each training sample to ensure that the training set X_train has no features positively correlated with the random labels y_rand. After that, a perturbation δ is generated by PGD attack on M1 by pushing the prediction f(x + δ) towards y_rand. This step aims to generate a perturbation δ
which contains NRFs related to y_rand. At Step 3, a model M2 is trained on the new dataset (X_train + δ, y_rand) generated at Step 2 and then evaluated on the original test dataset with true labels (X_test, y). According to (Ilyas et al., 2019), the perturbation δ is the only connection between X_train + δ and y_rand since there is no positive correlation between X_train and y_rand. Therefore, if model M2 achieves higher accuracy than random prediction (e.g., 10% for CIFAR10) on the original test dataset (X_test, y) with true labels, the existence of NRFs in δ is verified. We re-implement this experiment from (Ilyas et al., 2019), and M2 achieves an accuracy of 48.16% (with five independent runs), as shown in Table 1, which verifies the existence of NRFs as in (Ilyas et al., 2019).
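A minimal sketch of Step 2 of this procedure is given below: every training image is paired with a random label and a targeted PGD perturbation on M1 that pushes the prediction towards that label, so that δ is the only link between X_train + δ and y_rand. The attack hyperparameters (step size, number of steps) are illustrative assumptions, and the (images + δ, y_rand) pairs returned here are what M2 is trained on at Step 3.
import torch
import torch.nn.functional as F

def relabelled_dataset(m1, images, num_classes=10, eps=8/255, alpha=2/255, steps=20):
    # Step 2 of Fig. 2(a): random labels plus targeted PGD perturbations on M1.
    y_rand = torch.randint(num_classes, (images.size(0),))
    delta = torch.zeros_like(images)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(m1(images + delta), y_rand)        # targeted: move towards y_rand
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps).detach()
    return (images + delta).clamp(0.0, 1.0), y_rand               # train M2 on this pair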
3.2 STRENGTH-BASED NRF CATEGORIZATION
It is widely known that PGD attack is stronger than FGSM attack, which is supported by the finding that FGSM Acc is higher than PGD Acc under the same l∞ perturbation budget (Madry et al., 2018). Thus, PGD Acc is often adopted as a common metric to evaluate the model robustness. FGSM AT is faster than PGD AT but at the cost of a mildly lower PGD Acc (than PGD AT) even when CO does not happen in FGSM AT. When CO occurs, the PGD Acc drops to a value close to zero (Phenomenon 1). Since the difference between PGD AT and FGSM AT lies in the attack variant, GradAlign (Andriushchenko & Flammarion, 2020) explains their difference based on how well the adopted attack can solve the inner maximization problem. Specifically, FGSM AT yields lower robustness because FGSM attack cannot solve the problem as accurately as PGD attack because PGD attack is stronger than its FGSM counterpart. The following discussion provides an alternative interpretation of the attack strength-based explanation in (Andriushchenko & Flammarion, 2020) from the NRF perspective.
Intuitive categorization. Considering the attack strength difference, the NRFs can be divided into two types, as shown in Figure 1(b). The first type of NRFs is named as double non-robust feature (DNRF) since it can be made anti-correlated with the true labels by both FGSM and PGD attack. The existence of DNRF explains why FGSM AT yields a more robust model than standard training against PGD attack during evaluation. By contrast, the other type of NRFs is called single non-robust feature (SNRF) since it is made anti-correlated with true labels by PGD attack but is still positive-correlated with true labels under FGSM attack.
Experimental verification of DNRF. This setup follows the procedure in (Ilyas et al., 2019) (Fig. 2(a)) with a small modification. By definition, DNRF can be made anti-correlated with the true labels by both PGD attack and FGSM attack. Therefore, the existence of DNRF ensures that the test accuracy will also be higher than random guessing (10% for CIFAR10) if we replace the PGD attack at Step 2 with FGSM attack, as shown in Fig. 2(b). This is confirmed by an accuracy of 20.01% on the original test set, see Table 1.
On SNRF and its relationship with the CO phenomena. It is challenging to directly verify the existence of SNRF. The fact that 20.01% (FGSM attack) is lower than 48.61% (PGD attack) in Table 1 can be seen as indirect evidence for the existence of SNRFs, which can be extracted by PGD attack but not by FGSM attack. Even though direct empirical verification of SNRF is challenging, its theoretical existence is straightforward as long as FGSM attack is weaker than PGD attack; moreover, the weaker the FGSM attack (compared with PGD attack), the more SNRFs there are. FGSM AT cannot discourage the model from learning SNRFs as effectively as PGD AT, and thus we can attribute the lower PGD Acc of FGSM AT (relative to PGD AT) to the existence of SNRFs. However, SNRF under the strength-based NRF categorization can (at most) partly explain CO Phenomenon 1 but cannot justify CO Phenomenon 2: the model might have a very low PGD Acc, but FGSM Acc cannot be higher than Standard Acc even in the extreme case where all NRFs become SNRFs due to a very weak FGSM attack. The following section introduces a new NRF categorization that better explains the CO phenomena, especially Phenomenon 2.
4 DIRECTION-BASED NRF CATEGORIZATION FOR UNDERSTANDING CO PHENOMENA
Direction-based NRF categorization. Similar to the strength-based NRF categorization above, the categorization here also considers FGSM attack, but it differs in a key criterion: whether the usefulness of a given NRF is decreased or increased under FGSM attack. We call this categorization direction-based, and define it as follows:
• NRF1: a type of NRF whose usefulness is decreased by FGSM attack, and which can therefore be exploited by FGSM attack to lower the classification accuracy on FGSM adversarial examples.
• NRF2: a type of NRF whose usefulness is increased by FGSM attack, and whose presence therefore raises the classification accuracy on FGSM adversarial examples.
NRF1 and NRF2 still follow the definition of NRF with respect to PGD attack. In other words, the usefulness of both NRF1 and NRF2 is decreased after PGD attack. The change in their usefulness after each attack is summarized in Fig. 3, where increases and decreases in feature usefulness are denoted by ↑ and ↓, respectively.
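Usefulness here is the expected correlation E[y · f(x)] between a feature and the label, following Ilyas et al., so the bookkeeping of Fig. 3 can be made concrete, at least in a toy setting, by measuring how that empirical correlation changes on attacked inputs. The sketch below assumes binary ±1 labels and a scalar feature map, neither of which is required by the paper; it is only meant to illustrate the ↑/↓ categorization.

```python
import torch

def categorize_feature(feat, x, y_pm1, x_fgsm, x_pgd):
    """Bucket a scalar feature by how its empirical usefulness E[y * f(x)] changes
    under each attack; x_fgsm / x_pgd are the same inputs after FGSM / PGD."""
    use_clean = (y_pm1 * feat(x)).mean()
    use_fgsm = (y_pm1 * feat(x_fgsm)).mean()
    use_pgd = (y_pm1 * feat(x_pgd)).mean()
    if use_pgd >= use_clean:
        return "not an NRF"                      # PGD does not reduce its usefulness
    return "NRF2" if use_fgsm > use_clean else "NRF1"
```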
4.1 ON NRF2 EXISTENCE AND ITS EXPLANATION FOR PHENOMENON 2
When we discuss DNRF and SNRF in Section 3.2, by default, we assume that their usefulness is decreased after FGSM attack, and thus they can be seen as NRF1. In other words, the existence of NRF1 is straightforward; however, it is unclear whether NRF2 actually exists.
Conjecture 1: We conjecture that there exists NRF2, and the FGSM attack in AT encourages the model to learn NRF2.
Differences between verifying NRF1 and NRF2. The experimental procedure for verifying NRF2 is shown in Fig. 2(c). The key reason why the procedures in Fig. 2 can verify the existence of certain NRFs is that the generated perturbation δ is the only connection between Xtrain + δ and yrand, and it should include certain NRFs related to yrand. In other words, the usefulness of certain NRFs should be increased after the attack at Step 2 of Fig. 2. For FGSM attack, f(x + δ) is optimized to be far from the true label y by maximizing the loss l(f(x + δ), y), and the usefulness of NRF1 and NRF2 is decreased and increased by definition, respectively (see Fig. 3). Therefore, to increase the usefulness of NRF1 at Step 2, the optimization goal should be to move the prediction close to yrand, as shown in Fig. 2(b). By contrast, to verify the existence of NRF2, the optimization goal at Step 2 should follow that of FGSM attack, i.e., move the prediction far from yrand, as shown in Fig. 2(c), which increases the usefulness of NRF2.
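The two Step-2 objectives therefore differ only in the direction of the gradient step relative to yrand. A hedged sketch of the Fig. 2(c) direction, complementing the targeted step shown earlier (helper name and radius are our assumptions):

```python
import torch
import torch.nn.functional as F

def step2_fgsm_away(model_m1, x, y_rand, eps=8/255):
    """Fig. 2(c) variant: maximize the loss with respect to y_rand (the usual FGSM
    direction), which by definition is the direction that raises NRF2 usefulness."""
    x_req = x.detach().clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model_m1(x_req), y_rand), x_req)
    return (x + eps * grad.sign()).clamp(0, 1).detach(), y_rand  # plus sign: move away
```

In both the Fig. 2(b) and Fig. 2(c) variants, M2 is then trained on the resulting (perturbed image, yrand) pairs and evaluated on the clean test set.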
Verification of Conjecture 1. As discussed above, verifying the existence of NRF2 requires an optimization goal opposite to that used for NRF1 at Step 2 (see Fig. 2(c)). For the model M1 at Step 1, we adopt FGSM AT, with the results reported in Table 2. When M1 at Step 1 is set to a CO model obtained with FGSM AT, our model M2 evaluated on the original test set achieves an accuracy of around 17.52% ± 1.59% (over five independent runs), which verifies the existence of NRF2 since it is higher than 10% (random prediction for CIFAR10). Given that the usefulness of NRF2 is increased after FGSM attack, the existence of NRF2 in a model after CO justifies why FGSM Acc can be higher than Standard Acc (CO Phenomenon 2). Moreover, when we instead set M1 at Step 1 to a model obtained with standard training, the test accuracy of M2 is close to random prediction, suggesting that it is the FGSM attack in AT that encourages the model to learn NRF2.
Does NRF2 exist in a model before CO in FGSM AT? It is interesting to investigate whether NRF2 exists in the FGSM AT model before CO. To this end, we set M1 at Step 1 of Fig. 2(c) to an FGSM AT model saved before CO, which yields an M2 with an accuracy close to random prediction (see FGSM AT (Before CO) in Table 2). This indicates that NRF2 mainly exists in the model after CO in FGSM AT, which further confirms the relationship between CO and NRF2.
Can NRF2 be exploited by FGSM attack to decrease FGSM accuracy? (Kim et al., 2020) reports that FGSM Acc is higher than Standard Acc when CO happens in FGSM AT. Here, we show that this is not always the case if we evaluate the FGSM Acc of the CO model with different step sizes, as shown in Table 3. Note that the result with a step size of zero corresponds to the Standard Acc. We find that FGSM Acc is higher (lower) than Standard Acc when the step size is relatively large (small). The results suggest that an FGSM attack with a smaller step size can still exploit NRF2 and make it anti-correlated with the true label. In other words, the step size plays a non-trivial role in how the FGSM attack exploits NRF2. The above results help explain why CO only occurs when the step size is set to a relatively large value (Wong et al., 2020).
Table 3: Evaluate FGSM Acc of CO model under different step sizes.
step size (/255) 0 1 2 3 4 5 6 7 8
FGSM Acc (%) 85.77 37.27 41.57 66.78 86.93 95.16 96.94 96.63 94.72
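A sketch of the sweep behind Table 3, assuming a CIFAR-10 test loader yielding (image, label) batches in [0, 1] and a trained (possibly CO) model; the helper name and evaluation details are our assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_acc_vs_step_size(model, loader, step_sizes=(0, 1, 2, 3, 4, 5, 6, 7, 8)):
    """FGSM accuracy of a model for several step sizes; step size 0 reduces to the
    Standard Acc since no perturbation is applied."""
    results = {}
    for s in step_sizes:
        alpha, correct, total = s / 255.0, 0, 0
        for x, y in loader:
            x_req = x.detach().clone().requires_grad_(True)
            grad, = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)
            x_adv = (x + alpha * grad.sign()).clamp(0, 1).detach()
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        results[s] = 100.0 * correct / total
    return results
```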
A dynamic view on CO from the NRF2 perspective. Prior works analyzing CO mainly focus on Phenomenon 1, i.e., the low PGD Acc, which appears to be a pseudo-static state since the PGD Acc stays at zero after CO. Here, we investigate the CO model further by analyzing its FGSM Acc at different epochs, as shown in Figure 4. Fig. 4 shows that the FGSM Acc under large step sizes consistently gets higher with more training epochs, suggesting the model continues to rely more on NRF2. In other words, CO can be perceived as a dynamic process of learning NRF2, which does not stop after the drop of PGD Acc. This is reasonable because NRF2 features can be very useful under FGSM attack.
4.2 CAN NRF2 ALSO JUSTIFY PHENOMENON 1?
The above analysis verifies the existence of NRF2 in a CO model, which justifies why the accuracy under FGSM attack can exceed the Standard Acc (Phenomenon 2). Here, we discuss whether NRF2 can also be used to justify Phenomenon 1. Regarding the relationship between NRF2 and PGD Acc, we formulate the following conjecture.
Conjecture 2: We conjecture that NRF2 can be a cause of a significant PGD Acc drop.
Verification of Conjecture 2. To verify Conjecture 2, we finetune a robust model on a training dataset with and without such NRF2, respectively, and evaluate the PGD Acc on the original test set with true labels (Xtest, y). To minimize the influence of other NRF types, we adopt a model pretrained by PGD AT, which mainly has RFs, for the finetuning experiment. Specifically, we adopt the new training dataset (X + δ, yrand) generated at Step 2 of Fig. 2(c) as the one with NRF2. For the counterpart dataset without NRF2, we remove the added perturbation δ, and thus (X, yrand) is used for training. The basic loss is set to cross-entropy (CE) to encourage learning the features, if any, in the generated dataset. However, with the CE loss alone the accuracy quickly drops to zero due to the random choice of yrand. Thus, a KL loss, which encourages the output of the finetuned model to stay close to that of the pretrained model, is added on top of the CE loss so that the model maintains the original RFs during finetuning. The total loss is shown as follows:
\mathrm{Loss}_{\mathrm{finetune}} = \mathrm{CE}\big(f(x+\delta;\theta),\, y_{\mathrm{rand}}\big) + \lambda \cdot \mathrm{KL}\big(f(x+\delta;\theta_{\mathrm{pretrain}}),\, f(x+\delta;\theta)\big), \qquad (4)
where f(x; θpretrain) and f(x; θ) denote the pretrained PGD model and the finetuned model, respectively. A detailed setup for this experiment is reported in the Appendix. The results with λ set to 5 are shown in Fig. 5. We observe that the PGD Acc can be maintained at around 25% after 30 epochs of finetuning on the dataset (X, yrand), which contains no features correlated with yrand. By contrast, under the same setting, the PGD Acc quickly decreases to a value close to zero for the generated dataset (X + δ, yrand), which contains NRF2. The contrasting results verify the claim in Conjecture 2.
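A sketch of the objective in Eq. 4, assuming `pretrained` is the frozen PGD AT model and that (x_pert, y_rand) come from Step 2 of Fig. 2(c); the softmax/KL plumbing below is one natural reading of the KL term as written, and the default λ follows the value used in Fig. 5.

```python
import torch
import torch.nn.functional as F

def finetune_loss(model, pretrained, x_pert, y_rand, lam=5.0):
    """Eq. 4: cross-entropy toward the random labels plus a KL term that keeps the
    finetuned model's predictions close to those of the frozen PGD-pretrained model."""
    logits = model(x_pert)
    with torch.no_grad():
        ref_probs = F.softmax(pretrained(x_pert), dim=1)
    ce = F.cross_entropy(logits, y_rand)
    kl = F.kl_div(F.log_softmax(logits, dim=1), ref_probs, reduction="batchmean")
    return ce + lam * kl
```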
Additional results with other λ values in Equation 4 are reported in Fig. 6. As λ gets larger, the model finetuned on (X, yrand) maintains more of the RFs learned in the pretrained weights θpretrain, leading to an increase in accuracy. However, the PGD Acc of the model finetuned on (X + δ, yrand) (with NRF2) is zero for a wide range of λ values, which is much lower than that of the model finetuned on (X, yrand) (without NRF2). This further verifies the claim in Conjecture 2. Interestingly, the result in Fig. 6 can also be viewed as further evidence for Conjecture 1. The Standard Acc, evaluated on the original test set (Xtest, y), is higher for the model finetuned on (X + δ, yrand) than for its counterpart finetuned on (X, yrand) for all λ in Fig. 6(b). This finding aligns with Conjecture 1 that there exists a type of NRF whose learning is encouraged under FGSM attack.
Discussion on the drop speed of PGD Acc from the NRF2 perspective. As demonstrated in Section 4.1, CO can be perceived as a dynamic process of learning NRF2. Under this view, learning NRF2 in FGSM AT does not occur suddenly, which is supported by the finding that FGSM Acc still increases even after CO happens. If this is the case, how can we justify the sudden drop of PGD Acc within one epoch? At first sight, it seems that NRF2 can only explain the PGD Acc drop but not its drop speed. However, we argue that the sudden drop of PGD Acc is due to the worst-case property of PGD attack. Note that PGD attack seeks the most effective adversarial perturbation with multiple iterations to fool the model by exploiting the most vulnerable features in the model. In other words, the model is already vulnerable to PGD attack even if it has only learned a small amount of NRF2 (for instance, after only the single epoch in which CO occurs). After the PGD Acc drops to zero, the model continues to learn more NRF2, leading to a higher FGSM Acc.
5 NRF2 HELPS EXPLAIN HOW SOTA METHODS PREVENT CO
A recent work (Zhang et al., 2022) outperforms prior methods in FGSM AT by a large margin without additional computational overhead. Specifically, it shows that adding noise to the input (instead of initializing the adversarial perturbation with noise as in (Wong et al., 2020)) is critical for its success (Zhang et al., 2022). A similar finding has also been reported in another recent work (de Jorge et al., 2022). However, why such a simple technique of adding noise to the images is so effective remains not fully clear. Here, we show that NRF2 sheds new light on their success.
Intuitively, the model tends to learn those features that are useful for prediction. Therefore, PGD AT mainly learns RFs because NRFs are not useful under PGD attack. With FGSM AT, the model is encouraged to learn NRF2 because FGSM attack increases its usefulness. Moreover, according to our analysis in Section 4, CO can be seen as a dynamic process of learning NRF2. Therefore, the key to preventing CO in FGSM AT lies in decreasing the NRF2 usefulness under FGSM attack. Regarding why adding noise to the image input prevents CO, we propose the following conjecture.
Conjecture 3: We conjecture that adding noise to the input decreases the usefulness of NRF2 under FGSM attack (indicated by FGSM Acc).
Verification of Conjecture 3. To facilitate the discussion, we divide all types of features into NRF2 and non-NRF2. A CO model has both NRF2 and non-NRF2, while a non-CO model mainly has non-NRF2. We evaluate a model with and without random noise added to the input and calculate the noise-induced changes of Standard Acc and FGSM Acc (Table 4). Note that for FGSM Acc with noise, the noise is added to the input before the FGSM attack, following (Zhang et al., 2022). For the model before CO, the noise has almost the same influence on Standard Acc and FGSM Acc, i.e., a ▽SA of −0.70% (Standard Acc change) is close to a ▽FGSM of −1.30%. We further conduct the same experiment on a CO model. Before adding noise, the FGSM Acc (94.67%) is higher than the Standard Acc (85.77%), which can be attributed to NRF2 as in Conjecture 1. After adding noise, this trend is reversed (57.37% < 84.55%), suggesting that Phenomenon 2 disappears in this setup. Moreover, ▽FGSM (−37.30%) is much more significant than ▽SA (−1.22%). Such a significant drop of FGSM Acc (▽FGSM) on a CO model (with NRF2) suggests that the NRF2 usefulness under FGSM attack is significantly decreased. Fig. 7 visualizes ▽FGSM and ▽SA for different noise sizes and shows the same trend as Table 4: ▽FGSM of the CO model is the largest change among all settings. Therefore, Conjecture 3 is verified, which provides a new understanding of why input noise prevents CO.
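The noise-induced changes can be computed as sketched below; the helper name is an assumption, the uniform noise magnitude is set equal to the attack radius purely for illustration (the cited works specify their own noise sizes), and the noise is injected before the FGSM step as described above.

```python
import torch
import torch.nn.functional as F

def noise_induced_changes(model, loader, eps=8/255, noise=8/255):
    """Return (change in Standard Acc, change in FGSM Acc) caused by adding uniform
    input noise; for the FGSM numbers the noise is added before the attack step."""
    def accuracy(adv=False, noisy=False):
        correct = total = 0
        for x, y in loader:
            if noisy:
                x = (x + torch.empty_like(x).uniform_(-noise, noise)).clamp(0, 1)
            if adv:
                x_req = x.detach().clone().requires_grad_(True)
                grad, = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)
                x = (x + eps * grad.sign()).clamp(0, 1).detach()
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return 100.0 * correct / total

    delta_sa = accuracy(noisy=True) - accuracy()
    delta_fgsm = accuracy(adv=True, noisy=True) - accuracy(adv=True)
    return delta_sa, delta_fgsm
```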
More discussion on NRF2 explaining earlier attempts at mitigating CO. Even though we mainly apply our NRF2 perspective to understanding the SOTA technique of input noise in recent works (Zhang et al., 2022; de Jorge et al., 2022), it also helps justify earlier successful attempts. For example, the success of random initialization in (Wong et al., 2020) is conceptually similar to adding noise to the input, but with the noise magnitude limited by the allowable perturbation size. (Kim et al., 2020) alleviates CO by limiting the step size, which aligns well with our finding in Table 3. (Li et al., 2020) avoids CO by switching to PGD AT after detecting the occurrence of CO; its success is expected since PGD attack can effectively discourage the model from learning NRF2.
6 CONCLUSION
The reason for CO in FGSM AT remains not fully clear despite various attempts to mitigate it. In contrast to prior works that mainly study the PGD Acc drop to understand CO, our work focuses on another intriguing phenomenon: FGSM Acc is higher than Standard Acc. We have found that there exists NRF2, whose usefulness is increased under FGSM attack, and that CO can be seen as a dynamic process of learning this type of NRF. Our investigation also provides a new understanding of recent successful attempts to mitigate CO.
A APPENDIX
Experimental setups for NRF categorization in Fig. 2. At Step 1, we follow the settings in (Andriushchenko & Flammarion, 2020) and train M1 on CIFAR10 for 30 epochs with a cyclic learning rate schedule whose maximum learning rate is 0.3. The attack radius for both training at Step 1 and perturbation generation at Step 2 is set to 8/255. Based on the new dataset (X + δ, yrand), M2 is trained for 30 epochs with a constant learning rate of 0.015.
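One way to realize this schedule, assuming the common OneCycleLR implementation of a cyclic schedule; the SGD momentum of 0.9 is our assumption (it is not stated in this appendix), while the weight decay follows the main training setup.

```python
import torch

def make_m1_optimizer(model, steps_per_epoch, epochs=30, max_lr=0.3):
    """SGD plus a cyclic (one-cycle) schedule peaking at max_lr, matching the
    'cyclic learning rate with maximum learning rate 0.3' description above."""
    opt = torch.optim.SGD(model.parameters(), lr=max_lr, momentum=0.9, weight_decay=5e-4)
    sched = torch.optim.lr_scheduler.OneCycleLR(
        opt, max_lr=max_lr, epochs=epochs, steps_per_epoch=steps_per_epoch)
    return opt, sched
```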
Experimental setups for the finetuning experiment in Section 4.2. The first two steps of the finetuning experiment follow the same settings as in Fig. 2, generating a new dataset (X + δ, yrand). At Step 3, we first follow the settings of PGD AT in (Andriushchenko & Flammarion, 2020) and train a robust Mpgd. Based on the new dataset (X + δ, yrand), M2 is trained by finetuning from Mpgd for 30 epochs with a constant learning rate of 0.005.
1. What is the focus of the paper regarding catastrophic overfitting, and what are the proposed explanations for it?
2. What are the strengths and weaknesses of the paper, particularly in terms of its ideas, empirical methodology, and clarity?
3. Do you have any concerns or questions about the paper's experiments, such as their setup and relevance to the community?
4. How does the reviewer assess the novelty and reproducibility of the paper's content?
5. Are there any speculations or unsubstantiated claims in the paper, and how do they affect the overall quality of the work?
Summary Of The Paper
This paper proposes a new explanation for catastrophic overfitting based on the existence of a special type of non-robust features which FGSM makes easier to learn. To demonstrate this, the authors reuse the experiments of (Ilyas et al., 2019) to validate their conjecture by showing that training on a dataset containing only this type of non-robust features yields non-trivial accuracy. They later discuss how their new hypothesis can explain other observations regarding CO, with special emphasis on explaining the success of injecting noise during fast AT. Their working hypothesis, which they validate in some experiments, is that adding noise makes learning the non-robust features harder.
Strengths And Weaknesses
Strengths
Interesting idea: The idea that CO could be caused by the existence of certain kinds of non-robust features in the data is very compelling and valuable.
Good empirical methodology: I really appreciate the presentation of the experiments in this work, in which each experiment is motivated by a research question and a plausible hypothesis.
Interesting insights for fast AT methods: Personally, I believe that the insights in Sec. 5 can be relevant to the community as they give more information as to why fast AT methods based on noise injection work.
Weaknesses
Only partial and weak evidence: One of the main weaknesses of this work is that the main experiment in Sec. 3 only provides a weak signal in support of the main hypothesis. 17% accuracy is indeed slightly higher than trivial accuracy, but it is significantly lower than the original 48% used by Ilyas et al. to corroborate their conjecture. In this regard, as the hypothesis presented in this work is rather complex, one would expect more convincing evidence that it is true in all settings. Something that would alleviate my concerns would be seeing a replication of this experiment on other datasets (e.g. SVHN, CIFAR100, ImageNet100…) and other models, showing that the weak numbers observed for CIFAR10 are at least consistent in other settings.
Poor clarity: In general, I have found the central sections of this work very hard to read. The constant use of acronyms, the references to observations as non-descriptive Phenomenon 1 or 2, and, in general, the convoluted writing used to describe an already-complex hypothesis make this paper very hard to read.
Unpolished experiments: I find some experimental setups in the text a bit strange and full of unjustified moving pieces that just add further complexity to test complex conjectures. For example, I am not sure I understand why the regularisation term of Eq. 4 is fully necessary, or why a robust model is needed to verify Conjecture 2.
Some speculation: While I certainly believe this work provides partial evidence to connect CO with the features of the data, I believe it does not do the same to explain the sudden onset of CO in fast AT. The explanations of the authors in this regard are not substantiated by clear evidence and, thus, remain speculations in my mind.
No clear refutation of the alternative hypothesis: Although in their introduction the authors argue that the non-linear hypothesis of Andriushchenko & Flammarion cannot explain Phenomenon 2, there is no clear evidence in this work that explains why it is wrong. There could be other additional hypotheses on top of their non-linear conjecture that could explain that phenomenon. However, the NRF hypothesis of this work does fail to explain the key observation of Andriushchenko & Flammarion, which is that after CO the models become highly non-linear. In the view of the authors, how does the NRF theory then explain this phenomenon, and why does GradAlign succeed in preventing CO?
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is hard to read.
Quality: The idea is compelling and valuable, but the experiments could be cleaned up.
Novelty: I believe this work is novel.
Reproducibility: The main experiments are probably reproducible. However, the main findings are only supported by weak evidence on a single dataset.
1. What is the main contribution of the paper regarding catastrophic overfitting?
2. What are the strengths and weaknesses of the paper, particularly in terms of organization, citations, and explanations?
3. Do you have any concerns or suggestions regarding the paper's content, such as the definition of robust/non-robust features or the need for further experiments and theoretical justification?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper studies the phenomenon of catastrophic overfitting (CO) in adversarially robust training with the single-step perturbation method FGSM. It brings one's attention to an intriguing property of CO, namely that the accuracy on FGSM-attacked samples is higher than that on clean samples. The authors try to explain this phenomenon from a robust/non-robust feature perspective. They conjecture the existence of a special type of non-robust feature that is particularly useful for classification, and verify it based on controlled experiments. They also show that the non-robust feature perspective can potentially explain the recent practice of combating CO, namely adding random noise to the inputs.
Strengths And Weaknesses
This paper is well-organized and presents the problem clearly. It focuses on an important problem in adversarially robust learning, and makes interesting observations.
My biggest concern with this paper is that it might miss an important citation. [1] has discussed in detail the phenomenon studied in this paper, namely that accuracy against FGSM adversarial examples is higher than standard accuracy. In their paper, the phenomenon is dubbed the "label leaking" effect. They provide a very simple explanation for this phenomenon. One-step attack methods that use the true label perform a very predictable transformation that the model can learn to recognize. The generated adversarial examples thus may inadvertently leak information about the true label. I believe this explanation is not discussed anywhere in this paper. Furthermore, such a simple explanation and the observations made in this paper are not contradictory. In fact, the observations in this paper can further support the 'label leaking' explanation.
Nevertheless, some observations made in this paper are indeed novel and interesting. For example, in Section 4.1, at "Verification of Conjecture 1", the authors verified the existence of a particular type of non-robust feature (NRF2) by showing that the adversarial examples generated by a CO model can be learned by another model and achieve non-trivial standard accuracy. Compared to the original CO observation, this further shows that the information about the true label leaked in the adversarial example is in fact transferable, which is worth further investigation.
[1] Adversarial Machine Learning At Scale. Kurakin et al.
Clarity, Quality, Novelty And Reproducibility
This paper is generally organized well but its clarity needs to be improved. For example, the robust/non-robust features are defined in Section 3.1, but are never used in the rest of the paper. Instead of plain words, it would be better if the authors could extend the definitions to abstract the claims in a rigorous manner.
The quality of the paper relies on the correctness and comprehensiveness of the experiments, given that only empirical evidence is presented in the paper. I believe experiments with more settings can help, for example, different model architectures and datasets. But it would be best if the authors could work on a toy dataset (e.g. in [2]) and provide some theoretical justification.
The novelty of this paper is mostly attributed to the new observations about the CO phenomenon. The focus on the special property of CO (phenomenon 2 as mentioned in this paper) itself is not novel since it has been discussed in previous works [1].
[1] Adversarial Machine Learning At Scale. Kurakin et al.
[2] Adversarial Examples Are Not Bugs, They Are Features. Ilyas et al. |
ICLR | Title
Understanding Catastrophic Overfitting in Fast Adversarial Training From a Non-robust Feature Perspective
Abstract
To make adversarial training (AT) computationally efficient, FGSM AT has attracted significant attention. The fast speed, however, is achieved at the cost of catastrophic overfitting (CO), whose cause remains unclear. Prior works mainly study the phenomenon of a significant PGD accuracy (Acc) drop to understand CO while paying less attention to its FGSM Acc. We highlight an intriguing CO phenomenon that FGSM Acc is higher than the accuracy on clean samples and attempt to apply the non-robust feature (NRF) perspective to understand it. By extending the existing NRF framework into a fine-grained categorization, our investigation of CO suggests that there exists a certain type of NRF whose usefulness is increased after FGSM attack, and that CO in FGSM AT can be seen as a dynamic process of learning such NRFs. Therefore, the key to preventing CO lies in reducing their usefulness under FGSM AT, which sheds new light on understanding the success of a SOTA technique for mitigating CO.
1 INTRODUCTION
Despite impressive performance, deep neural networks (DNNs) (LeCun et al., 2015; He et al., 2016; Huang et al., 2017; Zhang et al., 2019a; 2021) are widely recognized to be vulnerable to adversarial examples (Szegedy et al., 2013; Biggio et al., 2013; Akhtar & Mian, 2018). Without giving a false sense of robustness against adversarial attacks (Carlini & Wagner, 2017; Athalye et al., 2018; Croce & Hein, 2020), adversarial training (AT) (Madry et al., 2018; Zhang et al., 2019c) has become the de facto standard approach for obtaining an adversarially robust model via solving a min-max problem in a two-step manner. Specifically, it first generates adversarial examples by maximizing the loss, then trains the model on the generated adversarial examples by minimizing the loss. PGD-N AT (Madry et al., 2018; Zhang et al., 2019c) is a classical AT method, where N is the number of iteration steps used when generating the adversarial samples in the inner maximization. Notably, PGD-N AT is N times slower than its standard-training counterpart on clean samples. A straightforward approach to make AT faster is to set N to 1, i.e., reducing the attack in the inner maximization from multi-step PGD to single-step FGSM (Goodfellow et al., 2015). For simplicity, PGD-based AT and FGSM-based fast AT are termed PGD AT and FGSM AT, respectively.
FGSM AT often fails with a sudden robustness drop against PGD attack while maintaining its robustness against FGSM attack, which is called catastrophic overfitting (CO) (Wong et al., 2020). With Standard Acc denoting the accuracy on clean samples while FGSM Acc and PGD Acc indicating the accuracy under FGSM and PGD attack, we emphasize that a CO model is characterized by two main phenomena as follows.
• Phenomenon 1: The PGD Acc drops to a value close to zero when CO happens (Wong et al., 2020; Andriushchenko & Flammarion, 2020).
• Phenomenon 2: FGSM Acc is higher than Standard Acc for a CO model (Kim et al., 2020; Andriushchenko & Flammarion, 2020).
Multiple works (Wong et al., 2020; Kim et al., 2020; Andriushchenko & Flammarion, 2020) have focused on understanding CO by explaining the drop of PGD Acc in Phenomenon 1; however, they pay less attention to Phenomenon 2 regarding FGSM Acc. Specifically for Phenomenon 1, FGSM-RS (Wong et al., 2020) attributes it to the lack of perturbation diversity in FGSM AT, which
is refuted by a follow-up GradAlign (Andriushchenko & Flammarion, 2020) by demonstrating a co-occurrence of local non-linearity and the PGD Acc drop. However, these understandings cannot explain why FGSM Acc is higher than Standard Acc for a CO model in Phenomenon 2.
In the context of adversarial learning, numerous works (Goodfellow et al., 2015; Tabacof & Valle, 2016; Tanay & Griffin, 2016; Koh & Liang, 2017; Nakkiran, 2019; Athalye et al., 2018; Zhang et al., 2020) have attempted to explain why adversarial examples exist from different angles, among which non-robust feature (NRF) (Ilyas et al., 2019) is a popular one which also aligns well with all other explanations (Goodfellow et al., 2015; Tabacof & Valle, 2016; Tanay & Griffin, 2016; Koh & Liang, 2017; Nakkiran, 2019; Athalye et al., 2018). Such compatibility suggests that the NRF perspective constitutes an essential tool for understanding adversarial vulnerability, to which CO is also directly related. Specifically, the authors of (Ilyas et al., 2019) define the positive-correlation between features and true labels as feature usefulness (see Section 3.1 for more detailed definitions). Therefore, the adversarial vulnerability of DNNs is attributed to the existence of non-robust features (NRFs), which can be made anti-correlated with the true label under adversary. This understanding of NRFs in (Ilyas et al., 2019) well aligns with the fact that a CO model achieves close to zero robustness against PGD attack, and thus motivates us to believe that the NRF perspective might be an auspicious direction for understanding CO in FGSM AT.
The NRF in (Ilyas et al., 2019) is defined with PGD attack, which is followed in this work; however, we extend their NRF framework by additionally considering FGSM attack for fine-grained categorization. Considering the difference of adversarial attack strength between FGSM and PGD attack, GradAlign (Andriushchenko & Flammarion, 2020) explains Phenomenon 1 by demonstrating how well the attack variant (FGSM or PGD attack) can solve the inner maximization problem in AT. We start our investigation by providing an alternative interpretation of this adversarial strength difference between the two attack variants within the NRF framework (Ilyas et al., 2019), named strength-based NRF categorization. Despite aligning well with Phenomenon 1, we find that this strength-based categorization cannot explain Phenomenon 2, since the usefulness of these NRFs is decreased under FGSM attack, which leads to a decrease (instead of the increase in Phenomenon 2) in classification accuracy on FGSM adversarial examples relative to clean samples.
To understand Phenomenon 2 in CO from the NRF perspective, we conjecture that there exists a type of NRF whose usefulness is increased under FGSM attack, which can thus lead to a higher FGSM Acc than Standard Acc (Phenomenon 2). In other words, if such a type of NRF (NRF2 in the following categorization) exists, Phenomenon 2 can be justified. Considering whether the usefulness is decreased or increased under FGSM attack, we propose a direction-based NRF categorization where NRF2 (NRF1) leads to the increase (decrease) of classification accuracy under FGSM attack. To prove the existence of NRF2, we follow the procedure of verifying the existence of NRF in (Ilyas et al., 2019). Moreover, we show that NRF2 can cause a significant PGD Acc drop, which also helps justify Phenomenon 1 in CO.
Overall, towards understanding CO in FGSM AT, our contributions are summarized as follows:
• Our work shifts the previous focus on PGD Acc in Phenomenon 1 to FGSM Acc in Phenomenon 2 for understanding CO. Given NRF as a popular perspective on adversarial vulnerability, we are the first to attempt at applying it to explain Phenomenon 2.
• We extend the existing NRF framework under PGD attack (Ilyas et al., 2019) to more fine-grained NRF categorization by FGSM attack. We verify the existence of NRF2 and show that its existence well justifies Phenomenon 2 (as well as Phenomenon 1).
• Very recent works show that adding noise on the image input achieves SOTA performance for FGSM AT. However, the mechanism by which such a simple technique prevents CO remains not fully clear; our NRF2 perspective sheds new light on its success.
2 PROBLEM OVERVIEW AND RELATED WORK
2.1 FGSM AT AND EXPERIMENTAL SETUPS
Let D denote a data distribution with (x, y) pairs and f(·, θ) parameterized by θ denote a deep model. For standard training, the model f(·, θ) is trained on D by minimizing E(x,y)∼D[l(f(x, θ), y)], where l indicates a cross-entropy loss for a typical multi-class classification task. Adversarial training
(AT) (Madry et al., 2018) for obtaining a robust model is formalized as a min-max optimization problem:
$\mathop{\arg\min}_{\theta}\; \mathbb{E}_{(x,y)\sim D}\Big[\max_{\delta \in S} \ell(f(x+\delta;\theta),\, y)\Big],$   (1)
where S is a perturbation limitation (ϵ with the l∞ constraint in this work). The outer minimization problem in AT is often the same as standard training; however, AT has a unique inner maximization problem that seeks a perturbation inside S that maximizes the optimization loss. PGD AT and FGSM AT are two typical adversarial training methods with PGD attack and FGSM attack solving the inner maximization problem, respectively.
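To make the two inner-maximization variants concrete, the following is a minimal PyTorch-style sketch of FGSM and PGD perturbation generation together with one FGSM AT update; the helper names, the omission of pixel-range clipping, and other details are illustrative simplifications rather than the exact implementation used in our experiments.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """Single-step FGSM: one signed-gradient step of size eps (FGSM AT)."""
    delta = torch.zeros_like(x, requires_grad=True)
    F.cross_entropy(model(x + delta), y).backward()
    return eps * delta.grad.sign()

def pgd_perturb(model, x, y, eps, alpha, steps):
    """Multi-step PGD: signed-gradient steps of size alpha, projected back onto
    the l_inf ball of radius eps after every step (PGD AT)."""
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return delta

def fgsm_at_step(model, optimizer, x, y, eps):
    """One FGSM AT update: solve the inner max with a single FGSM step,
    then take one outer minimization step on the perturbed batch."""
    delta = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    F.cross_entropy(model(x + delta), y).backward()
    optimizer.step()
```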
Experimental setups. Unless specified, we follow the settings in GradAlign (Andriushchenko & Flammarion, 2020) during training and evaluation. The experiments are conducted on CIFAR10 with PreAct ResNet-18, trained for 30 epochs with cyclic learning rates and half-precision training. We adopt SGD optimizer with weight decay 5 × 10−4, and the maximum learning rate is set to 0.2. ℓ∞ attack with perturbation constraint ϵ=8/255 is applied in both training and evaluation. Following (Wong et al., 2020; Andriushchenko & Flammarion, 2020), we calculate the Standard accuracy (Standard Acc) on clean samples, FGSM accuracy (FGSM Acc), and PGD accuracy (PGD Acc) under PGD-50-10 attack (performing PGD-50 attack with ten restarts and step size α = ϵ/4) for evaluation.
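The evaluation loop below sketches how the three reported metrics can be computed from a generic attack function; the multi-restart logic (a sample counts as robust only if it survives every restart) mirrors the PGD-50-10 protocol, and each restart is assumed to use a random start so that restarts differ. Function names and signatures are illustrative, not released code.

```python
import torch

def accuracy_under_attack(model, loader, attack_fn=None, restarts=1, device="cuda"):
    """attack_fn(model, x, y) -> perturbation; attack_fn=None gives Standard Acc,
    a single-step attack gives FGSM Acc, and a 50-step attack with restarts=10
    corresponds to the PGD-50-10 Acc reported above."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        robust = torch.ones_like(y, dtype=torch.bool)
        for _ in range(max(restarts, 1)):
            delta = attack_fn(model, x, y) if attack_fn else torch.zeros_like(x)
            robust &= model(x + delta).argmax(dim=1).eq(y)
        correct += robust.sum().item()
        total += y.numel()
    return correct / total
```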
2.2 CATASTROPHIC OVERFITTING IN FGSM AT
What are the CO Phenomena? Notably, a model trained only on adversarial examples generated by FGSM attack in FGSM AT still has robustness against PGD attack. In practice, this robustness is only slightly lower than that of the much more computationally expensive PGD AT. However, this robustness level often cannot be maintained until the end of training, unlike in classical PGD AT. Specifically, as FGSM AT evolves, the model robustness against PGD attack first increases but then enters a phase where the robustness quickly drops to and stays at zero. Following (Wong et al., 2020), this phase is termed catastrophic overfitting (CO). Another intriguing phenomenon related to CO is that a model at the phase of CO achieves a higher FGSM Acc than Standard Acc (Kim et al., 2020; Andriushchenko & Flammarion, 2020). We term these two phenomena regarding CO as Phenomenon 1 and Phenomenon 2, respectively, as in Section 1.
How to explain the CO phenomena? With the finding that random initialization of the perturbation helps alleviate CO (Wong et al., 2020), a tempting explanation suggests that CO in FGSM AT lies in the lack of perturbation diversity, which has been refuted by (Andriushchenko & Flammarion, 2020). Instead, it attributes the reason for the PGD Acc drop to local non-linearity, which is quantified by the gradient alignment: cos(∇xℓ(x, y; θ),∇xℓ(x+η, y; θ)). Local non-linearity (low gradient alignment) indicates a low linear approximation quality of FGSM perturbations to PGD perturbations. In other words, local non-linearity means that the inner maximization problem in Eq. 1 cannot be solved accurately by FGSM. It is demonstrated in (Andriushchenko & Flammarion, 2020) that local linearity decreases significantly when CO happens in FGSM AT. Their perspective mainly relies on the co-occurrence between non-linearity and the drop of PGD Acc. In other words, the non-linearity perspective exclusively focuses on explaining Phenomenon 1, for which this work provides an alternative NRF explanation (see Section 3). More importantly, our work fills the gap of explaining Phenomenon 2 from an NRF perspective (see Section 4).
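A sketch of the gradient-alignment metric is given below, assuming a uniform noise draw η ∼ U(−ϵ, ϵ) and averaging over the batch; GradAlign's exact implementation may differ in details, so this is only illustrative.

```python
import torch
import torch.nn.functional as F

def grad_alignment(model, x, y, eps):
    """cos( grad_x l(x, y; theta), grad_x l(x + eta, y; theta) ), eta ~ U(-eps, eps)."""
    def input_grad(inp):
        inp = inp.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(inp), y)
        grad, = torch.autograd.grad(loss, inp)
        return grad

    g_clean = input_grad(x)
    g_noisy = input_grad(x + torch.empty_like(x).uniform_(-eps, eps))
    return F.cosine_similarity(g_clean.flatten(1), g_noisy.flatten(1), dim=1).mean()
```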
How to prevent CO? With the focus on Phenomenon 1, numerous works have attempted to prevent CO. Fast AT (Wong et al., 2020) is the first to show FGSM AT can achieve comparable robustness as PGD AT of “free" variants (Shafahi et al., 2019; Zhang et al., 2019b). A follow-up work (Andriushchenko & Flammarion, 2020) shows that CO still occurs in (Wong et al., 2020) when the step size increases and introduces a regularization loss (GradAlign) for maximizing local linearity to avoid CO. Other successful attempts for avoiding CO include adaptive perturbation size (Kim et al., 2020), dynamic dropout scheduling (Vivek & Babu, 2020) and detection-based alternating strategy (Li et al., 2020). Intriguingly, very recent works (Zhang et al., 2022; de Jorge et al., 2022) have shown that adding noise on the image input is sufficient for preventing collapse and achieves SOTA performance. However, the reason for its success remains not fully clear, for which our NRF perspective with direction-based categorization provides an explanation (see Section 5).
3 NON-ROBUST FEATURE PERSPECTIVE ON ADVERSARIAL TRAINING
Before investigating CO from the NRF perspective, we first revisit the definition and methodology of robust and non-robust features defined in (Ilyas et al., 2019) (Fig. 1(a)). Considering the difference of attack strength between FGSM attack and PGD attack, we extend the non-robust features defined in (Ilyas et al., 2019) to a fine-grained categorization under FGSM attack (strength-based categorization in Fig. 1(b)) and discuss its relationship with CO phenomena.
3.1 BACKGROUND ON FEATURE USEFULNESS AND ROBUSTNESS
Here, we revisit the definitions and methodology of DNN features introduced in (Ilyas et al., 2019). According to (Ilyas et al., 2019), a feature is defined as a function mapping from the input space X to the real numbers, i.e., f : X → ℝ, where ℝ can be the label space in a classification task. Therefore, a DNN classifier can be perceived as a function utilizing a set of useful features for label prediction (Ilyas et al., 2019), where useful features in (Ilyas et al., 2019) are characterized by their positive correlation with the true label, defined as:
• ρ-useful features: A feature f is ρ-useful (ρ > 0) if it is correlated with the true label in expectation, shown as follows:
$\mathbb{E}_{(x,y)\sim D}\,[\,y \cdot f(x)\,] \geq \rho.$   (2)
To understand adversarial vulnerability, (Ilyas et al., 2019) further proposes to dichotomize the above useful features into robust features (RFs) and non-robust features (NRFs), defined as follows:
• Robust feature (RFs): a useful feature f is robust if there exists a γ > 0 for it to be γ-robustly useful under some specified set of valid perturbations ∆, shown as follows:
$\mathbb{E}_{(x,y)\sim D}\Big[\inf_{\delta \in \Delta(x)} y \cdot f(x+\delta)\Big] \geq \gamma.$   (3)
• Non-robust feature (NRFs): a useful feature f is non-robust if γ > 0 does not exist.
Adversarial vulnerability can be attributed to the existence of NRFs (Ilyas et al., 2019). As discussed in (Ilyas et al., 2019), adversarial vulnerability is caused by the presence of NRFs which are useful and predictive. According to (Ilyas et al., 2019), “in the presence of an adversary, any useful but non-robust features can be made anti-correlated with the true label, leading to adversarial vulnerability" (Ilyas et al., 2019). Therefore, adversarial training obtains a robust model by discouraging the model from learning NRFs. In practice, finding a worst-case perturbation under a certain budget for Eq. 3 is not feasible since it is often an NP-hard problem (Katz et al., 2017; Weng et al., 2018), and thus (Ilyas et al., 2019) uses multi-step PGD attack to approximate such a worst-case solution when investigating NRFs.
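To make these definitions operational, the following sketch estimates the two expectations empirically for a binary task with y ∈ {−1, +1} (the convention in (Ilyas et al., 2019)); `perturb_fn` stands for the PGD approximation of the worst-case perturbation and, like the other names, is an assumed helper rather than code from this paper.

```python
import torch

def usefulness(feature, loader):
    """Empirical estimate of E[y * f(x)] in Eq. 2 (larger means more useful)."""
    return torch.cat([y * feature(x) for x, y in loader]).mean()

def robust_usefulness(feature, loader, perturb_fn):
    """PGD-approximated estimate of Eq. 3: perturb_fn(x, y) returns a perturbation
    within the budget that tries to minimize y * f(x + delta)."""
    return torch.cat([y * feature(x + perturb_fn(x, y)) for x, y in loader]).mean()
```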
Fig. 1 (a) summarizes the feature definitions in (Ilyas et al., 2019). Specifically, the plus sign (+) indicates useful features which have a positive correlation with true labels, while the minus sign (−) indicates features that become anti-correlated under PGD attack.
Verifying the existence of NRFs (Ilyas et al., 2019). The procedure verifying the existence of NRFs in (Ilyas et al., 2019) is summarized in Fig. 2(a) by three steps. At Step 1, it trains a model M1 with standard training on the original training set (Xtrain, y), where Xtrain and y indicate the training sample and its corresponding true label, respectively. At Step 2, it first randomly picks a random label yrand for each training sample to ensure that the training set Xtrain has no features with a positive correlation with the random label yrand. After that, perturbation δ is generated by PGD attack on M1 by making sample prediction f(x+ δ) close to yrand. This step aims to generate a perturbation δ
which includes NRFs related to yrand. At Step 3, model M2 is trained on the new dataset (Xtrain+δ, yrand) generated at Step 2, and then evaluated on the original test dataset with true labels (Xtest, y). According to (Ilyas et al., 2019), the perturbation δ is the only connection between Xtrain + δ and yrand since there is no positive correlation between Xtrain and yrand. Therefore, if model M2 achieves higher accuracy than random prediction (e.g., 10% for CIFAR10) on the original test dataset (Xtest, y) with true labels, the existence of NRFs in δ is verified. We re-implement this experiment in (Ilyas et al., 2019), and M2 achieves an accuracy of 48.16% (with five independent runs), as shown in Table 1, which verifies the existence of NRFs as in (Ilyas et al., 2019).
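The three-step procedure can be summarized in code as follows; `targeted_pgd` is an assumed helper that minimizes the loss with respect to the random labels, and a fresh model M2 is then trained on the returned dataset with standard training. The sketch is illustrative rather than the released implementation.

```python
import torch

def build_nrf_dataset(m1, x_train, y_train, num_classes, targeted_pgd):
    """Step 2 of Fig. 2(a): relabel every sample with a random label and add a
    PGD perturbation that pushes M1's prediction toward that random label."""
    y_rand = torch.randint(num_classes, y_train.shape, device=y_train.device)
    delta = targeted_pgd(m1, x_train, y_rand)  # minimizes loss w.r.t. y_rand
    return x_train + delta, y_rand

# Step 3: train M2 with standard training on (x_train + delta, y_rand), then
# evaluate on the original (x_test, y); accuracy above the 10% chance level on
# CIFAR10 indicates that delta carries label-predictive non-robust features.
```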
3.2 STRENGTH-BASED NRF CATEGORIZATION
It is widely known that PGD attack is stronger than FGSM attack, which is supported by the finding that FGSM Acc is higher than PGD Acc under the same l∞ perturbation budget (Madry et al., 2018). Thus, PGD Acc is often adopted as a common metric to evaluate model robustness. FGSM AT is faster than PGD AT but at the cost of a mildly lower PGD Acc (than PGD AT) even when CO does not happen in FGSM AT. When CO occurs, the PGD Acc drops to a value close to zero (Phenomenon 1). Since the difference between PGD AT and FGSM AT lies in the attack variant, GradAlign (Andriushchenko & Flammarion, 2020) explains their difference based on how well the adopted attack can solve the inner maximization problem. Specifically, FGSM AT yields lower robustness because FGSM attack, being weaker than PGD attack, cannot solve the problem as accurately. The following discussion provides an alternative interpretation of the attack strength-based explanation in (Andriushchenko & Flammarion, 2020) from the NRF perspective.
Intuitive categorization. Considering the attack strength difference, the NRFs can be divided into two types, as shown in Figure 1(b). The first type of NRFs is named as double non-robust feature (DNRF) since it can be made anti-correlated with the true labels by both FGSM and PGD attack. The existence of DNRF explains why FGSM AT yields a more robust model than standard training against PGD attack during evaluation. By contrast, the other type of NRFs is called single non-robust feature (SNRF) since it is made anti-correlated with true labels by PGD attack but is still positive-correlated with true labels under FGSM attack.
Experimental verification of DNRF. This setup follows the procedure in (Ilyas et al., 2019) (Fig. 2(a)) with a small modification. By its definition, DNRF has the property of being made anti-correlated with true labels under both PGD attack and FGSM attack. Therefore, the existence of DNRF ensures that the test Acc will
also be higher than random guess (10% for CIFAR10) if we replace the PGD attack at Step 2 with FGSM attack, as shown in Fig. 2(b). This is confirmed by an accuracy of 20.01% on the original test set (see Table 1).
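The only change relative to the sketch of Fig. 2(a) is the Step-2 attack: the targeted PGD perturbation is replaced by a single targeted FGSM step toward the random label, as sketched below (illustrative helper, not released code).

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, y_target, eps):
    """Fig. 2(b): one signed-gradient step *against* the loss w.r.t. y_target,
    i.e. the prediction is pushed toward y_target in a single step."""
    delta = torch.zeros_like(x, requires_grad=True)
    F.cross_entropy(model(x + delta), y_target).backward()
    return -eps * delta.grad.sign()
```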
On SNRF and its relationship with CO phenomena. It is challenging to directly verify the existence of SNRF. The phenomenon that 20.01% (FGSM attack) is lower than 48.16% (PGD attack) in Table 1 can be seen as indirect evidence for the existence of SNRFs, which can be extracted by PGD attack but not by FGSM attack. Even though direct empirical verification of SNRF is challenging, its theoretical existence is straightforward as long as FGSM attack is weaker than PGD attack. Moreover, the weaker the FGSM attack (compared with PGD attack), the more SNRFs there are. FGSM AT cannot discourage the model from learning SNRF as effectively as PGD AT, and thus we can attribute the lower PGD Acc of FGSM AT relative to PGD AT to the existence of SNRF. However, SNRF under strength-based NRF categorization might (at most) partly explain CO Phenomenon 1 but cannot justify CO Phenomenon 2. The reason is that the model might have a very low PGD Acc, but FGSM Acc cannot be higher than Standard Acc even in an extreme case when all the NRFs become SNRF due to a very weak FGSM attack. The following section introduces a new NRF categorization to better explain the CO phenomena, especially Phenomenon 2.
4 DIRECTION-BASED NRF CATEGORIZATION FOR UNDERSTANDING CO PHENOMENA
Direction-based NRF categorization. Similar to the above strength-based NRF categorization, the categorization here considers FGSM attack but differs by a key criterion: whether the usefulness of certain NRFs is decreased or increased under FGSM attack. We call this NRF categorization direction-based, and define it as follows:
• NRF1: NRF1 is a type of NRF whose usefulness is decreased after FGSM attack, thus can be exploited by FGSM attack to decrease the classification accuracy after FGSM attack.
• NRF2: NRF2 is a type of NRF whose usefulness is increased after FGSM attack, thus can be exploited by FGSM attack to increase the classification accuracy after FGSM attack.
NRF1 and NRF2 still follow the definition of NRF regarding PGD attack. In other words, the usefulness of both NRF1 and NRF2 is decreased after PGD attack. The change of their usefulness after attacks is summarized in Fig. 3, where the increase and decrease of feature usefulness are denoted by the ↑ and ↓, respectively.
4.1 ON NRF2 EXISTENCE AND ITS EXPLANATION FOR PHENOMENON 2
When we discuss DNRF and SNRF in Section 3.2, by default, we assume that their usefulness is decreased after FGSM attack, and thus they can be seen as NRF1. In other words, the existence of NRF1 is straightforward; however, it is unclear whether NRF2 actually exists.
Conjecture 1: We conjecture that there exists NRF2, and the FGSM attack in AT encourages the model to learn NRF2.
Differences between verifying NRF1 and NRF2. The experimental procedure for verifying NRF2 is shown in Fig. 2(c). The key reason why the procedures in Fig. 2 can verify the existence of certain NRFs is that the generated perturbation δ is the only connection between Xtrain + δ and yrand, and it should include certain NRFs related to yrand. In other words, the usefulness of certain NRFs should be increased after the attack at Step 2 of Fig. 2. For FGSM attack, f(x+ δ) is optimized to be far from the true label y by maximizing the loss l(f(x+ δ), y), and the usefulness of NRF1 and NRF2 is decreased and increased by definition, respectively (see Fig. 3). Therefore, to increase the usefulness of NRF1 at Step 2, the optimization goal should be to move close to yrand, as shown in Fig. 2(b). By contrast, to verify the existence of NRF2, the optimization goal at Step 2 should follow that of FGSM attack, i.e., to move far from yrand, as shown in Fig. 2(c), which increases the usefulness of NRF2.
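In code, the Step-2 objective of Fig. 2(c) therefore flips the sign relative to the targeted variant: the single FGSM step now maximizes the loss with respect to yrand, pushing the prediction away from the random label, which by definition increases the usefulness of NRF2 (sketch with illustrative names).

```python
import torch
import torch.nn.functional as F

def fgsm_away_from(model, x, y_rand, eps):
    """Fig. 2(c): one signed-gradient ascent step on the loss w.r.t. y_rand,
    pushing the prediction *away* from the random label."""
    delta = torch.zeros_like(x, requires_grad=True)
    F.cross_entropy(model(x + delta), y_rand).backward()
    return eps * delta.grad.sign()
```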
Verification of Conjecture 1. As discussed above, verifying the existence of NRF2 requires an optimization goal opposite to that of NRF1 at Step 2 (see Fig. 2(c)). For the model M1 at Step 1, we adopt FGSM AT, with the results reported in Table 2. When M1 at Step 1 is set to a CO model with FGSM AT, our model M2 evaluated on the original test set achieves an accuracy of around 17.52% ± 1.59% (with five independent runs), which verifies the existence of NRF2 since it is higher than 10% (random prediction for CIFAR10). Given that the usefulness of NRF2 is increased after FGSM attack, the existence of NRF2 in a model after CO justifies why FGSM Acc can be higher than Standard Acc (CO Phenomenon 2). Moreover, when we instead set M1 at Step 1 to a model obtained with standard training, the Test Acc of M2 is close to random prediction, suggesting that it is the FGSM attack in AT that encourages the model to learn NRF2.
Does NRF2 exist in a model before CO in FGSM AT? It is interesting to investigate whether NRF2 exists for the FGSM AT model before CO. To this end, we set M1 at Step 1 of Fig. 2(c) to an FGSM AT model saved before CO, which yields an M2 with an accuracy close to random prediction (see FGSM AT (Before CO) in Table 2). This indicates that NRF2 mainly exists in the model after CO in FGSM AT, which further confirms the relationship between CO and NRF2.
Can NRF2 be exploited by FGSM attack to decrease FGSM accuracy? (Kim et al., 2020) reports that FGSM Acc is higher than Standard Acc when CO happens in FGSM AT. Here, we show that this is not always the case if we evaluate the FGSM Acc of the CO model with different step sizes, as shown in Table 3. Note that the result with a step size of zero indicates the Standard Acc. We find that FGSM Acc is higher (lower) than Standard Acc when the step size is relatively large (small). The results suggest that NRF2 can still be exploited by an FGSM attack with a smaller step size to be anti-correlated with the true label. In other words, the step size plays a non-trivial role when FGSM attack exploits NRF2. The above results well explain why CO only occurs when the step size is set to a relatively large value (Wong et al., 2020).
Table 3: FGSM Acc of the CO model under different step sizes.
step size (/255) 0 1 2 3 4 5 6 7 8
FGSM Acc (%) 85.77 37.27 41.57 66.78 86.93 95.16 96.94 96.63 94.72
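The sweep behind Table 3 can be reproduced with a loop like the following, assuming the `fgsm_perturb` and `accuracy_under_attack` helpers sketched in Section 2 and a trained CO model `co_model` with its test loader; a step size of 0 corresponds to Standard Acc. The snippet is illustrative only.

```python
for s in range(9):
    step = s / 255
    attack = (lambda m, x, y, e=step: fgsm_perturb(m, x, y, eps=e)) if s > 0 else None
    acc = accuracy_under_attack(co_model, test_loader, attack_fn=attack)
    print(f"step size {s}/255: FGSM Acc = {100 * acc:.2f}%")
```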
A dynamic view on CO from the NRF2 perspective. Prior works analyzing CO mainly focus on Phenomenon 1 about low PGD Acc, which seems to be a pseudo-static state since the PGD Acc stays at zero after CO. Here, we investigate the CO model further by analyzing FGSM Acc at different epochs, as shown in Fig. 4. Fig. 4 shows that the FGSM Acc under large step sizes consistently gets higher with more training epochs, suggesting the model continues to rely more on NRF2. In other words, CO can be perceived as a dynamic process of learning NRF2, which does not stop after the drop of PGD Acc. This is reasonable because NRF2 can be a very useful feature under FGSM attack.
4.2 CAN NRF2 ALSO JUSTIFY PHENOMENON 1?
The above analysis verifies the existence of NRF2 in a CO model, which well justifies the improved accuracy after FGSM attack (Phenomenon 2). Here, we discuss whether it can be used to justify Phenomenon 1. Regarding the relationship between NRF2 and PGD Acc, we formulate the following conjecture.
Conjecture 2: We conjecture that NRF2 can be a cause of a significant PGD Acc drop.
Verification of Conjecture 2. To verify Conjecture 2, we finetune a robust model on a training dataset with and without such NRF2, respectively, and evaluate the PGD Acc on the original test set with true labels (Xtest, y). To minimize the influence of other NRF types, we adopt a model pretrained by PGD AT, which mainly has RFs, for the finetuning experiment. Specifically, we adopt the generated new training dataset (X + δ, yrand) at Step 2 of Fig. 2(c) as the one with NRF2. For the counterpart dataset without NRF2, we remove the added perturbation δ, and thus (X , yrand) is
used for training. The basic loss is set to cross-entropy (CE) to encourage learning the features, if any, in the generated dataset. However, the accuracy will quickly reduce to zero due to the random choice of yrand. Thus, a KL loss, which encourages the output of the finetuned model to be close to that of the pretrained model, is added on top of the CE loss to encourage the model in the finetuning process to maintain the original RFs. The total loss is shown as follows:
$\mathrm{Loss}_{\mathrm{finetune}} = \mathrm{CE}\big(f(x+\delta;\theta),\, y_{\mathrm{rand}}\big) + \lambda \cdot \mathrm{KL}\big(f(x+\delta;\theta_{\mathrm{pretrain}}),\, f(x+\delta;\theta)\big),$   (4)
where f(x; θpretrain) and f(x; θ) indicate the pretrained PGD model and the finetuned model, respectively. A detailed setup for this experiment is reported in the Appendix. The results with λ set to 5 are shown in Fig. 5. We observe that the PGD Acc can be maintained around 25% after 30 epochs of finetuning for the dataset (X, yrand), which contains no features. By contrast, under the same setting, the PGD Acc quickly decreases to a value close to zero for the generated dataset (X + δ, yrand), which contains NRF2. The contrasting results verify the claim in Conjecture 2.
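A sketch of the finetuning objective in Eq. 4 is given below; the KL direction (pretrained outputs as the target distribution) and the function names reflect our reading of the equation rather than released code.

```python
import torch
import torch.nn.functional as F

def finetune_loss(model, pretrained, x_pert, y_rand, lam):
    """Eq. 4: cross-entropy toward the random labels plus lambda times a KL term
    that keeps the finetuned outputs close to those of the PGD-pretrained model."""
    logits = model(x_pert)
    with torch.no_grad():
        target_probs = F.softmax(pretrained(x_pert), dim=1)
    ce = F.cross_entropy(logits, y_rand)
    kl = F.kl_div(F.log_softmax(logits, dim=1), target_probs, reduction="batchmean")
    return ce + lam * kl
```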
Additional results with other λ values in Equation 4 are reported in Fig. 6. As λ gets larger, the model finetuned on (X, yrand) maintains more RFs learned in the pretrained weights θpretrain, leading to an increase in accuracy. However, the PGD Acc for the model finetuned on (X + δ, yrand) (with NRF2) is zero for a wide range of λ values, which is much lower than for the model finetuned on (X, yrand) (without NRF2). This further verifies the claim in Conjecture 2. Interestingly, the result in Fig. 6 can also be viewed as another proof of Conjecture 1. The Standard Acc, evaluated on the original test set (Xtest, y), is higher for the model finetuned on (X + δ, yrand) than for its counterpart finetuned on (X, yrand) for all λ in Fig. 6(b). This finding aligns with Conjecture 1 that there exists a type of NRF which can be encouraged under FGSM attack.
Discussion on the drop speed of PGD Acc from the NRF2 perspective. As demonstrated in Section 4.1, CO can be perceived as a dynamic process of learning NRF2. With this insight, learning NRF2 in FGSM AT does not occur suddenly, which is supported by the finding that FGSM Acc still increases even after CO happens. If this is the case, how can we justify the sudden drop of PGD Acc within one epoch? At first sight, it seems that NRF2 can only explain the PGD Acc drop but not its drop speed. However, we argue that the sudden drop of PGD Acc is due to the worst-case property of PGD attack. Note that PGD attack seeks the most effective adversarial perturbation with multiple iterations to fool the model by exploiting the most vulnerable features in the model. In other words, the model is already vulnerable to PGD attack even if it only learns a small amount of NRF2 (one epoch regarding CO, for instance). After the PGD Acc drops to zero, the model continues to learn more NRF2, leading to a higher FGSM Acc.
5 NRF2 HELPS EXPLAIN HOW SOTA METHODS PREVENT CO
A recent work (Zhang et al., 2022) outperforms prior methods in FGSM AT by a large margin without additional computation overhead. Specifically, it shows that adding noise to the input (instead of initializing the adversarial perturbation with noise as in (Wong et al., 2020)) is critical for its success (Zhang et al., 2022). A similar finding has also been reported in another recent work (de Jorge
et al., 2022). However, why such a simple technique of adding noise to the images is so effective remains not fully clear. Here, we show that NRF2 sheds new light on their success.
Intuitively, the model tends to learn those features that are useful for prediction. Therefore, PGD AT mainly learns RFs because NRFs are not useful under PGD attack. With FGSM AT, the model is encouraged to learn NRF2 because FGSM attack increases its usefulness. Moreover, with our analysis in Section 4, CO can be seen as a dynamic process of learning NRF2. Therefore, the key to preventing CO in FGSM AT lies in decreasing the NRF2 usefulness under FGSM attack. Regarding why adding noise to the image input prevents CO, we establish the following hypothesis.
Conjecture 3: We conjecture that adding noise to the input decreases the usefulness of NRF2 under FGSM attack (indicated by FGSM Acc).
Verification of Conjecture 3. To facilitate the discussion, we divide all types of features into NRF2 and non-NRF2. A CO model has both NRF2 and non-NRF2, while a non-CO model mainly has non-NRF2. We evaluate the performance of a model without or with random noise added to the input and calculate the noise-induced change of Standard Acc and FGSM Acc (Table 4). Note that for FGSM Acc with noise, the noise is added to the input before the FGSM attack, following (Zhang et al., 2022). For the model before CO, the noise has almost the same influence on the change of Standard Acc and FGSM Acc, i.e., ▽SA of −0.70% (Standard Acc change) is close to
▽FGSM of −1.30%. We further conduct the same experiment on a CO model. Before adding noise, the FGSM Acc (94.67%) is higher than its Standard Acc (85.77%), which can be attributed to NRF2 as in Conjecture 1. After adding noise, this trend is reversed (57.37% < 84.55%), suggesting that Phenomenon 2 disappears in this setup. Moreover, ▽FGSM (−37.30%) is much more significant than ▽SA (−1.22%). Such a significant drop of FGSM Acc (▽FGSM) on a CO model (with NRF2) suggests that the NRF2 usefulness under FGSM attack is significantly decreased. Fig. 7 visualizes ▽FGSM and ▽SA for different noise sizes, which shows the same trend as Table 4: ▽FGSM of the CO model is the most significant change among all settings. Therefore, Conjecture 3 is verified, which provides a new understanding of why input noise prevents CO.
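The "noise-then-FGSM" evaluation used above can be sketched as follows, assuming uniform input noise of magnitude `noise_size` added before the single FGSM step; the function is illustrative and omits pixel-range clipping.

```python
import torch
import torch.nn.functional as F

def fgsm_acc_with_input_noise(model, x, y, eps, noise_size):
    """Per-batch FGSM Acc when uniform noise is added to the image *before* the
    attack, as in (Zhang et al., 2022): the FGSM step is taken at the noisy point."""
    x_noisy = x + torch.empty_like(x).uniform_(-noise_size, noise_size)
    delta = torch.zeros_like(x_noisy, requires_grad=True)
    F.cross_entropy(model(x_noisy + delta), y).backward()
    x_adv = x_noisy + eps * delta.grad.sign()
    return model(x_adv).argmax(dim=1).eq(y).float().mean().item()
```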
More discussion on NRF2 explaining earlier attempts at mitigating CO. Even though we mainly apply our NRF2 perspective to understanding the SOTA technique of input noise in recent works (Zhang et al., 2022; de Jorge et al., 2022), it also well justifies earlier successful attempts. For example, the success of random initialization in (Wong et al., 2020) is conceptually similar to adding noise to the input, but the noise magnitude is limited by the allowable perturbation size. (Kim et al., 2020) alleviates CO by limiting the step size, which aligns well with our finding in Table 3. (Li et al., 2020) avoids CO by switching to PGD AT after detecting the occurrence of CO, the success of which is expected since PGD attack can effectively discourage the model from learning NRF2.
6 CONCLUSION
The reason for CO in FGSM AT remains not fully clear despite various attempts to mitigate it. In contrast to prior works that mainly study the PGD Acc drop to understand CO, our work focuses on another intriguing phenomenon: FGSM Acc is higher than Standard Acc. We have found that there exists NRF2, whose usefulness is increased under FGSM attack, and that CO can be seen as a dynamic process of learning such a type of NRF. Our investigation has also provided a new understanding of recent successful attempts to mitigate CO.
A APPENDIX
Experimental setups for NRF categorization in Fig. 2. At Step 1, we follow the settings in (Andriushchenko & Flammarion, 2020) and train M1 on CIFAR10 for 30 epochs with a cyclic learning rate whose maximum value is 0.3. The attack radii for both training at Step 1 and perturbation generation at Step 2 are set to 8/255. Based on the new dataset (X + δ, yrand), M2 is trained for 30 epochs with a constant learning rate of 0.015.
Experimental setups for the finetuning experiment in Section 4.2. The first two steps of the finetuning experiment follow the same settings as in Fig. 2, generating a new dataset (X + δ, yrand). At Step 3, we first follow the settings of PGD AT in (Andriushchenko & Flammarion, 2020) and train a robust Mpgd. Based on the new dataset (X + δ, yrand), M2 is trained by finetuning Mpgd for 30 epochs with a constant learning rate of 0.005. | 1. What is the focus of the paper regarding Non Robust Features (NRF) and Catastrophic Overfitting (CO)?
2. What are the strengths of the proposed approach in understanding CO?
3. What are the weaknesses of the paper regarding the clarity, quality, novelty, and reproducibility of its content?
4. Do you have any concerns or questions about the paper's explanations and experiments? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Based on the finding of Non Robust Features (NRF) in the previous work (Ilyas et al., 2019), this paper tries to explain the phenomenon of Catastrophic Overfitting (CO) in FGSM-based adversarial training. It categorizes NRFs into different types and tries to design experiments to show that the two commonly observed phenomena of CO, i.e., 1) the PGD Acc drops to a value close to zero when CO happens, and 2) FGSM Acc is higher than Standard Acc, are due to a specific type of NRF.
Strengths And Weaknesses
S1. The paper is well-written in general.
S2. The idea of categorizing NRFs to understand CO is new.
Clarity, Quality, Novelty And Reproducibility
W1. The existence of NRF2 is not justified because its relation with DNRF is unclear. I don't even understand why you name DNRF "double" as FGSM is clearly not the same as PGD with only one iteration. The latter considers gradient magnitude.
W2. The proposed explanation lacks justification. Many experiments only reproduce previous results without justifying that the reasons are due to the proposed conjecture. Merely showing gaps in Acc is not sufficient to explain that the CO phenomena happen in the way you thought.
W3. The paper points out some ways to prevent CO, like adding noise. But there is no discussion on why adding noise prevents CO. |
ICLR | Title
Understanding Catastrophic Overfitting in Fast Adversarial Training From a Non-robust Feature Perspective
Abstract
To make adversarial training (AT) computationally efficient, FGSM AT has attracted significant attention. The fast speed, however, is achieved at the cost of catastrophic overfitting (CO), whose reason remains unclear. Prior works mainly study the phenomenon of a significant PGD accuracy (Acc) drop to understand CO while paying less attention to its FGSM Acc. We highlight an intriguing CO phenomenon that FGSM Acc is higher than accuracy on clean samples and attempt to apply non-robust feature (NRF) to understand it. Our investigation of CO by extending the existing NRF into fine-grained categorization suggests: there exists a certain type of NRF whose usefulness is increased after FGSM attack, and CO in FGSM AT can be seen as a dynamic process of learning such NRF. Therefore, the key to preventing CO lies in reducing its usefulness under FGSM AT, which sheds new light on understanding the success of a SOTA technique for mitigating CO.
1 INTRODUCTION
Despite impressive performance, deep neural networks (DNNs) (LeCun et al., 2015; He et al., 2016; Huang et al., 2017; Zhang et al., 2019a; 2021) are widely recognized to be vulnerable to adversarial examples (Szegedy et al., 2013; Biggio et al., 2013; Akhtar & Mian, 2018). Without giving a false sense of robustness against adversarial attacks (Carlini & Wagner, 2017; Athalye et al., 2018; Croce & Hein, 2020), adversarial training (AT) (Madry et al., 2018; Zhang et al., 2019c) has become the de facto standard approach for obtaining an adversarially robust model via solving a min-max problem in two-step manner. Specifically, it first generates adversarial examples by maximizing the loss, then trains the model on the generated adversarial examples by minimizing the loss. PGD-N AT (Madry et al., 2018; Zhang et al., 2019c) is a classical AT method, where N is the iteration steps when generating the adversarial samples in inner maximization. Notably, PGD-N AT is N times slower than its counterpart standard training with clean samples. A straightforward approach to make AT faster is to set N to 1, i.e reducing the attack in the inner maximization from multi-step PGD to single-step FGSM (Goodfellow et al., 2015). For simplicity, PGD-based AT and FGSM-based fast AT are termed PGD AT and FGSM AT, respectively.
FGSM AT often fails with a sudden robustness drop against PGD attack while maintaining its robustness against FGSM attack, which is called catastrophic overfitting (CO) (Wong et al., 2020). With Standard Acc denoting the accuracy on clean samples while FGSM Acc and PGD Acc indicating the accuracy under FGSM and PGD attack, we emphasize that a CO model is characterized by two main phenomena as follows.
• Phenomenon 1: The PGD Acc drops to a value close to zero when CO happens (Wong et al., 2020; Andriushchenko & Flammarion, 2020).
• Phenomenon 2: FGSM Acc is higher than Standard Acc for a CO model (Kim et al., 2020; Andriushchenko & Flammarion, 2020).
Multiple works (Wong et al., 2020; Kim et al., 2020; Andriushchenko & Flammarion, 2020) have focused on understanding CO by explaining the drop of PGD Acc in Phenomenon 1; however, they pay less attention to Phenomenon 2 regarding FGSM Acc. Specifically for Phenomenon 1, FGSM-RS (Wong et al., 2020) attributes it to the lack of perturbation diversity in FGSM AT, which
is refuted by a follow-up GradAlign (Andriushchenko & Flammarion, 2020) by demonstrating a co-occurrence of local non-linearity and the PGD Acc drop. However, these understandings cannot explain why FGSM Acc is higher than Standard Acc for a CO model in Phenomenon 2.
In the context of adversarial learning, numerous works (Goodfellow et al., 2015; Tabacof & Valle, 2016; Tanay & Griffin, 2016; Koh & Liang, 2017; Nakkiran, 2019; Athalye et al., 2018; Zhang et al., 2020) have attempted to explain why adversarial examples exist from different angles, among which non-robust feature (NRF) (Ilyas et al., 2019) is a popular one which also aligns well with all other explanations (Goodfellow et al., 2015; Tabacof & Valle, 2016; Tanay & Griffin, 2016; Koh & Liang, 2017; Nakkiran, 2019; Athalye et al., 2018). Such compatibility suggests that the NRF perspective constitutes an essential tool for understanding adversarial vulnerability, to which CO is also directly related. Specifically, the authors of (Ilyas et al., 2019) define the positive-correlation between features and true labels as feature usefulness (see Section 3.1 for more detailed definitions). Therefore, the adversarial vulnerability of DNNs is attributed to the existence of non-robust features (NRFs), which can be made anti-correlated with the true label under adversary. This understanding of NRFs in (Ilyas et al., 2019) well aligns with the fact that a CO model achieves close to zero robustness against PGD attack, and thus motivates us to believe that the NRF perspective might be an auspicious direction for understanding CO in FGSM AT.
The NRF in (Ilyas et al., 2019) is defined with PGD attack, which is followed in this work; however, we extend their NRF framework by additionally considering FGSM attack for fine-grained categorization. Considering the difference of adversarial attack strength between FGSM and PGD attack, GradAlign (Andriushchenko & Flammarion, 2020) explains Phenomenon 1 by demonstrating how well the attack variant (FGSM or PGD attack) can solve the inner maximization problem in AT. We start our investigation by providing an alternative interpretation of this adversarial strength difference between the two attack variants within the NRF framework (Ilyas et al., 2019), named strength-based NRF categorization. Despite aligning well with Phenomenon 1, We find that this strength-based categorization cannot explain Phenomenon 2 since the usefulness of these NRFs is decreased under FGSM attack and leads to an decrease (instead of increase in Phenomenon 2) of classification accuracy on FGSM adversarial examples than clean samples.
To understand Phenomenon 2 in CO from the NRF perspective, we conjecture that there exists a type of NRF whose usefulness is increased under FGSM attack, thus can lead to a higher FGSM Acc than Standard Acc (Phenomenon 2). In other words, if such type of NRFs (NRF2 in the following categorization) exists, Phenomenon 2 can be justified. Considering whether the usefulness is decreased or increased under FGSM attack, we propose a direction-based NRF categorization where NRF2 (NRF1) leads to the increase (decrease) of classification accuracy under FGSM attack. To prove the existence of NRF2, we follow the procedure of verifying the existence of NRF in (Ilyas et al., 2019). Moreover, we show that NRF2 can cause a significant PGD Acc drop , which also helps justify Phenomenon 1 in CO.
Overall, towards understanding CO in FGSM AT, our contributions are summarized as follows:
• Our work shifts the previous focus on PGD Acc in Phenomenon 1 to FGSM Acc in Phenomenon 2 for understanding CO. Given NRF as a popular perspective on adversarial vulnerability, we are the first to attempt at applying it to explain Phenomenon 2.
• We extend the existing NRF framework under PGD attack (Ilyas et al., 2019) to more fine-grained NRF categorization by FGSM attack. We verify the existence of NRF2 and show that its existence well justifies Phenomenon 2 (as well as Phenomenon 1).
• Very recent works show that adding noise on the image input achieves SOTA performance for FGSM AT. However, their mechanism of such a simple technique preventing CO remains not fully clear, for which our NRF2 perspective shed new light on its success.
2 PROBLEM OVERVIEW AND RELATED WORK
2.1 FGSM AT AND EXPERIMENTAL SETUPS
Let D denote a data distribution with (x, y) pairs and f(·, θ) parameterized by θ denote a deep model. For standard training, the model f(·, θ) is trained on D by minimizing E(x,y)∼D[l(f(x, θ), y)], where l indicates a cross-entropy loss for a typical multi-class classification task. Adversarial training
(AT) (Madry et al., 2018) for obtaining a robust model is formalized as a min-max optimization problem:
argmin θ E(x,y)∼D [ max δ∈S l(f(x+ δ; θ), y) ] , (1)
where S is a perturbation limitation (ϵ with the l∞ constraint in this work). The outer minimization problem in AT is often the same as standard training; however, AT has an unique inner maximization problem that seeks a perturbation inside the S for maximizing the optimization loss. PGD AT and FGSM AT are two typical adversarial training methods with PGD attack and FGSM attack solving the inner maximization problem, respectively.
Experimental setups. Unless specified, we follow the settings in GradAlign (Andriushchenko & Flammarion, 2020) during training and evaluation. The experiments are conducted on CIFAR10 with PreAct ResNet-18, trained for 30 epochs with cyclic learning rates and half-precision training. We adopt SGD optimizer with weight decay 5 × 10−4, and the maximum learning rate is set to 0.2. ℓ∞ attack with perturbation constraint ϵ=8/255 is applied in both training and evaluation. Following (Wong et al., 2020; Andriushchenko & Flammarion, 2020), we calculate the Standard accuracy (Standard Acc) on clean samples, FGSM accuracy (FGSM Acc), and PGD accuracy (PGD Acc) under PGD-50-10 attack (performing PGD-50 attack with ten restarts and step size α = ϵ/4) for evaluation.
2.2 CATASTROPHIC OVERFITTING IN FGSM AT
What are the CO Phenomena? Notably, a model trained only on adversarial examples generated by FGSM attack in FGSM AT still has robustness against PGD attack. In practice, this robustness is only slightly lower than that of much more computationally expensive PGD AT. However, this robustness level can often not be maintained till the end of training as a classical PGD AT. Specifically, as the FGSM AT evolves, the model robustness against PGD attack first increases but then enters a phase where the robustness quickly drops to and stays at zero. Following (Wong et al., 2020), this phase is termed catastrophic overfitting (CO). Another intriguing phenomenon related to CO is that for a model at the phase of CO, it achieves a higher FGSM Acc than Standard Acc (Kim et al., 2020; Andriushchenko & Flammarion, 2020). We term these two phenomena regarding CO as Phenomenon 1 and Phenomenon 2 respectively, as in Section 1.
How to explain the CO phenomena? With the finding that random initialization of perturbation helps alleviate CO (Wong et al., 2020), a tempting explanation suggests that the CO in FGSM AT lies in the lack of perturbation diversity, which has been refuted by (Andriushchenko & Flammarion, 2020). Instead, it attributes the reason for the PGD Acc drop to local non-linearity, which is quantified by the gradient alignment: cos(∇xℓ(x, y; θ),∇xℓ(x+η, y; θ)). The local non-linearity (low gradient alignment) indicates a low linear approximation quality of FGSM perturbations to PGD perturbations. In other words, local non-linearity means that the inner maximization problem in Eq 1 cannot be solved accurately by FGSM. It is demonstrated in (Andriushchenko & Flammarion, 2020) that local linearity decreases significantly when CO happens in FGSM AT. Their perspective is mainly dependent on the co-occurrence between non-linearity and the drop of PGD Acc. In other words, the non-linearity perspective exclusively focuses on explaining Phenomenon 1, for which this work provides an alternative NRF explanation (see Section 3). More importantly, our work fills the gap to explain Phenomenon 2 from a NRF perspective (see Section 4).
How to prevent CO? With the focus on Phenomenon 1, numerous works have attempted to prevent CO. Fast AT (Wong et al., 2020) is the first to show FGSM AT can achieve comparable robustness as PGD AT of “free" variants (Shafahi et al., 2019; Zhang et al., 2019b). A follow-up work (Andriushchenko & Flammarion, 2020) shows that CO still occurs in (Wong et al., 2020) when the step size increases and introduces a regularization loss (GradAlign) for maximizing local linearity to avoid CO. Other successful attempts for avoiding CO include adaptive perturbation size (Kim et al., 2020), dynamic dropout scheduling (Vivek & Babu, 2020) and detection-based alternating strategy (Li et al., 2020). Intriguingly, very recent works (Zhang et al., 2022; de Jorge et al., 2022) have shown that adding noise on the image input is sufficient for preventing collapse and achieves SOTA performance. However, the reason for its success remains not fully clear, for which our NRF perspective with direction-based categorization provides an explanation (see Section 5).
3 NON-ROBUST FEATURE PERSPECTIVE ON ADVERSARIAL TRAINING
Before investigating CO from the NRF perspective, we first revisit the definition and methodology of robust and non-robust features defined in (Ilyas et al., 2019) (Fig. 1(a)). Considering the difference of attack strength between FGSM attack and PGD attack, we extend the non-robust features defined in (Ilyas et al., 2019) to a fine-grained categorization under FGSM attack (strength-based categorization in Fig. 1(b)) and discuss its relationship with CO phenomena.
3.1 BACKGROUND ON FEATURE USEFULNESS AND ROBUSTNESS
Here, we revisit the definitions and methodology of DNN features introduced in (Ilyas et al., 2019). According to (Ilyas et al., 2019), a feature is defined as a function mapping from the input space X to real numbers, i.e f : X → IR, where IR can be the label space in classification task. Therefore, a DNN classifier can be perceived as a function utilizing a set of useful features for label prediction (Ilyas et al., 2019), where useful features in (Ilyas et al., 2019) are characterized by their positive correlation with true label, defined as:
• ρ-useful features: A feature f is ρ-useful (ρ > 0) if it is correlated with the true label in expectation, shown as follows:
IE(x,y)∼D[y · f(x)] ≥ ρ. (2)
To understand adversarial vulnerability, (Ilyas et al., 2019) further proposes to dichotomize the above useful features into robust features (RFs) and non-robust features (NRFs), defined as follows:
• Robust feature (RFs): a useful feature f is robust if there exists a γ > 0 for it to be γ-robustly useful under some specified set of valid perturbations ∆, shown as follows:
IE(x,y)∼D[ inf δ∈∆(x)
y · f(x+ δ)] ≥ γ. (3)
• Non-robust feature (NRFs): a useful feature f is non-robust if γ > 0 does not exist.
Adversarial vulnerability can be attributed to the existence of NRFs (Ilyas et al., 2019). As discussed in (Ilyas et al., 2019), adversarial vulnerability is caused by the presence of NRFs which are useful and predictive. According to (Ilyas et al., 2019), “in the presence of an adversary, any useful but non-robust features can be made anti-correlated with the true label, leading to adversarial vulnerability" (Ilyas et al., 2019). Therefore, adversarial training obtains a robust model by discouraging from learning NRFs. In practice, finding a worst-case perturbation under a certain budget for Eq. 3is not feasible since it is often an NP-hard problem (Katz et al., 2017; Weng et al., 2018), and thus (Ilyas et al., 2019) uses multi-step PGD attack to approximate such a worst-case solution when investigating NRFs.
Fig. 1 (a) summarizes the feature definition in (Ilyas et al., 2019). Specifically, the plus sign (+) indicates the useful features which has positive correlation with true labels, while the minus sign (−) indicates anti-correlated features under PGD attack.
Verifying the existence of NRFs (Ilyas et al., 2019). The procedure verifying the existence of NRFs in (Ilyas et al., 2019) is summarized in Fig. 2(a) by three steps. At Step 1, it trains a model M1 with standard training on the original training set (Xtrain, y), where Xtrain and y indicate the training sample and its corresponding true label, respectively. At Step 2, it first randomly picks a random label yrand for each training sample to ensure that the training set Xtrain has no features with a positive correlation with the random label yrand. After that, perturbation δ is generated by PGD attack on M1 by making sample prediction f(x+ δ) close to yrand. This step aims to generate a perturbation δ
which includes NRFs related to yrand. At Step 3, model M2 is trained on the new dataset (Xtrain+δ, yrand) generated at Step 2, and then evaluated on the original test dataset with true labels (Xtest, y). According to (Ilyas et al., 2019), the perturbation δ is the only connection between Xtrain + δ and yrand since there is no positive correlation between Xtrain and yrand. Therefore, if model M2 achieves higher accuracy than random prediction (e.g 10% for CIFAR10) on the original test dataset (Xtest, y) with true labels, the existence of NRFs in δ is verified. We re-implement this experiment in (Ilyas et al., 2019), and M2 achieves a accuracy of 48.16% (with five independent runs), as shown in Table 1, which verifies the existence of NRFs as in (Ilyas et al., 2019).
3.2 STRENGTH-BASED NRF CATEGORIZATION
It is widely known that PGD attack is stronger than FGSM attack, which is supported by the finding that FGSM Acc is higher than PGD Acc under the same l∞ perturbation budget (Madry et al., 2018). Thus, PGD Acc is often adopted as a common metric to evaluate the model robustness. FGSM AT is faster than PGD AT but at the cost of a mildly lower PGD Acc (than PGD AT) even when CO does not happen in FGSM AT. When CO occurs, the PGD Acc drops to a value close to zero (Phenomenon 1). Since the difference between PGD AT and FGSM AT lies in the attack variant, GradAlign (Andriushchenko & Flammarion, 2020) explains their difference based on how well the adopted attack can solve the inner maximization problem. Specifically, FGSM AT yields lower robustness because FGSM attack cannot solve the problem as accurately as PGD attack because PGD attack is stronger than its FGSM counterpart. The following discussion provides an alternative interpretation of the attack strength-based explanation in (Andriushchenko & Flammarion, 2020) from the NRF perspective.
Intuitive categorization. Considering the attack strength difference, the NRFs can be divided into two types, as shown in Figure 1(b). The first type of NRFs is named as double non-robust feature (DNRF) since it can be made anti-correlated with the true labels by both FGSM and PGD attack. The existence of DNRF explains why FGSM AT yields a more robust model than standard training against PGD attack during evaluation. By contrast, the other type of NRFs is called single non-robust feature (SNRF) since it is made anti-correlated with true labels by PGD attack but is still positive-correlated with true labels under FGSM attack.
Experimental verification of DNRF. This setup follows the procedure in (Ilyas et al., 2019) (Fig. 2(a)) with a small modification. By definition, a DNRF can be made anti-correlated with the true labels under both the PGD and the FGSM attack. Therefore, the existence of DNRF ensures that the test accuracy will
also be higher than random guessing (10% for CIFAR10) if we replace the PGD attack at Step 2 with the FGSM attack, as shown in Fig. 2(b). This is confirmed by an accuracy of 20.01% on the original test set (see Table 1).
On SNRF and its relationship with CO phenomena. It is challenging to directly verify the existence of SNRF. The phenomenon that 20.01% (FGSM attack) is lower than 48.61% (PGD attack) in Table 1 can be seen as indirect evidence for the existence of SNRFs, which can be extracted by the PGD attack but not by the FGSM attack. Even though direct empirical verification of SNRF is challenging, its theoretical existence is straightforward as long as the FGSM attack is weaker than the PGD attack. Moreover, the weaker the FGSM attack (compared with the PGD attack), the more SNRF. FGSM AT cannot discourage the model from learning SNRF as effectively as PGD AT does, and thus we can attribute the lower PGD Acc of FGSM AT (compared with PGD AT) to the existence of SNRF. However, SNRF under the strength-based NRF categorization might (at most) partly explain CO Phenomenon 1 but cannot justify CO Phenomenon 2. The reason is that the model might have a very low PGD Acc, but FGSM Acc cannot be higher than Standard Acc even in the extreme case where all the NRFs become SNRF due to a very weak FGSM attack. The following section introduces a new NRF categorization to better explain CO phenomena, especially Phenomenon 2.
4 DIRECTION-BASED NRF CATEGORIZATION FOR UNDERSTANDING CO PHENOMENA
Direction-based NRF categorization. Similar to the above strength-based NRF categorization, the categorization here considers the FGSM attack but differs in a key aspect: whether the usefulness of a given NRF is decreased or increased under FGSM attack. We call this NRF categorization direction-based, and define it as follows:
• NRF1: NRF1 is a type of NRF whose usefulness is decreased after the FGSM attack, and thus can be exploited by the FGSM attack to decrease the classification accuracy.
• NRF2: NRF2 is a type of NRF whose usefulness is increased after the FGSM attack, and thus can be exploited by the FGSM attack to increase the classification accuracy.
NRF1 and NRF2 still follow the definition of NRF regarding PGD attack. In other words, the usefulness of both NRF1 and NRF2 is decreased after PGD attack. The change of their usefulness after attacks is summarized in Fig. 3, where the increase and decrease of feature usefulness are denoted by the ↑ and ↓, respectively.
4.1 ON NRF2 EXISTENCE AND ITS EXPLANATION FOR PHENOMENON 2
When we discuss DNRF and SNRF in Section 3.2, by default, we assume that their usefulness is decreased after FGSM attack, and thus they can be seen as NRF1. In other words, the existence of NRF1 is straightforward; however, it is unclear whether NRF2 actually exists.
Conjecture 1: We conjecture that there exists NRF2, and the FGSM attack in AT encourages the model to learn NRF2.
Differences between verifying NRF1 and NRF2. The experimental procedure for verifying NRF2 is shown in Fig. 2(c). The key reason why the procedures in Fig. 2 can verify the existence of certain NRFs is that the generated perturbation δ is the only connection between Xtrain + δ and yrand, and it should include certain NRFs related to yrand. In other words, the usefulness of certain NRFs should be increased after the attack at Step 2 of Fig. 2. For the FGSM attack, f(x + δ) is optimized to be far from the true label y by maximizing the loss l(f(x + δ), y), and the usefulness of NRF1 and NRF2 is decreased and increased by definition, respectively (see Fig. 3). Therefore, to increase the usefulness of NRF1 at Step 2, the optimization goal should be close to yrand, as shown in Fig. 2(b). By contrast, to verify the existence of NRF2, the optimization goal at Step 2 should follow that of the FGSM attack, i.e., far from yrand, as shown in Fig. 2(c), which increases the usefulness of NRF2.
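As an illustrative contrast (assuming a PyTorch classifier `model`; the function name and step size are our own placeholders), the NRF2 procedure differs from the NRF1 procedure sketched earlier only in the sign of the optimization objective at Step 2:

```python
import torch
import torch.nn.functional as F

def perturb_for_nrf2(model, x, y_rand, eps=8/255):
    """Step 2 of Fig. 2(c): a single FGSM-style step that moves the prediction
    *away from* y_rand (loss ascent), the opposite of the NRF1 procedure, which
    descends the loss toward y_rand."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss = F.cross_entropy(model(x + delta), y_rand)
    grad = torch.autograd.grad(loss, delta)[0]
    return (eps * grad.sign()).detach()   # ascent direction, within the l_inf budget
```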
Verification of Conjecture 1. As discussed above, verifying the existence of NRF2 requires an optimization goal opposite to that of NRF1 at Step 2 (see Fig. 2(c)). For the model M1 at Step 1, we adopt FGSM AT, with the results reported in Table 2. When M1 at Step 1 is set to a CO model obtained with FGSM AT, our model M2 evaluated on the original test set achieves an accuracy of around
17.52% ± 1.59% (over five independent runs), which verifies the existence of NRF2 since it is higher than 10% (random prediction for CIFAR10). Given that the usefulness of NRF2 is increased after FGSM attack, the existence of NRF2 in a model after CO justifies why FGSM Acc can be higher than Standard Acc (CO Phenomenon 2). Moreover, when we instead set M1 at Step 1 to a model obtained with standard training, the Test Acc of M2 is close to random prediction, suggesting that it is the FGSM attack in AT that encourages the model to learn NRF2.
Does NRF2 exist in a model before CO in FGSM AT? It is interesting to investigate whether NRF2 exists in the FGSM AT model before CO. To this end, we set M1 at Step 1 of Fig. 2(c) to an FGSM AT model saved before CO, which yields an M2 with an accuracy close to random prediction (see FGSM AT (Before CO) in Table 2). This indicates that NRF2 mainly exists in the model after CO in FGSM AT, which further confirms the relationship between CO and NRF2.
Can NRF2 be exploited by FGSM attack to decrease FGSM accuracy? (Kim et al., 2020) reports that FGSM Acc is higher than Standard Acc when CO happens in FGSM AT. Here, we show that this is not always the case if we evaluate the FGSM Acc of the CO model with different step sizes, as shown in Table 3. Note that the result with a step size of zero indicates the Standard Acc. We find that FGSM Acc is higher (lower) than Standard Acc when the step size is relatively large (small). The results suggest that NRF2 can still be exploited by an FGSM attack with a smaller step size to become anti-correlated with the true label. In other words, the step size plays a non-trivial role in whether the FGSM attack can exploit NRF2. The above results help explain why CO only occurs when the step size is set to a relatively large value (Wong et al., 2020).
Table 3: Evaluate FGSM Acc of CO model under different step sizes.
step size (/255) 0 1 2 3 4 5 6 7 8
FGSM Acc (%) 85.77 37.27 41.57 66.78 86.93 95.16 96.94 96.63 94.72
A dynamic view on CO from the NRF2 perspective. Prior works analyzing CO mainly focus on Phenomenon 1 about low PGD Acc, which seems to be a pseudo-static state since the PGD Acc stays at zero after CO. Here, we investigate the CO model further by analyzing FGSM Acc at different epochs, as shown in Fig. 4. Fig. 4 shows that the FGSM Acc under large step sizes consistently gets higher with more training epochs, suggesting the model continues to rely more on NRF2. In other words, CO can be perceived as a dynamic process of learning NRF2, which does not stop after the drop of PGD Acc. This is reasonable because NRF2 can be very useful features under FGSM attack.
4.2 CAN NRF2 ALSO JUSTIFY PHENOMENON 1?
The above analysis verifies the existence of NRF2 in a CO model, which well justifies the improved accuracy after FGSM attack (Phenomenon 2). Here, we discuss whether it can be used to justify Phenomenon 1. Regarding the relationship between NRF2 and PGD Acc, we formulate the following conjecture.
Conjecture 2: We conjecture that NRF2 can be a cause of a significant PGD Acc drop.
Verification of Conjecture 2. To verify Conjecture 2, we finetune a robust model on a training dataset with and without such NRF2, respectively, and evaluate the PGD Acc on the original test set with true labels (Xtest, y). To minimize the influence of other NRF types, we adopt a model pretrained by PGD AT, which mainly has RFs, for the finetuning experiment. Specifically, we adopt the generated new training dataset (X + δ, yrand) at Step 2 of Fig. 2(c) as the one with NRF2. For the counterpart dataset without NRF2, we remove the added perturbation δ, and thus (X , yrand) is
used for training. The basic loss is set to cross-entropy (CE) to encourage learning the features, if any, in the generated dataset. However, the accuracy would quickly reduce to zero due to the random choice of yrand. Thus, a KL loss, which encourages the output of the finetuned model to stay close to that of the pretrained model, is added on top of the CE loss so that the model maintains the original RFs during finetuning. The total loss is shown as follows:
\mathcal{L}_{\text{finetune}} = \text{CE}\big(f(x+\delta;\theta),\, y_{\text{rand}}\big) + \lambda \cdot \text{KL}\big(f(x+\delta;\theta_{\text{pretrain}}),\, f(x+\delta;\theta)\big) \quad (4)
where f(x; θpretrain) and f(x; θ) indicate the pretrained PGD model and the finetuned model, respectively. A detailed setup for this experiment is reported in the Appendix. The results with λ set to 5 are shown in Fig. 5. We observe that the PGD Acc can be maintained at around 25% after 30 epochs of finetuning on the dataset (X, yrand), which contains no features predictive of yrand. By contrast, under the same setting, the PGD Acc quickly decreases to a value close to zero for the generated dataset (X + δ, yrand), which contains NRF2. The contrasting results verify the claim in Conjecture 2.
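A minimal sketch of this finetuning objective (Equation 4) is given below; it assumes PyTorch and a frozen copy of the PGD-pretrained model, and the names, the KL ordering convention, and the default λ are our own illustrative choices rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def finetune_loss(model, pretrained_model, x_pert, y_rand, lam=5.0):
    """Equation (4): CE toward the random labels plus a KL term that keeps the
    finetuned outputs close to those of the PGD-pretrained model."""
    logits = model(x_pert)
    with torch.no_grad():
        ref_logits = pretrained_model(x_pert)        # theta_pretrain stays frozen
    ce = F.cross_entropy(logits, y_rand)
    kl = F.kl_div(F.log_softmax(logits, dim=1),       # KL between the two output
                  F.softmax(ref_logits, dim=1),       # distributions (one ordering)
                  reduction="batchmean")
    return ce + lam * kl
```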
Additional results with other λ values in Equation 4 are reported in Fig. 6. As λ gets larger, the model finetuned on (X, yrand) maintains more RFs learned in the pretrained weights θpretrain, leading to an increase in accuracy. However, the PGD Acc of the model finetuned on (X + δ, yrand) (with NRF2) is zero for a wide range of λ values, which is much lower than that of the model finetuned on (X, yrand) (without NRF2). This further verifies the claim in Conjecture 2. Interestingly, the result in Fig. 6 can also be viewed as another proof for Conjecture 1. The Standard Acc, evaluated on the original test set (Xtest, y), is higher for the model finetuned on (X + δ, yrand) than for its counterpart finetuned on (X, yrand) for all λ in Fig. 6(b). This finding aligns with Conjecture 1 that there exists a type of NRF that can be encouraged under FGSM attack.
Discussion on the drop speed of PGD Acc from the NRF2 perspective. As demonstrated in Section 4.1, CO can be perceived as a dynamic process of learning NRF2. With this insight, learning NRF2 in FGSM AT does not occur suddenly, which is supported by the finding that FGSM Acc still increases even after CO happens. If this is the case, how can we justify the sudden drop of PGD Acc within one epoch? At first sight, it seems that NRF2 can only explain the PGD Acc drop but not its drop speed. However, we argue that the sudden drop of PGD Acc is due to the worst-case property of PGD attack. Note that PGD attack seeks the most effective adversarial perturbation with multiple iterations to fool the model by exploiting the most vulnerable features in the model. In other words, the model is already vulnerable to PGD attack even if it only learns a small amount of NRF2 (one epoch regarding CO, for instance). After the PGD Acc drops to zero, the model continues to learn more NRF2, leading to a higher FGSM Acc.
5 NRF2 HELPS EXPLAIN HOW SOTA METHODS PREVENT CO
A recent work (Zhang et al., 2022) outperforms prior methods in FGSM AT by a large margin without additional computation overhead. Specifically, it shows that adding noise to the input (instead of initializing the adversarial perturbation with noise as in (Wong et al., 2020)) is critical for its success (Zhang et al., 2022). A similar finding has also been reported in another recent work (de Jorge
et al., 2022). However, why such a simple technique of adding noise to the images is so effective remains not fully understood. Here, we show that NRF2 sheds new light on their success.
Intuitively, the model tends to learn those features that are useful for prediction. Therefore, PGD AT mainly learns RFs because NRFs are not useful under PGD attack. With FGSM AT, the model is encouraged to learn NRF2 because FGSM attack increases its usefulness. Moreover, with our analysis in Section 4, CO can be seen as a dynamic process of learning NRF2. Therefore, the key to preventing CO in FGSM AT lies in decreasing the NRF2 usefulness under FGSM attack. Regarding why adding noise to the image input prevents CO, we establish the following hypothesis.
Conjecture 3: We conjecture that adding noise to the input decreases the usefulness of NRF2 under FGSM attack (indicated by FGSM Acc).
Verification of Conjecture 3. To facilitate the discussion, we divide all types of features into NRF2 and non-NRF2. A CO model has both NRF2 and non-NRF2, while a non-CO model mainly has non-NRF2. We evaluate the performance of a model with and without random noise added to the input and calculate the noise-induced change in Standard Acc and FGSM Acc (Table 4). Note that for FGSM Acc with noise, the noise is added to the input before the FGSM attack, following (Zhang et al., 2022). For the model before CO, the noise has almost the same influence on Standard Acc and FGSM Acc, i.e., ▽SA of −0.70% (Standard Acc change) is close to
▽FGSM of −1.30%. We further conduct the same experiment on a CO model. Before adding noise, the FGSM Acc (94.67%) is higher than the Standard Acc (85.77%), which can be attributed to NRF2 as in Conjecture 1. After adding noise, this trend is reversed (57.37% < 84.55%), suggesting that Phenomenon 2 disappears in this setup. Moreover, ▽FGSM (−37.30%) is much more significant than ▽SA (−1.22%). Such a significant drop of FGSM Acc (▽FGSM) on a CO model (with NRF2) suggests that the NRF2 usefulness under FGSM attack is significantly decreased. Fig. 7 visualizes ▽FGSM and ▽SA for different noise sizes, which shows the same trend as Table 4: ▽FGSM of the CO model is the most significant change among all settings. Therefore, Conjecture 3 is verified, which provides a new understanding of why input noise prevents CO.
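For reference, the sketch below illustrates the input-noise variant of the FGSM step discussed above (noise added to the image before the single-step perturbation is computed). It is only a schematic PyTorch rendering under our own naming and hyperparameter assumptions, not the exact implementation of (Zhang et al., 2022) or (de Jorge et al., 2022).

```python
import torch
import torch.nn.functional as F

def fgsm_step_with_input_noise(model, x, y, eps=8/255, noise_scale=8/255):
    """One inner step of FGSM AT where uniform noise is added to the *input*
    before the single-step FGSM perturbation is computed."""
    noise = torch.empty_like(x).uniform_(-noise_scale, noise_scale)
    x_noisy = (x + noise).clamp(0, 1)

    x_noisy.requires_grad_(True)
    loss = F.cross_entropy(model(x_noisy), y)
    grad = torch.autograd.grad(loss, x_noisy)[0]

    x_adv = (x_noisy + eps * grad.sign()).clamp(0, 1)   # FGSM on the noisy image
    return x_adv.detach()                                # used as the AT training input
```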
More discussion on NRF2 explaining earlier attempts of mitigating CO. Even though we mainly apply our NRF2 to understanding the SOTA technique of input noise in recent works (Zhang et al., 2022; de Jorge et al., 2022), it also well justifies earlier successful attempts. For example, the success of random initialization in (Wong et al., 2020) is conceptually similar to adding the noise on the input but the noise magnitude is limited by the allowable perturbation size. (Kim et al., 2020) alleviates CO by limiting the step size, which aligns well with our finding in Table 3. (Li et al., 2020) avoids CO by switching to PGD AT after detecting the occurrence of CO, the success of which is expected since PGD attack can effectively discourage the model from learning NRF2.
6 CONCLUSION
The reason for CO in FGSM AT remains not fully clear despite various attempts to mitigate it. In contrast to prior works that mainly study the PGD Acc drop to understand CO, our work focuses on another intriguing phenomenon: FGSM Acc is higher than Standard Acc after CO. We have found that there exists NRF2, whose usefulness is increased under FGSM attack, and that CO can be seen as a dynamic process of learning such a type of NRF. Our investigation has also provided a new understanding of successful attempts to mitigate CO in recent works.
A APPENDIX
Experimental setups for NRF categorization in Fig. 2. At Step 1, we follow the settings in (Andriushchenko & Flammarion, 2020) and train M1 on CIFAR10 for 30 epochs with a cyclic learning rate schedule (maximum learning rate 0.3). The attack radius for both training at Step 1 and perturbation generation at Step 2 is set to 8/255. Based on the new dataset (X + δ, yrand), M2 is trained for 30 epochs with a constant learning rate of 0.015.
Experimental setups for the finetuning experiment in Section 4.2. The first two steps of the finetuning experiment follow the same settings as in Fig. 2, generating a new dataset (X + δ, yrand). At Step 3, we first follow the settings of PGD AT in (Andriushchenko & Flammarion, 2020) and train a robust Mpgd. Based on the new dataset (X + δ, yrand), M2 is trained by finetuning Mpgd for 30 epochs with a constant learning rate of 0.005. | 1. What is the focus of the paper regarding non-robust features and counterfactual explanations?
2. What are the strengths of the proposed approach, particularly in explaining phenomena in counterfactual attacks?
3. What are the weaknesses of the paper, especially regarding the verification of non-robust features?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a more fine-grained (direction-based) classification of non-robust features (NRF) to explain two phenomena of CO: 1) the PGD accuracy drops to a value close to zero when CO happens; 2) FGSM accuracy is higher than standard accuracy for a CO model. The authors argue that there exists a type of NRF whose usefulness increases after FGSM attack. Based on this, an explanation is provided for a simple technique for mitigating CO.
Strengths And Weaknesses
+ Highlights Phenomenon 2, which has been overlooked before: FGSM accuracy is higher than standard accuracy for a CO model.
+ Extends the perspective of non-robust features to FGSM AT to explain CO.
-The core of this paper is the direction-based categorization of NRF, say, NRF1 and NRF2. But the verification of NRF2 is not convincing. Table 2 shows that NRF2 exists after CO, and not before CO, which is obviously contradictory. As far as I know, non-robust features are inherent properties of the data (derived from patterns in the data distribution [1]) that do not change with the training process. For example, the semantic features are RF and the background is NRF, and we cannot say that there is no NRF in the data after PGD AT training. Based on this point, the subsequent opinions and analysis seem to be self-justifying and not convincing.
[1] Ilyas, Andrew, et al. "Adversarial examples are not bugs, they are features." Advances in neural information processing systems 32 (2019)
Clarity, Quality, Novelty And Reproducibility
The introduction and related work are well written, and the idea of investigating CO from the perspective of NRF is novel.
ICLR | Title
Unbiased Representation of Electronic Health Records for Patient Outcome Prediction
Abstract
Fairness is one of the newly emerging focuses for building trustworthy artificial intelligence (AI) models. One of the reasons resulting in an unfair model is the algorithm bias towards different groups of samples. A biased model may benefit certain groups but disfavor others. As a result, leaving the fairness problem unresolved might have a significant negative impact, especially in the context of healthcare applications. Integrating both domain-specific and domain-invariant representations, we propose a masked triple attention transformer encoder (MTATE) to learn unbiased and fair data representations of different subpopulations. Specifically, MTATE includes multiple domain classifiers and uses three attention mechanisms to effectively learn the representations of diverse subpopulations. In the experiment on real-world healthcare data, MTATE performed the best among the compared models regarding overall performance and fairness.
1 INTRODUCTION
Electronic Health Record (EHR) based clinical risk prediction using temporal machine learning (ML) and deep learning (DL) models helps clinicians provide precise and timely interventions to high-risk patients and better allocate hospital resources (Xiao et al., 2018; Shamout et al., 2020). Nevertheless, a long-standing issue that hinders ML and DL model deployment is the concern about model fairness (Gianfrancesco et al., 2018; Ahmad et al., 2020). Fairness in AI/DL refers to a model’s ability to make a prediction or decision without any bias against any individual or group (Mehrabi et al., 2021). A biased model typically manifests in two ways: it performs significantly better in certain populations than in others (Parikh et al., 2019), and it makes inequitable decisions towards different groups (Panch et al., 2019). Clinical decision-making based upon biased predictions may cause delayed treatment plans for patients in minority groups or misspend healthcare resources where treatment is unnecessary (Gerke et al., 2020).
The data distribution shift problem across different domains is one of the major reasons a model could be biased (Adragna et al., 2020). To address the fairness issue, domain adaptation methods have been developed. The main idea is to learn invariant hidden features across different domains, such that a model would perform similarly no matter to which domain the test cases belong. Pioneer domain adaptation models, including DANN (Ganin et al., 2016), VARADA (Purushotham et al., 2017), and VRNN (Chung et al., 2015), learn invariant hidden features by adding a domain classifier and using a gradient reversal layer to maximize the domain classifier’s loss. In return, the learned hidden features become indistinguishable across domains. Recent work MS-ADS (Khoshnevisan & Chi, 2021) has shown robust performance across minority racial groups by maximizing the distance between the globally-shared representations and the individual local representations of every domain, which effectively consolidates the invariant globally-shared representations across domains. However, it is difficult to align large domain shifts and to model complex domain shifts across multiple overlapping domains.
Alternatively, the data distribution shift problem could be addressed using domain-specific bias correction approaches. A recent study showed that features strongly associated with the outcome of interest could be subpopulation-specific (Chouldechova & Roth, 2018). It indicates that lumping together all features from patients with different backgrounds might bury unique domain-specific
characteristics. Afrose et al. proposed to use double prioritized bias correction to train multiple candidate models for different demographic groups (Afrose et al., 2022). Similarly, AC-TPC Lee & Van Der Schaar (2020) and CAMELOT Aguiar et al. (2022) used clustering algorithms to generate representations of patients with similar backgrounds and use cluster-specific representations for the outcome prediction.
In summary, both domain adaptation and domain-specific bias correction approaches address the same fairness issue with different assumptions about the relationship between the latent representation and the prediction outcome. The former holds that performance variation across domains benefits from invariant feature representations, while the latter favors domain-specific representations. It remains unclear whether domain-invariant or domain-specific data representations should be used for a given prediction task.
To better address the fairness issue, we propose an adaptive multi-task learning algorithm, called MTATE (i.e. Masked Triple Attention Transformer Encoder), to automatically learn and select the optimal and fair data representations instead of explicitly choosing domain adaptation or domainspecific bias correction. Under this setting, both invariant and domain-specific representations are special cases where one of the approaches dominates the data representation. The purpose of MTATE is to generate multiple masked representations of the same data that are attended by both time-wise attention and multiple feature-wise attentions in parallel, where each masked representation corresponds to a specific domain classification task. For example, one of the domain classifiers breaks the patient cohort into subpopulations defined by race, and another classifier is focused on gender. The learned EHR representations could be domain-specific, domain-invariant, or the mix of the two reflected by the domain classification loss values. A low loss value indicates the representation is domain-specific, and a high loss value indicates domain-invariant. The model will compute the representation-wise attention for each individual testing case, leading to personalized data representation for downstream predictive tasks. The overall framework of MTATE is shown in Figure 1. The primary goal of MTATE is to learn an unbiased representation to make fair and precise patient outcome predictions in a real-world healthcare setting.
To demonstrate the effectiveness of MTATE, we focus on rolling mortality predictions for patients with Acute Kidney Injury requiring Dialysis (AKI-D), a severe complication for critically ill patients, with a high in-hospital mortality rate (Lee & Son, 2020). The clinical risk classification for AKI-D patients is challenging due to complex subphenotypes and treatment exposures (Neyra & Nadkarni, 2021; Vaara et al., 2022). There is an urgent need to develop actionable approaches to account for patients’ backgrounds and subpopulations for personalized medicine and improve patients-centered outcomes (Chang et al., 2022).
The contributions of this work are three-fold: 1) To the best of our knowledge, MTATE is the first model that integrates both domain-specific and domain-invariant features in one model. It trains the fair representation and predicts the downstream task altogether, which is critical for reliable clinical outcome predictions for patients with different demographics and clinical backgrounds; 2) MTATE employs time-wise, feature-wise, and representation-wise attention mechanisms to compose data representations for downstream prediction tasks dynamically; and 3) MTATE effectively mitigated the bias towards different subpopulations in the rolling mortality prediction tasks on AKI-D patients and achieved the best performance compared to baselines.
2 METHOD
MTATE consists of five components, and the detailed architecture is shown in Figure 2. The first component is a temporal-relevance attention (TR-Attention) module to generate time-wise attention that associates each time step with the other time steps considering all features. The output is a time-attended representation. The second component is a domain-specific feature-relevance attention module to generate feature-wise attentions that associate each feature with the other features considering all the time steps. The outputs are multiple feature-attended representations, one for each domain. The third component is a module consisting of multiple domain classifiers, where each classifier classifies each feature-attended representation into a predefined domain. The fourth module is a unified data representation module, which uses the representation-wise attention to aggregate feature-attended representations (domain-invariant or domain-specific) into a final representation. The last module is an outcome prediction module where the final representation is used for patient outcome prediction.
2.1 NOTATIONS
A patient’s EHR data can be represented as X = {x1,x2, ...,xt, ...,xNt}, X ∈ RNt×Nf , where Nf is the number of features and Nt is the number of time steps. xt ∈ R1×Nf represents a vector of clinical parameters (e.g., heart rate, blood pressure, etc.) at time step t. We consider a binary outcome and domain classification problem in this study. The patient domain class labels are denoted as dy ∈ RNd , where Nd represents the total number of domains, dyi ∈ {0, 1} represents the label for the i-th domain, 1 and 0 represent whether a given patient falls in the target domain or not, respectively. The patient outcome label is denoted as y ∈ {0, 1}, where 1 and 0 represent death and alive before hospital discharge.
2.2 TEMPORAL RELEVANCE ATTENTION
The temporal-relevance attention (TR-Attention) module aims to learn the relationships between each time step to other time steps considering all features at each time step. We first use the position
encoding from the original Transformer to encode the relative position information to the input X and use the multi-head attention mechanism to learn the temporal-relevance attention.
Specifically, the query, key, and value vectors (Q, K, V) are linear projections of all features at every time step of X. Thus, the attention weights computed from the query and key represent how strongly all features at one time step are associated with themselves at other time steps. Then, the output of each head Zh is the multiplication of the value vectors and the time-wise attention ATR. The final output of the TR-Attention module Z ∈ RNt×Nf is a linear transformation of the concatenation of the outputs of all heads. Lastly, a residual connection and layer normalization are applied to Z, denoted as Z = LayerNorm(Z + X). The temporal-relevance attention of each head is computed as:
Q, K, V = XW^Q, XW^K, XW^V \quad (1)
A^{TR} = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right) \quad (2)
Z^h = A^{TR} V \quad (3)
Z = \mathrm{Concat}(Z^h_1, \ldots, Z^h_i, \ldots, Z^h_{N_h}) W^O \quad (4)
For simplicity, we assume all projection matrices WQ,WK,WV have the same dimension dk. Thus, WQ,WK,WV ∈ RNf×dk , Q,K,V ∈ RNt×dk , the temporal relevance attention is ATR ∈ RNt×Nt , the output of each head is Zh ∈ RNt×dk . The projection matrix for the final output is WO ∈ R(Nhdk)×Nf , where Nh represents the number of heads.
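For illustration, a compact PyTorch rendering of this time-wise attention block is sketched below. It uses the built-in nn.MultiheadAttention rather than the authors' exact implementation, and the class name, head count, and return values are our own assumptions.

```python
import torch
import torch.nn as nn

class TRAttention(nn.Module):
    """Illustrative time-wise self-attention over an (Nt x Nf) input, following
    Equations (1)-(4): each time step attends to all other time steps."""
    def __init__(self, n_features, n_heads=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=n_features, num_heads=n_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(n_features)

    def forward(self, x):               # x: (batch, Nt, Nf), already position-encoded
        z, a_tr = self.attn(x, x, x)    # a_tr is the (Nt x Nt) temporal attention
        return self.norm(z + x), a_tr   # residual connection + layer normalization
```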
2.3 DOMAIN-SPECIFIC FEATURE RELEVANCE ATTENTION
The domain-specific feature relevance attention module aims to learn each domain’s diverse and unique latent representation. The module includes a set of parallel sub-modules called feature-relevance attention (FR-Attention), where each FR-Attention module focuses on the representation of one specific domain. Since features are not equally important for different domains, we randomly mask out a number of latent features along all time steps, differently for each FR-Attention module. In return, the masking procedure forces each sub-module to learn a different feature focus for each domain and to generate unique domain representations. Feature-wise attention is then computed using multi-head attention, similar to the TR-Attention module. The FR-Attention module learns the relationships between each latent feature and the others considering all time steps.
The input of each FR-Attention sub-module is ZT ∈ RNf×Nt , which is the transposed output of the TR-Attention module Z. Then, ZT is passed through a masking layer, in which MR×Nf number of latent features are randomly selected and removed from ZT , where MR represents the masking rate. We denote the masked input of each sub-module as M ∈ RNl×Nt , where Nl represents the number of features after masking. M is passed through the multi-head attention block as well as the residual connection and layer normalization. Finally, M is transposed back to the original form. Then it is passed through a point-wise feed-forward network (FFN) as well as the residual connection and layer normalization to get the final output, denoted as Z′ ∈ RNt×Nl . Please see the detailed architecture and formula for FR-Attention in Appendix Section A.1.
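The masking step can be sketched as follows; this is only an illustrative rendering under our own naming assumptions (a batch-first tensor layout and a default masking rate), not the authors' code.

```python
import torch

def mask_features(z, mask_rate=0.4):
    """Randomly remove a fraction of latent features before one FR-Attention
    sub-module; each sub-module draws its own mask, as described above."""
    n_f = z.shape[-1]                           # z: (batch, Nt, Nf), output of TR-Attention
    keep = torch.randperm(n_f)[: n_f - int(mask_rate * n_f)]
    return z[..., keep].transpose(1, 2), keep   # (batch, Nl, Nt), ready for FR-Attention
```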
2.4 DOMAIN CLASSIFIER
Multiple domain classifiers are used to classify patients into subpopulations based on the latent representation from FR-Attention sub-module. The input of one domain classifier is Z′i ∈ RNt×Nl , where i denotes the index of the sub-module or domain. Z′i is flattened by taking the max along the time-dimension, then followed by a linear layer with a sigmoid function for binary classification. We use binary cross-entropy for the domain classification loss, denoted as Ldi . The domain classifier module assists to learn latent representation for each domain and the domain classification loss is used for generating representation-wise attention in the next module.
2.5 DOMAIN-FOCUSED REPRESENTATION AND REPRESENTATION-WISE ATTENTION
While a domain-specific representation module is focused on representing its target domain, the resulting representation from each domain can be domain-specific or domain-invariant according to the domain loss. However, not all representations are equally crucial to outcome prediction. Thus, this module aims to generate the final representation for the outcome prediction considering both domain-specific and domain-invariant representations and their corresponding domain loss. We call this module a domain-focused representation module since both domain-specific and domain-invariant representations are candidates.
The input to this module is a transformed version of the latent representation Z′i generated by each FR-Attention module. Every Z′i is transformed back to its original dimension by adding the masked (removed) features back with values of 0, so that all latent representations are aligned in feature space. We denote this particular form of latent representation as Zoi. Zoi is flattened by taking the max along the time dimension, and all flattened vectors are concatenated together, denoted as E ∈ RNd×Nf. The final representation C ∈ RNf×1 is the weighted sum of all the candidate representations, where the weights a ∈ RNd×1 are the representation-wise attention (RW-Attention), computed from E and the domain prediction loss Ld ∈ RNd×1 as follows:
a = \mathrm{softmax}\big(\tanh(\mathrm{Concat}(E, L_d)\, U_A)\, W_A\big) \quad (5)
C_j = \sum_{i=1}^{N_d} a_i E_{i,j} \quad (6)
where UA ∈ R(Nf+1)×da and WA ∈ Rda×1 are the projection matrices, i represents the domain index, j represents the feature index.
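A per-sample sketch of Equations (5)-(6) is given below, assuming PyTorch; the class name, the hidden size da, and the omission of the batch dimension are our own simplifications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RWAttention(nn.Module):
    """Representation-wise attention: weights the Nd candidate representations
    using the flattened representations E and the per-domain losses Ld."""
    def __init__(self, n_features, d_a=32):
        super().__init__()
        self.u_a = nn.Linear(n_features + 1, d_a, bias=False)
        self.w_a = nn.Linear(d_a, 1, bias=False)

    def forward(self, e, l_d):                  # e: (Nd, Nf), l_d: (Nd,)
        scores = self.w_a(torch.tanh(self.u_a(torch.cat([e, l_d.unsqueeze(-1)], dim=-1))))
        a = F.softmax(scores.squeeze(-1), dim=0)    # (Nd,) representation-wise attention
        c = (a.unsqueeze(-1) * e).sum(dim=0)        # (Nf,) final representation C
        return c, a
```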
2.6 PATIENT OUTCOME PREDICTION
The final EHR representation C is concatenated with all static features, such as demographics and comorbidities, followed by a linear layer with a sigmoid function for the binary outcome prediction. Let the patient outcome label be y and the predicted label be ŷ; we use the binary cross-entropy as part of the final loss, denoted as Lp. We also construct the supervised contrastive loss (Khosla et al., 2020) as another part of the final loss to further mitigate model bias. The contrastive loss is denoted as Lc. The final prediction loss L is:
L = L_p + L_c \quad (7)
L_p = \sum_{i=1}^{N_s} -\big(y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\big) \quad (8)
L_c = \sum_{j=1}^{N_s} \frac{-1}{N_p} \sum_{p=1}^{N_p} \log \frac{\exp(h_j \cdot h_p / \tau)}{\sum_{a=1}^{N_a} \exp(h_j \cdot h_a / \tau)} \quad (9)
where Ns, Np, and Na represent the number of all samples, the number of samples having the same label as the anchor sample j, and the number of samples having the opposite label to the anchor sample j, respectively. h represents the concatenation of the learned representation C and the static features, and τ is a scale (temperature) parameter.
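The combined loss of Equations (7)-(9) can be sketched as follows. This is an illustrative PyTorch rendering under our own assumptions (e.g., representation normalization and a particular temperature), not the authors' implementation; it follows the paper's convention that the contrastive denominator runs over opposite-label samples.

```python
import torch
import torch.nn.functional as F

def prediction_loss(y_hat, y, h, tau=0.1):
    """Equations (7)-(9): summed binary cross-entropy plus a supervised
    contrastive term over the concatenated representations h (one row per sample)."""
    l_p = F.binary_cross_entropy(y_hat, y.float(), reduction="sum")

    h = F.normalize(h, dim=1)                     # our assumption; not stated in the paper
    sim = torch.exp(h @ h.t() / tau)              # pairwise similarity terms
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float() - torch.eye(len(y), device=h.device)
    diff = (y.unsqueeze(0) != y.unsqueeze(1)).float()
    denom = (sim * diff).sum(dim=1, keepdim=True) + 1e-8   # sum over opposite-label samples
    log_ratio = torch.log(sim / denom + 1e-8)
    n_p = same.sum(dim=1).clamp(min=1.0)
    l_c = (-(log_ratio * same).sum(dim=1) / n_p).sum()
    return l_p + l_c
```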
3 EXPERIMENTS SETTINGS
In the experiment, we aim to continuously predict AKI-D patients’ mortality risk in their dialysis/renal replacement therapy (RRT) duration. More specifically, given a period of EHR in dialysis duration before time T , we will continuously predict the mortality risk between T and T + 72h.
3.1 EXPERIMENT DATA
The study population consists of 570 AKI-D adult patients admitted to ICU at the University Hospital from January 2009 to October 2019. Among them, 237 (41.6 %) died before discharge, and
333 (58.4%) survived. Patients are excluded if they were diagnosed with end-stage kidney disease (ESKD) before or at the time of hospital admission, were recipients of a kidney transplant, or had an RRT duration of less than 72 or greater than 2,000 hours.
Data features include 12 temporal features (systolic blood pressure, diastolic blood pressure, serum creatinine, bicarbonate, hematocrit, potassium, bilirubin, sodium, temperature, white blood cell (WBC) count, heart rate, and respiratory rate) and 11 static features including demographics and comorbidities (age, race, gender, admission weight, body mass index (BMI), Charlson comorbidity score, diabetes, hypertension, cardiovascular disease, chronic kidney disease, and sepsis). All outliers (above the 97.5th or below the 2.5th percentile) are excluded, and missing values are imputed with the last observation carried forward (LOCF) method.
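The cleaning described above can be illustrated with a short pandas sketch; the column name `patient_id`, the helper name, and the percentile handling are our own assumptions rather than the authors' exact pipeline.

```python
import numpy as np
import pandas as pd

def preprocess(df, features, lower=0.025, upper=0.975):
    """Illustrative cleaning: drop values outside the 2.5th-97.5th percentiles,
    then impute missing values with LOCF within each patient's time series."""
    df = df.copy()
    for col in features:
        lo, hi = df[col].quantile(lower), df[col].quantile(upper)
        df.loc[(df[col] < lo) | (df[col] > hi), col] = np.nan
    # assumes rows are time-ordered within each patient
    df[features] = df.groupby("patient_id")[features].ffill()
    return df
```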
To continuously predict mortality risks, we generate 30 samples from each patient’s EHR data with random start and end times, as long as the duration is greater than 10 time steps. The class label of a sample is whether the patient died (positive) or survived (negative) in the next 72 hours. From 570 AKI-D patients, 13,333 EHR samples are extracted, including 2,975 positive and 10,358 negative samples. As shown in Table 1, all the EHR samples are split into train (75%), validation (5%), and test (20%) sets patient-wise, which ensures that samples from the same patient only appear in one of the three sets. Eighteen subpopulations are considered in this study based on nine domains according to patient demographics (i.e., age, gender, race) and comorbidities (i.e., Charlson score, diabetes, hypertension, cardiovascular disease, chronic kidney disease, and sepsis).
3.2 BASELINE METHOD AND FAIRNESS PERFORMANCE METRICS
We compared MTATE with two widely used and accessible sequence DL methods, LSTM and Transformer, one well-known EHR-specific representation method, RETAIN, and one pioneer domain-adaptation method, DANN*. For Transformer, the encoder part of the original Transformer is used. For DANN*, we only use the gradient reversal layer from the original DANN to obtain domain-invariant representations, with all other structures the same as MTATE. For all models, the input data are the EHR temporal features, and static features are concatenated with the latent representation before the prediction layer, as described in Section 2.6. We evaluate the performance of all models using traditional performance metrics: Area under the ROC Curve (ROCAUC), Accuracy (ACC), and Area under the Precision-Recall Curve (PRAUC), as well as three fairness metrics: Demographic Parity Difference (DPD), Equality of Opportunity Difference (EOD), and Equalized Odds Difference (EQOD) (Feldman et al., 2015; Hardt et al., 2016; Afrose et al., 2022) (see fairness metric equations 15, 16, 17 in the Appendix). We compare all models on both imbalanced and balanced sets. The ratios of positive (died) to negative (survived) samples are 1:4 and 1:1, respectively.
4 RESULTS AND DISCUSSION
4.1 COMPARISON WITH BASELINE METHODS
The overall performance of rolling mortality prediction in the next 72 hours for all test data with an imbalanced positive to negative ratio are shown in Table 2. We also show the performance on the balanced data in Table A1. Regarding the imbalanced test data, Table 2 shows that MTATE outperforms all the compared baseline methods in almost all metrics. LSTM is the most competitive method since it has the same highest ROCAUC as MTATE and the highest PRAUC. Nevertheless, the fairness scores of LSTM are not as good as MTATE. RETAIN, and Transformer have a moderate
performance. The performance of all the compared algorithms on the balanced test data shows that MTATE has the best ROCAUC, DPD, EOD, and EQOD, and second-best accuracy (see Table A1).
We compare MTATE with all baseline methods within each subpopulation domain. Figure 3 shows that MTATE has the best (lowest) averaged EQOD score. We also compare MTATE with all baselines regarding the difference of PRAUC within each subpopulation domain (e.g., the difference of PRAUC between female and male). The difference in PRAUC for each domain and the averaged score across all domains is in Figure A2. It shows that MTATE has the lowest percentage difference in PRAUC between subpopulations in Age, Gender, and Hypertension domains.
4.2 ABLATION STUDY
We conduct an ablation study to test how each component of MTATE performs by removing them from MTATE. Table 3 shows that MTATE has the best performance for almost all metrics except for ROCAUC, which is the second best to MTATE without masking. However, the accuracy of MTATE without masking is 18% lower than MTATE. In addition, RW-ATT is the most effective component since the performance drops the most in the two ablations that are without RW-ATT.
A primary goal of MTATE is to learn fair representations that can be used by a wide spectrum of downstream predictive models. To this end, we test whether the representation learned from MTATE can be used by traditional machine learning methods. The last three lines in Table 3 shows that all three traditional methods, XGboost, SVM, and Random Forest, have achieved similar performance as MTATE and are better than some of the compared deep learning methods. This comparison confirms that MTATE can serve as a pre-trained EHR data representation generator, and the learned representations can be used by downstream prediction tasks implemented with different classifiers.
4.3 EFFECTIVENESS ASSESSMENT OF RW-ATTENTION
Since RW-Attention is the most effective component in MTATE, we study its relationships with the outcome prediction loss Lp and domain loss Ld, as shown in Figure 4. Each dot in the figure presents the averaged value of all samples from the same subpopulation. The figure shows three example domains in two facets. First, the correlation between the outcome prediction loss and the domain loss could be negatively correlated (Age), positive (CVD), or mixed (Race). In the negative correlation scenario, the higher the domain loss, the lower the outcome prediction loss, which suggests the RW-Attention is putting more weight on the representations with larger domain loss (i.e., domain-invariant representations). In the positively correlated scenario, outcome prediction loss decreases with the decrease of domain loss. It suggests the RW-Attention is putting more weight on the representations with smaller domain loss (domain-specific representations). Second, attention weights, as indicated by the color of the dots in the figure, demonstrate the relationship between RWAttention and the outcome prediction loss. The darker colored dots (greater attention) are almost always related to the lower outcome prediction loss, whether the outcome prediction loss and the domain loss are positively or negatively correlated. This indicates that the RW-Attention module can weigh both domain-specific or domain-invariant representations toward lower outcome prediction loss. Similar patterns in all other domains are in Figure A4.
5 CONCLUSION
In this work, we presented MTATE, an attention-based encoder for EHR data. MTATE uses three different attention mechanisms (time-relevance, feature-relevance, and representation-wise) to learn unbiased data representations. Experiments on real-world healthcare data demonstrated that MTATE outperforms the compared baseline methods in fairness on continuous mortality risk prediction for critically ill AKI-D patients.
A APPENDIX
A.1 FR-ATTENTION
The FR-Attention sub-module for each head is computed as:
Q', K', V' = MU^Q, MU^K, MU^V \quad (10)
A^{FR} = \mathrm{softmax}\left(\frac{Q'K'^T}{\sqrt{d'_k}}\right) \quad (11)
M'^h = A^{FR} V' \quad (12)
M' = \mathrm{Concat}(M'^h_1, \ldots, M'^h_i, \ldots, M'^h_{N'_h})\, U^O \quad (13)
Z' = \max\big(0, (M')^T W_1 + b_1\big) W_2 + b_2 \quad (14)
Similar to the TR-Attention module, we assume all projection matrices UQ, UK, UV have the same dimension d′k. Thus, UQ, UK, UV ∈ RNt×d′k and Q′, K′, V′ ∈ RNl×d′k. The feature-relevance attention is AFR ∈ RNl×Nl, and the output of each head is M′h ∈ RNl×d′k. As in the TR-Attention module, the outputs from all heads are concatenated to form M′ ∈ RNl×Nt, and a linear transformation is applied with the projection matrix UO ∈ R(N′h d′k)×Nt, where N′h represents the number of heads.
Figure A1: A. The structure of FR-Attention module in MTATE. B. The multi-head attention module from the original Transformer used in MTATE.
A.2 FAIRNESS METRICS
Demographic parity suggests that a predictor is unbiased if the prediction is independent of the protected attribute (e.g., Age, Gender, etc.). For simplicity, we denote the protected attribute as A ∈ {a, b}, where A takes only two groups a and b (e.g., Young vs. Old for Age). The demographic parity difference (DPD) is then the difference between the two groups a and b, computed as follows:
DPD = P (ŷ = 1|A = a)− P (ŷ = 1|A = b) (15)
Equality of opportunity suggests that a predictor is unbiased if the true-positive rates of the two groups are equal. Similarly, the equality of opportunity difference (EOD) is the difference between the two groups a and b. The formula for EOD is shown below:
EOD = P (ŷ = 1|y = 1, A = a)− P (ŷ = 1|y = 1, A = b) (16)
Equalized odds suggests that a predictor is unbiased if both the true-positive rate (TPR) and the false-positive rate (FPR) of the two groups are equal. We compute the equalized odds difference (EQOD) as the average of the differences in TPR and FPR. The formula for EQOD is shown below:
EQOD = (TPRD + FPRD)/2 (17)
TPRD = P(ŷ = 1 | y = 1, A = a) − P(ŷ = 1 | y = 1, A = b) (18)
FPRD = P(ŷ = 1 | y = 0, A = a) − P(ŷ = 1 | y = 0, A = b) (19)
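For reference, the three metrics of Equations (15)-(19) can be computed from predictions as in the short NumPy sketch below; the function name is our own, and the signed (rather than absolute) differences follow the equations as written.

```python
import numpy as np

def fairness_metrics(y_true, y_pred, group):
    """DPD, EOD, and EQOD for a binary protected attribute with two groups a and b."""
    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        dp  = yp.mean()                                    # P(y_hat = 1)
        tpr = yp[yt == 1].mean() if (yt == 1).any() else np.nan
        fpr = yp[yt == 0].mean() if (yt == 0).any() else np.nan
        return dp, tpr, fpr

    a, b = np.unique(group)[:2]
    dp_a, tpr_a, fpr_a = rates(group == a)
    dp_b, tpr_b, fpr_b = rates(group == b)
    dpd  = dp_a - dp_b
    eod  = tpr_a - tpr_b
    eqod = ((tpr_a - tpr_b) + (fpr_a - fpr_b)) / 2
    return dpd, eod, eqod
```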
A.3 PERFORMANCE COMPARISONS
A.3.1 OVERALL PERFORMANCE
The following table shows the performance comparison between MTATE and baseline methods for the balanced test data.
Table A1: Balanced performance of MTATE and compared algorithms for the mortality prediction in the next 72 hours (pos:neg = 1:1).
Method        ROCAUC       ACC          PRAUC        DPD          EOD          EQOD
Transformer   0.70(0.09)   0.67(0.09)   0.69(0.11)   0.17(0.13)   0.15(0.12)   0.10(0.06)
LSTM          0.71(0.11)   0.67(0.11)   0.76(0.12)   0.19(0.12)   0.20(0.12)   0.12(0.05)
RETAIN        0.68(0.12)   0.67(0.11)   0.72(0.13)   0.22(0.13)   0.21(0.12)   0.12(0.06)
DANN*         0.58(0.12)   0.54(0.14)   0.59(0.12)   0.21(0.19)   0.16(0.13)   0.12(0.08)
MTATE (Ours)  0.73(0.09)   0.65(0.11)   0.75(0.11)   0.15(0.10)   0.14(0.11)   0.08(0.05)
A.3.2 FAIRNESS PERFORMANCE WITHIN EACH DOMAIN
Figure A2: The comparison between MTATE with baseline methods for the percentage difference score in PRAUC for each domain. Y-axis represents the percentage difference. X-axis represents the subpopulation domain, each domain consists of two subpopulations (e.g., Young (< 65 y/o) vs. Old in Age domain, Sepsis vs. Non-Sepsis in Sepsis Domain ). CCI stands for charlson comorbidity score, DB stands for diabetes, HT stands for hypertension, CVD stands for cardiovascular disease, CKD stands for chronic kidney disease.
A.4 STUDY OF MASKING RATE
We analyze the effect of different masking rates on performance. Figure A3 shows ROCAUC, PRAUC, and accuracy with respect to the masking rate from 0 to 0.9. ROCAUC and PRAUC follow a similar trend: both start with a relatively high score, gradually decrease with some fluctuation, and reach their lowest scores at masking rates of 0.8 and 0.9. In contrast, accuracy shows the opposite trend: it starts with the lowest score and increases almost monotonically. The masking rate is highly dependent on the input data; thus, the choice of masking rate is not universal. For our experimental data, one of the best masking rates is 0.4, where the model has the highest accuracy and PRAUC and the third-best ROCAUC.
Figure A3: Performance Score of ROCAUC, PRAUC and Accuracy with different masking rate
A.5 RELATIONSHIP BETWEEN OUTCOME LOSS, DOMAIN LOSS AND REPRESENTATION-WISE ATTENTION
Figure A4: Relationship between outcome loss, domain loss and representation-wise attention in all domains. Y-axis represents the outcome loss, x-axis represents the domain loss. The colored dots represent the representation-wise attention, and darker color represents higher attention. | 1. What is the focus and contribution of the paper regarding learning representations?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to address sensitive groups?
3. Do you have any concerns regarding the terminology used in the paper, such as the use of "domain" instead of "sensitive or protected groups"?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any missing baselines or comparisons with other works that should be included in the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose a method to learn representation that is invariant to sensitive groups (or domains as claimed by the authors). This is achieved through a network structure that has a transformer to generate time-wise attention, domain-specific feature-wise attention (to extract features that are specific to each domain/subgroup), and finally all these representations are concatenated and passed to a network to generate a universal representation that can be used for downstream tasks.
Strengths And Weaknesses
Strength:
The problem is very important especially in health care domain.
Weakness:
The paper is not easy to follow
I think using the word "domain" might be non-intuitive in this context. Such groups are usually referred to as sensitive or protected groups.
The method seems to be ad hoc, without a principled way to motivate the structure of the network.
It is claimed in the paper that "To the best of our knowledge, MTATE is the first model that automatically trains and determines optimal and fair data representations." There are many papers about learning fair representation. The one that needs to be highlighted here is "Creager, Elliot, et al. "Flexibly fair representation learning by disentanglement." International conference on machine learning. PMLR, 2019." The method in that paper learns representation that is invariant to the sensitive groups, which is the main objective of the current paper.
Missing important baselines to compare to.
Clarity, Quality, Novelty And Reproducibility
Not easy to follow. Method novelty is minor and very ad hoc.
ICLR | Title
Unbiased Representation of Electronic Health Records for Patient Outcome Prediction
Abstract
Fairness is one of the newly emerging focuses for building trustworthy artificial intelligence (AI) models. One of the reasons resulting in an unfair model is the algorithm bias towards different groups of samples. A biased model may benefit certain groups but disfavor others. As a result, leaving the fairness problem unresolved might have a significant negative impact, especially in the context of healthcare applications. Integrating both domain-specific and domain-invariant representations, we propose a masked triple attention transformer encoder (MTATE) to learn unbiased and fair data representations of different subpopulations. Specifically, MTATE includes multiple domain classifiers and uses three attention mechanisms to effectively learn the representations of diverse subpopulations. In the experiment on real-world healthcare data, MTATE performed the best among the compared models regarding overall performance and fairness.
1 INTRODUCTION
Electronic Health Record (EHR) based clinical risk prediction using temporal machine learning (ML) and deep learning (DL) models benefits clinicians for providing precise and timely interventions to high-risk patients and better-allocating hospital resources (Xiao et al., 2018; Shamout et al., 2020). Nevertheless, a long-standing issue that hinders ML and DL model deployment is the concern about model fairness (Gianfrancesco et al., 2018; Ahmad et al., 2020). Fairness in AI/DL refers to a model’s ability to make a prediction or decision without any bias against any individual or group (Mehrabi et al., 2021). The behaviors of a biased model often result in two facets: it performs significantly better in certain populations than the others (Parikh et al., 2019), and it makes inequities decisions towards different groups (Panch et al., 2019). Clinical decision-making based upon biased predictions may cause delayed treatment plans for patients in minority groups or misspend healthcare resources where treatment is unnecessary (Gerke et al., 2020).
The data distribution shift problem across different domains is one of the major reasons a model could be biased (Adragna et al., 2020). To address the fairness issue, domain adaptation methods have been developed. The main idea is to learn invariant hidden features across different domains, such that a model would perform similarly no matter to which domain the test cases belong. Pioneer domain adaptation models, including DANN (Ganin et al., 2016), VARADA (Purushotham et al., 2017), and VRNN (Chung et al., 2015), learn invariant hidden features by adding a domain classifier and using a gradient reversal layer to maximize the domain classifier’s loss. In return, the learned hidden features are indifferent across domains. Recent work MS-ADS (Khoshnevisan & Chi, 2021) has shown robust performance across minority racial groups by maximizing the distance between the globally-shared presentations with individual local representations of every domain, which effectively consolidates the invariant globally-shared representations across domains. However, it is difficult to align large domain shifts and model complex domain shifts across multiple overlapped domains.
Alternatively, the data distribution shift problem could be addressed using domain-specific bias correction approaches. A recent study showed that features strongly associated with the outcome of interest could be subpopulation-specific (Chouldechova & Roth, 2018). It indicates that lumping together all features from patients with different backgrounds might bury unique domain-specific
characteristics. Afrose et al. proposed to use double prioritized bias correction to train multiple candidate models for different demographic groups (Afrose et al., 2022). Similarly, AC-TPC Lee & Van Der Schaar (2020) and CAMELOT Aguiar et al. (2022) used clustering algorithms to generate representations of patients with similar backgrounds and use cluster-specific representations for the outcome prediction.
In summary, both domain adaptation and domain-specific bias correction approaches address the same fairness issue with different assumptions about the relationships between latent representation and the prediction outcome. The former believes that performance variation across domains would be benefited from invariant feature representation, while the latter affirms domain-specific representations. It remains unclear whether domain-invariant and domain-specific data representation should be used for a prediction task.
To better address the fairness issue, we propose an adaptive multi-task learning algorithm, called MTATE (i.e. Masked Triple Attention Transformer Encoder), to automatically learn and select the optimal and fair data representations instead of explicitly choosing domain adaptation or domainspecific bias correction. Under this setting, both invariant and domain-specific representations are special cases where one of the approaches dominates the data representation. The purpose of MTATE is to generate multiple masked representations of the same data that are attended by both time-wise attention and multiple feature-wise attentions in parallel, where each masked representation corresponds to a specific domain classification task. For example, one of the domain classifiers breaks the patient cohort into subpopulations defined by race, and another classifier is focused on gender. The learned EHR representations could be domain-specific, domain-invariant, or the mix of the two reflected by the domain classification loss values. A low loss value indicates the representation is domain-specific, and a high loss value indicates domain-invariant. The model will compute the representation-wise attention for each individual testing case, leading to personalized data representation for downstream predictive tasks. The overall framework of MTATE is shown in Figure 1. The primary goal of MTATE is to learn an unbiased representation to make fair and precise patient outcome predictions in a real-world healthcare setting.
To demonstrate the effectiveness of MTATE, we focus on rolling mortality predictions for patients with Acute Kidney Injury requiring Dialysis (AKI-D), a severe complication for critically ill patients, with a high in-hospital mortality rate (Lee & Son, 2020). The clinical risk classification for AKI-D patients is challenging due to complex subphenotypes and treatment exposures (Neyra & Nadkarni, 2021; Vaara et al., 2022). There is an urgent need to develop actionable approaches to account for patients’ backgrounds and subpopulations for personalized medicine and improve patients-centered outcomes (Chang et al., 2022).
The contributions of this work are three-fold: 1) To the best of our knowledge, MTATE is the first model that integrates both domain-specific and domain-invariant features in one model. It trains the fair representation and predicts the downstream task altogether, which is critical for reliable clinical outcome predictions for patients with different demographics and clinical backgrounds; 2) MTATE employs time-wise, feature-wise, and representation-wise attention mechanisms to compose data representations for downstream prediction tasks dynamically; and 3) MTATE effectively mitigated the bias towards different subpopulations in the rolling mortality prediction tasks on AKI-D patients and achieved the best performance compared to baselines.
2 METHOD
MTATE consists of five components, and the detailed architecture is shown in Figure 2. The first component is a temporal-relevance attention (TR-Attention) module to generate time-wise attention that associates each time step with the other time steps considering all features. The output is a time-attended representation. The second component is a domain-specific feature-relevance attention module to generate feature-wise attentions that associate each feature with the other features considering all the time steps. The outputs are multiple feature-attended representations, one for each domain. The third component is a module consists multiple domain classifiers, and each classifier classifies each feature-attended representation into a predefined domain. The fourth module is a unified data representation module, which uses the representation-wise attention to aggregate feature-attended representations (domain-invariant or domain-specific) to a final representation. The last module is an outcome prediction module where the final representation is used for patient outcome prediction.
2.1 NOTATIONS
A patient's EHR data can be represented as $X = \{x_1, x_2, \ldots, x_t, \ldots, x_{N_t}\}$, $X \in \mathbb{R}^{N_t \times N_f}$, where $N_f$ is the number of features and $N_t$ is the number of time steps. $x_t \in \mathbb{R}^{1 \times N_f}$ represents a vector of clinical parameters (e.g., heart rate, blood pressure, etc.) at time step $t$. We consider a binary outcome and domain classification problem in this study. The patient domain class labels are denoted as $d_y \in \mathbb{R}^{N_d}$, where $N_d$ represents the total number of domains, $d_{y_i} \in \{0, 1\}$ represents the label for the $i$-th domain, and 1 and 0 represent whether a given patient falls in the target domain or not, respectively. The patient outcome label is denoted as $y \in \{0, 1\}$, where 1 and 0 represent death and survival before hospital discharge.
2.2 TEMPORAL RELEVANCE ATTENTION
The temporal-relevance attention (TR-Attention) module aims to learn the relationships between each time step to other time steps considering all features at each time step. We first use the position
encoding from the original Transformer to encode the relative position information to the input X and use the multi-head attention mechanism to learn the temporal-relevance attention.
Specifically, the query, key, and value vectors $(Q, K, V)$ are linear projections of all features at every time step of $X$. Thus, the attention weights computed from the query and key represent how strongly all features at one time step are associated with the features at the other time steps. The output of each head $Z^h$ is the product of the value vectors and the time-wise attention $A^{TR}$. The final output of the TR-Attention module, $Z \in \mathbb{R}^{N_t \times N_f}$, is a linear transformation of the concatenation of the outputs of all heads. Lastly, a residual connection and layer normalization are applied to $Z$, denoted as $Z = \mathrm{LayerNorm}(Z + X)$. The temporal-relevance attention of each head is computed as:
$Q, K, V = XW^Q, XW^K, XW^V$   (1)
$A^{TR} = \mathrm{softmax}\left(\frac{QK^\top}{\sqrt{d_k}}\right)$   (2)
$Z^h = A^{TR}V$   (3)
$Z = \mathrm{Concat}(Z^h_1, \ldots, Z^h_i, \ldots, Z^h_{N_h})\,W^O$   (4)
For simplicity, we assume all projection matrices $W^Q, W^K, W^V$ have the same dimension $d_k$. Thus, $W^Q, W^K, W^V \in \mathbb{R}^{N_f \times d_k}$, $Q, K, V \in \mathbb{R}^{N_t \times d_k}$, the temporal-relevance attention is $A^{TR} \in \mathbb{R}^{N_t \times N_t}$, and the output of each head is $Z^h \in \mathbb{R}^{N_t \times d_k}$. The projection matrix for the final output is $W^O \in \mathbb{R}^{(N_h d_k) \times N_f}$, where $N_h$ represents the number of heads.
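The following is a minimal PyTorch-style sketch of the TR-Attention computation in equations (1)-(4). The module name, tensor shapes, and hyperparameter values are illustrative assumptions rather than the authors' released implementation, and the positional encoding mentioned above is omitted for brevity.

```python
import torch
import torch.nn as nn

class TRAttention(nn.Module):
    """Sketch of temporal-relevance attention (Eqs. 1-4): attend over time steps."""
    def __init__(self, n_features, d_k=32, n_heads=4):
        super().__init__()
        self.d_k, self.n_heads = d_k, n_heads
        # One projection per head, implemented as a single larger linear map.
        self.W_q = nn.Linear(n_features, n_heads * d_k, bias=False)
        self.W_k = nn.Linear(n_features, n_heads * d_k, bias=False)
        self.W_v = nn.Linear(n_features, n_heads * d_k, bias=False)
        self.W_o = nn.Linear(n_heads * d_k, n_features, bias=False)
        self.norm = nn.LayerNorm(n_features)

    def forward(self, x):                        # x: (batch, N_t, N_f)
        b, n_t, _ = x.shape
        def split(h):                             # -> (batch, heads, N_t, d_k)
            return h.view(b, n_t, self.n_heads, self.d_k).transpose(1, 2)
        q, k, v = split(self.W_q(x)), split(self.W_k(x)), split(self.W_v(x))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_k ** 0.5, dim=-1)  # Eq. (2)
        z = (attn @ v).transpose(1, 2).reshape(b, n_t, -1)                       # Eq. (3)
        return self.norm(self.W_o(z) + x)         # Eq. (4) plus residual and LayerNorm

x = torch.randn(8, 20, 12)                        # 8 patients, 20 time steps, 12 features
print(TRAttention(n_features=12)(x).shape)        # torch.Size([8, 20, 12])
```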
2.3 DOMAIN-SPECIFIC FEATURE RELEVANCE ATTENTION
The domain-specific feature relevance attention module aims to learn each domain's diverse and unique latent representation. The module includes a set of parallel sub-modules called feature-relevance attention (FR-Attention), where each FR-Attention module focuses on the representation of one specific domain. Since features are not equally important across domains, we randomly mask out a number of latent features along all time steps, differently for each FR-Attention module. In return, the masking procedure forces each sub-module to focus on different features for each domain and to generate a unique domain representation. Feature-wise attention is then computed using multi-head attention similar to the TR-Attention module. The FR-Attention module learns the relationships between each latent feature and the others considering all time steps.
The input of each FR-Attention sub-module is $Z^\top \in \mathbb{R}^{N_f \times N_t}$, the transposed output of the TR-Attention module $Z$. $Z^\top$ is passed through a masking layer, in which $M_R \times N_f$ latent features are randomly selected and removed from $Z^\top$, where $M_R$ represents the masking rate. We denote the masked input of each sub-module as $M \in \mathbb{R}^{N_l \times N_t}$, where $N_l$ is the number of features after masking. $M$ is passed through the multi-head attention block followed by a residual connection and layer normalization. Finally, $M$ is transposed back to the original form and passed through a point-wise feed-forward network (FFN), again followed by a residual connection and layer normalization, to get the final output, denoted as $Z' \in \mathbb{R}^{N_t \times N_l}$. Please see the detailed architecture and formulas for FR-Attention in Appendix Section A.1.
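As a rough illustration of the per-domain random masking described above (this is not the authors' code; the fixed per-domain random seed and the NumPy implementation are assumptions made for clarity):

```python
import numpy as np

def mask_features(z_t, masking_rate, domain_seed):
    """Randomly drop a fraction of latent features (rows of the transposed
    representation) so each domain's FR-Attention sees a different subset."""
    n_f, n_t = z_t.shape
    n_drop = int(round(masking_rate * n_f))
    rng = np.random.default_rng(domain_seed)          # different mask per domain
    keep = np.sort(rng.choice(n_f, size=n_f - n_drop, replace=False))
    return z_t[keep, :], keep                          # M in R^{N_l x N_t}

z_t = np.random.randn(12, 20)                          # N_f = 12 features, N_t = 20 steps
for d in range(3):                                     # e.g., three domains
    m, kept = mask_features(z_t, masking_rate=0.4, domain_seed=d)
    print(d, m.shape, kept)
```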
2.4 DOMAIN CLASSIFIER
Multiple domain classifiers are used to classify patients into subpopulations based on the latent representations from the FR-Attention sub-modules. The input of one domain classifier is $Z'_i \in \mathbb{R}^{N_t \times N_l}$, where $i$ denotes the index of the sub-module or domain. $Z'_i$ is flattened by taking the max along the time dimension, then passed through a linear layer with a sigmoid function for binary classification. We use binary cross-entropy for the domain classification loss, denoted as $L_{d_i}$. The domain classifier module assists in learning a latent representation for each domain, and the domain classification loss is used to generate the representation-wise attention in the next module.
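A minimal sketch of one such domain classifier head (max-pool over time, then a linear layer with sigmoid and binary cross-entropy). The class name and sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DomainClassifier(nn.Module):
    """One head per domain: Z'_i (N_t x N_l) -> max over time -> linear -> sigmoid."""
    def __init__(self, n_latent):
        super().__init__()
        self.fc = nn.Linear(n_latent, 1)
        self.loss_fn = nn.BCELoss()

    def forward(self, z_i, domain_label):
        pooled = z_i.max(dim=1).values                  # (batch, N_l): max along time
        prob = torch.sigmoid(self.fc(pooled)).squeeze(-1)
        return prob, self.loss_fn(prob, domain_label)   # L_{d_i}

z_i = torch.randn(8, 20, 7)                             # batch of 8, N_t = 20, N_l = 7
labels = torch.randint(0, 2, (8,)).float()              # e.g., in-domain = 1, otherwise 0
prob, loss = DomainClassifier(n_latent=7)(z_i, labels)
print(prob.shape, float(loss))
```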
2.5 DOMAIN-FOCUSED REPRESENTATION AND REPRESENTATION-WISE ATTENTION
While each domain-specific representation module is focused on representing its target domain, the resulting representation can be domain-specific or domain-invariant according to the domain loss. However, not all representations are equally crucial to outcome prediction. Thus, this module aims to generate the final representation for outcome prediction considering both domain-specific and domain-invariant representations and their corresponding domain losses. We call this module a domain-focused representation module since both domain-specific and domain-invariant representations are candidates.
The input to this module is a transformed version of the latent representation generated from each FR-Attention module, $Z'_i$. Every $Z'_i$ is transformed back to its original dimension by adding the masked (removed) features back with values of 0, so that all latent representations can be aligned in feature space. We denote this particular form of latent representation as $Z^o_i$. $Z^o_i$ is flattened by taking the max along the time dimension, and all flattened vectors are concatenated together, denoted as $E \in \mathbb{R}^{N_d \times N_f}$. The final representation $C \in \mathbb{R}^{N_f \times 1}$ is the weighted sum of all candidate representations, where the weights $a \in \mathbb{R}^{N_d \times 1}$ are the representation-wise attention (RW-Attention), computed from $E$ and the domain prediction losses $L_d \in \mathbb{R}^{N_d \times 1}$ as follows:
$a = \mathrm{softmax}\big(\tanh(\mathrm{Concat}(E, L_d)\,U^A)\,W^A\big)$   (5)
$C_j = \sum_{i=1}^{N_d} a_i E_{i,j}$   (6)
where $U^A \in \mathbb{R}^{(N_f+1) \times d_a}$ and $W^A \in \mathbb{R}^{d_a \times 1}$ are projection matrices, $i$ is the domain index, and $j$ is the feature index.
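A rough sketch of the representation-wise attention in equations (5)-(6), written in NumPy for brevity. The projection sizes and the way the domain losses are appended as an extra column are assumptions consistent with the text above, not the authors' implementation:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def rw_attention(E, domain_losses, U_A, W_A):
    """E: (N_d, N_f) flattened candidate representations;
    domain_losses: (N_d,) per-domain classification losses L_d."""
    concat = np.concatenate([E, domain_losses[:, None]], axis=1)   # (N_d, N_f + 1)
    a = softmax(np.tanh(concat @ U_A) @ W_A)                       # Eq. (5), (N_d, 1)
    C = (a * E).sum(axis=0)                                        # Eq. (6), (N_f,)
    return a.ravel(), C

N_d, N_f, d_a = 9, 12, 16
rng = np.random.default_rng(0)
a, C = rw_attention(rng.standard_normal((N_d, N_f)),
                    rng.random(N_d),
                    rng.standard_normal((N_f + 1, d_a)),
                    rng.standard_normal((d_a, 1)))
print(a.round(3), C.shape)
```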
2.6 PATIENT OUTCOME PREDICTION
The final EHR representation $C$ is concatenated with all static features, such as demographics and comorbidities, and followed by a linear layer with a sigmoid function for the binary outcome prediction. Let the patient outcome label be $y$ and the predicted label be $\hat{y}$; we use binary cross-entropy as one part of the final loss, denoted as $L_p$. We also use a supervised contrastive loss (Khosla et al., 2020) to further mitigate model bias as another part of the final loss, denoted as $L_c$. The final prediction loss $L$ is:
$L = L_p + L_c$   (7)
$L_p = \sum_{i=1}^{N_s} -\big(y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\big)$   (8)
$L_c = \sum_{j=1}^{N_s} \frac{-1}{N_p} \sum_{p=1}^{N_p} \log \frac{\exp(h_j \cdot h_p / \tau)}{\sum_{a=1}^{N_a} \exp(h_j \cdot h_a / \tau)}$   (9)
where $N_s$, $N_p$, and $N_a$ represent the number of all samples, the number of samples having the same label as the anchor sample $j$, and the number of samples having the opposite label to the anchor sample $j$, respectively. $h$ represents the concatenation of the learned representation $C$ and the static features, and $\tau$ is a scale parameter.
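The following is a small PyTorch sketch of the supervised contrastive term in equation (9) as described above. It is an illustrative re-implementation, not the authors' code; the choice of opposite-label samples for the denominator follows the paper's wording and may differ from other SupCon variants:

```python
import torch

def supcon_loss(h, labels, tau=0.1):
    """Supervised contrastive loss in the spirit of Eq. (9).
    h: (N_s, d) sample embeddings (learned representation + static features);
    labels: (N_s,) binary outcome labels."""
    h = torch.nn.functional.normalize(h, dim=1)
    sim = (h @ h.T) / tau                                   # pairwise similarities
    loss = 0.0
    for j in range(h.shape[0]):
        pos = (labels == labels[j]).nonzero().flatten()
        pos = pos[pos != j]                                  # positives, excluding the anchor
        neg = (labels != labels[j]).nonzero().flatten()      # opposite-label samples (N_a)
        if len(pos) == 0 or len(neg) == 0:
            continue
        denom = torch.exp(sim[j, neg]).sum()
        loss += -(torch.log(torch.exp(sim[j, pos]) / denom)).mean()
    return loss

h = torch.randn(16, 32)
y = torch.randint(0, 2, (16,))
print(float(supcon_loss(h, y)))
```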
3 EXPERIMENTS SETTINGS
In the experiment, we aim to continuously predict AKI-D patients’ mortality risk in their dialysis/renal replacement therapy (RRT) duration. More specifically, given a period of EHR in dialysis duration before time T , we will continuously predict the mortality risk between T and T + 72h.
3.1 EXPERIMENT DATA
The study population consists of 570 AKI-D adult patients admitted to ICU at the University Hospital from January 2009 to October 2019. Among them, 237 (41.6 %) died before discharge, and
333 (58.4%) survived. Patients are excluded if they were diagnosed with end-stage kidney disease (ESKD) before or at the time of hospital admission, are recipients of a kidney transplant, or received RRT for less than 72 or more than 2,000 hours.
Data features include 12 temporal features (systolic blood pressure, diastolic blood pressure, serum creatinine, bicarbonate, hematocrit, potassium, bilirubin, sodium, temperature, white blood cells (WBC) count, heart rate, and respiratory rate) and 11 static features including demographics and comorbidities (age, race, gender, admission weight, body mass index (BMI), Charlson comorbidity score, diabetes, hypertension, cardiovascular disease, Chronic Kidney Disease, and Sepsis). All outliers (> 97.5% or < 2.5%) were excluded and missing values are imputed with the last observation carried forward (LOCF) method.
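A brief pandas sketch of the percentile-based outlier removal and LOCF imputation described above. The column names and example values are stand-ins; the original preprocessing code is not shown in the paper:

```python
import pandas as pd
import numpy as np

def preprocess(df):
    """Drop values outside the 2.5-97.5 percentile band, then LOCF-impute."""
    df = df.copy()
    for col in df.columns:
        lo, hi = df[col].quantile([0.025, 0.975])
        df.loc[(df[col] < lo) | (df[col] > hi), col] = np.nan   # remove outliers
    return df.ffill()                                           # last observation carried forward

vitals = pd.DataFrame({"heart_rate": [80, 82, 300, 79, np.nan, 81],
                       "sodium": [140, 139, 20, np.nan, 141, 138]})
print(preprocess(vitals))
```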
To continuously predict mortality risks, we generate 30 samples from each patient's EHR data with random start and end times, as long as the duration is greater than 10 time steps. The class label of a sample is whether the patient died (positive) or survived (negative) in the next 72 hours. From 570 AKI-D patients, 13,333 EHR samples are extracted, including 2,975 positive and 10,358 negative samples. As shown in Table 1, all EHR samples are split into train (75%), validation (5%), and test (20%) sets patient-wise, which ensures that samples from the same patient appear in only one of the three sets. Eighteen subpopulations were considered in this study based on nine domains according to patient demographics (i.e., age, gender, race) and comorbidities (i.e., Charlson score, diabetes, hypertension, cardiovascular disease, chronic kidney disease, and sepsis).
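A sketch of the random-window sample generation described above. The exact sampling logic is an assumption; only the constraints stated in the text (30 windows per patient, minimum duration of 10 time steps) are taken from the paper:

```python
import numpy as np

def make_windows(n_time_steps, n_windows=30, min_len=10, seed=0):
    """Draw random (start, end) index pairs of length >= min_len for one patient."""
    rng = np.random.default_rng(seed)
    windows = []
    while len(windows) < n_windows:
        start = rng.integers(0, n_time_steps - min_len + 1)
        end = rng.integers(start + min_len, n_time_steps + 1)
        windows.append((int(start), int(end)))
    return windows

print(make_windows(n_time_steps=48)[:5])   # e.g., [(start, end), ...]
```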
3.2 BASELINE METHOD AND FAIRNESS PERFORMANCE METRICS
We compared MTATE with two widely used sequence DL methods, LSTM and Transformer, one well-known EHR-specific representation method, RETAIN, and one pioneering domain-adaptation method, DANN*. For Transformer, the encoder part of the original Transformer is used. For DANN*, we only use the gradient reversal layer from the original DANN to obtain a domain-invariant representation, with all other structures the same as MTATE. For all models, the input data are the EHR temporal features, and static features are concatenated with the latent representation before the prediction layer, as described in Section 2.6. We evaluate all models using traditional performance metrics, namely Area under the ROC Curve (ROCAUC), Accuracy (ACC), and Area under the Precision-Recall Curve (PRAUC), as well as three fairness metrics: Demographic Parity Difference (DPD), Equality of Opportunity Difference (EOD), and Equalized Odds Difference (EQOD) (Feldman et al., 2015; Hardt et al., 2016; Afrose et al., 2022) (see fairness metric equations 15, 16, 17 in the Appendix). We compare all models on both imbalanced and balanced test sets, where the ratios of positive (died) to negative (survived) samples are 1:4 and 1:1, respectively.
4 RESULTS AND DISCUSSION
4.1 COMPARISON WITH BASELINE METHODS
The overall performance of rolling mortality prediction in the next 72 hours for all test data with an imbalanced positive-to-negative ratio is shown in Table 2. We also show the performance on the balanced data in Table A1. On the imbalanced test data, Table 2 shows that MTATE outperforms all the compared baseline methods in almost all metrics. LSTM is the most competitive method since it matches MTATE's highest ROCAUC and has the highest PRAUC. Nevertheless, the fairness scores of LSTM are not as good as those of MTATE. RETAIN and Transformer show moderate
performance. The performance of all the compared algorithms on the balanced test data shows that MTATE has the best ROCAUC, DPD, EOD, and EQOD, and second-best accuracy (see Table A1).
We compare MTATE with all baseline methods within each subpopulation domain. Figure 3 shows that MTATE has the best (lowest) averaged EQOD score. We also compare MTATE with all baselines regarding the difference of PRAUC within each subpopulation domain (e.g., the difference of PRAUC between female and male). The difference in PRAUC for each domain and the averaged score across all domains is in Figure A2. It shows that MTATE has the lowest percentage difference in PRAUC between subpopulations in Age, Gender, and Hypertension domains.
4.2 ABLATION STUDY
We conduct an ablation study to test how each component of MTATE contributes by removing it from MTATE. Table 3 shows that MTATE has the best performance on almost all metrics except ROCAUC, where the full model is second only to MTATE without masking. However, the accuracy of MTATE without masking is 18% lower than that of MTATE. In addition, RW-ATT is the most effective component, since performance drops the most in the two ablations without RW-ATT.
A primary goal of MTATE is to learn fair representations that can be used by a wide spectrum of downstream predictive models. To this end, we test whether the representation learned by MTATE can be used by traditional machine learning methods. The last three lines in Table 3 show that all three traditional methods, XGBoost, SVM, and Random Forest, achieve performance similar to MTATE and better than some of the compared deep learning methods. This comparison confirms that MTATE can serve as a pre-trained EHR data representation generator, and the learned representations can be used for downstream prediction tasks implemented with different classifiers.
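For illustration, plugging the learned representations into off-the-shelf classifiers might look like the sketch below, assuming the representations and labels have already been extracted as NumPy arrays; this is not the authors' evaluation script:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_repr = rng.standard_normal((1000, 23))   # MTATE representation C plus static features
y = rng.integers(0, 2, 1000)               # mortality labels (placeholder)

for clf in [RandomForestClassifier(n_estimators=200), SVC(probability=True)]:
    clf.fit(X_repr[:800], y[:800])
    scores = clf.predict_proba(X_repr[800:])[:, 1]
    print(type(clf).__name__, round(roc_auc_score(y[800:], scores), 3))
```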
4.3 EFFECTIVENESS ASSESSMENT OF RW-ATTENTION
Since RW-Attention is the most effective component in MTATE, we study its relationships with the outcome prediction loss $L_p$ and domain loss $L_d$, as shown in Figure 4. Each dot in the figure represents the average value over all samples from the same subpopulation. The figure shows three example domains in two facets. First, the outcome prediction loss and the domain loss can be negatively correlated (Age), positively correlated (CVD), or show a mixed relationship (Race). In the negative-correlation scenario, the higher the domain loss, the lower the outcome prediction loss, which suggests RW-Attention is putting more weight on representations with larger domain loss (i.e., domain-invariant representations). In the positive-correlation scenario, the outcome prediction loss decreases as the domain loss decreases, suggesting RW-Attention is putting more weight on representations with smaller domain loss (domain-specific representations). Second, the attention weights, as indicated by the color of the dots in the figure, show the relationship between RW-Attention and the outcome prediction loss. The darker dots (greater attention) are almost always associated with lower outcome prediction loss, whether the outcome prediction loss and the domain loss are positively or negatively correlated. This indicates that the RW-Attention module can weight either domain-specific or domain-invariant representations toward lower outcome prediction loss. Similar patterns for all other domains are shown in Figure A4.
5 CONCLUSION
In this work, we presented MTATE, an attention-based encoder for EHR data. MTATE uses three different attention mechanisms (time-relevance, feature-relevance, and representation-wise) to learn unbiased data representations. Experiments on real-world healthcare data demonstrated that MTATE outperforms the compared baseline methods in fairness on continuous mortality risk prediction for critically ill AKI-D patients.
A APPENDIX
A.1 FR-ATTENTION
The FR-Attention sub-module for each head is computed as:
$Q', K', V' = MU^Q, MU^K, MU^V$   (10)
$A^{FR} = \mathrm{softmax}\left(\frac{Q'K'^\top}{\sqrt{d'_k}}\right)$   (11)
$M'^h = A^{FR}V'$   (12)
$M' = \mathrm{Concat}(M'^h_1, \ldots, M'^h_i, \ldots, M'^h_{N'_h})\,U^O$   (13)
$Z' = \max(0, (M')^\top W_1 + b_1)W_2 + b_2$   (14)
Similar to the TR-Attention module, we assume all projection matrices $U^Q, U^K, U^V$ have the same dimension $d'_k$. Thus, $U^Q, U^K, U^V \in \mathbb{R}^{N_t \times d'_k}$ and $Q', K', V' \in \mathbb{R}^{N_l \times d'_k}$. The feature-relevance attention is $A^{FR} \in \mathbb{R}^{N_l \times N_l}$, and the output of each head is $M'^h \in \mathbb{R}^{N_l \times d'_k}$. As in the TR-Attention module, the outputs of all heads are concatenated to form $M' \in \mathbb{R}^{N_l \times N_t}$, and a linear transformation is applied with the projection matrix $U^O \in \mathbb{R}^{(N'_h d'_k) \times N_t}$, where $N'_h$ represents the number of heads.
Figure A1: A. The structure of FR-Attention module in MTATE. B. The multi-head attention module from the original Transformer used in MTATE.
A.2 FAIRNESS METRICS
Demographic parity suggests that a predictor is unbiased if the prediction is independent of the protected attribute (e.g., age or gender). For simplicity, we denote the protected attribute as $A \in \{a, b\}$, i.e., $A$ only takes two groups $a$ and $b$ (e.g., Young vs. Old for Age). The Demographic Parity Difference (DPD) is then the difference between the two groups $a$ and $b$:
DPD = P (ŷ = 1|A = a)− P (ŷ = 1|A = b) (15)
Equality of opportunity suggests that a predictor is unbiased if the true-positive rates of the two groups are equal. Similarly, the Equality of Opportunity Difference (EOD) is the difference between the two groups $a$ and $b$:
EOD = P (ŷ = 1|y = 1, A = a)− P (ŷ = 1|y = 1, A = b) (16)
Equalized odds suggests that a predictor is unbiased if both the true-positive rate (TPR) and the false-positive rate (FPR) of the two groups are equal. We compute the Equalized Odds Difference (EQOD) as the average of the differences in TPR and FPR:
$\mathrm{EQOD} = (\mathrm{TPRD} + \mathrm{FPRD})/2$   (17)
$\mathrm{TPRD} = P(\hat{y} = 1 \mid y = 1, A = a) - P(\hat{y} = 1 \mid y = 1, A = b)$   (18)
$\mathrm{FPRD} = P(\hat{y} = 1 \mid y = 0, A = a) - P(\hat{y} = 1 \mid y = 0, A = b)$   (19)
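A small NumPy sketch of these three metrics (equations 15-19), assuming binary predictions, labels, and a binary protected attribute are available as arrays; it is illustrative only:

```python
import numpy as np

def fairness_metrics(y_true, y_pred, group):
    """Return DPD, EOD, EQOD for a binary protected attribute `group` (0/1)."""
    def rate(mask):
        return y_pred[mask].mean() if mask.any() else 0.0
    a, b = group == 1, group == 0
    dpd = rate(a) - rate(b)                                      # Eq. (15)
    eod = rate(a & (y_true == 1)) - rate(b & (y_true == 1))      # Eq. (16) / (18)
    fprd = rate(a & (y_true == 0)) - rate(b & (y_true == 0))     # Eq. (19)
    eqod = (eod + fprd) / 2                                      # Eq. (17)
    return dpd, eod, eqod

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
y_pred = rng.integers(0, 2, 500)
group = rng.integers(0, 2, 500)        # e.g., 1 = female, 0 = male
print(fairness_metrics(y_true, y_pred, group))
```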
A.3 PERFORMANCE COMPARISONS
A.3.1 OVERALL PERFORMANCE
The following table shows the performance comparison between MTATE and baseline methods for the balanced test data.
Table A1: Balanced performance of MTATE and compared algorithms for the mortality prediction in the next 72 hours (pos:neg = 1:1).
Method        ROCAUC      ACC         PRAUC       DPD         EOD         EQOD
Transformer   0.70(0.09)  0.67(0.09)  0.69(0.11)  0.17(0.13)  0.15(0.12)  0.10(0.06)
LSTM          0.71(0.11)  0.67(0.11)  0.76(0.12)  0.19(0.12)  0.20(0.12)  0.12(0.05)
RETAIN        0.68(0.12)  0.67(0.11)  0.72(0.13)  0.22(0.13)  0.21(0.12)  0.12(0.06)
DANN*         0.58(0.12)  0.54(0.14)  0.59(0.12)  0.21(0.19)  0.16(0.13)  0.12(0.08)
MTATE (Ours)  0.73(0.09)  0.65(0.11)  0.75(0.11)  0.15(0.10)  0.14(0.11)  0.08(0.05)
A.3.2 FAIRNESS PERFORMANCE WITHIN EACH DOMAIN
Figure A2: The comparison between MTATE with baseline methods for the percentage difference score in PRAUC for each domain. Y-axis represents the percentage difference. X-axis represents the subpopulation domain, each domain consists of two subpopulations (e.g., Young (< 65 y/o) vs. Old in Age domain, Sepsis vs. Non-Sepsis in Sepsis Domain ). CCI stands for charlson comorbidity score, DB stands for diabetes, HT stands for hypertension, CVD stands for cardiovascular disease, CKD stands for chronic kidney disease.
A.4 STUDY OF MASKING RATE
We analysed the effect of different masking rates on performance. Figure A3 shows ROCAUC, PRAUC, and accuracy with respect to masking rates from 0 to 0.9. ROCAUC and PRAUC follow a similar trend: both start at a relatively high score, gradually decrease with some fluctuation, and reach their lowest scores at masking rates of 0.8 and 0.9. In contrast, accuracy shows the opposite trend: it starts at its lowest score and increases almost monotonically. The best masking rate is highly dependent on the input data, so the choice of masking rate is not universal. For our experimental data, one of the best masking rates is 0.4, which gives the highest accuracy and PRAUC and the third-best ROCAUC.
Figure A3: Performance Score of ROCAUC, PRAUC and Accuracy with different masking rate
A.5 RELATIONSHIP BETWEEN OUTCOME LOSS, DOMAIN LOSS AND REPRESENTATION-WISE ATTENTION
Figure A4: Relationship between outcome loss, domain loss and representation-wise attention in all domains. Y-axis represents the outcome loss, x-axis represents the domain loss. The colored dots represent the representation-wise attention, and darker color represents higher attention. | 1. What is the focus and contribution of the paper on fair representation learning?
2. What are the strengths of the proposed approach, particularly in terms of its ability to achieve fairness and prediction performance?
3. What are the weaknesses of the paper regarding the lack of explanations and intuitions about the proposed method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the selection of tuning parameters and their impact on the model's results? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this paper, the authors propose an attention-based encoder model to learn data representations that are fair for prespecified subpopulations. The proposed method was tested on real data to learn EHR representations and predict 72-hour mortality after dialysis.
Strengths And Weaknesses
The experiment was well conducted. The proposed model has shown advantages in both prediction performance and fairness across prespecified subgroups.
More explanation and intuitions can be provided for the proposed method, for example, the rationale for introducing each model part and their connection with existing methods. Also, the reason why the model can achieve desired fairness can be better described.
Clarity, Quality, Novelty And Reproducibility
The method was presented clearly with formulas, and the experiments were well conducted. It connects domain-specific and domain-invariant representations to achieve fairness and better model prediction performance.
However, the code was not provided for reproducibility purposes. Also, are there any tuning parameters for the proposed method? How were they selected and how much do they affect the model results? |
ICLR | Title
Unbiased Representation of Electronic Health Records for Patient Outcome Prediction
Abstract
Fairness is one of the newly emerging focuses for building trustworthy artificial intelligence (AI) models. One of the reasons resulting in an unfair model is the algorithm bias towards different groups of samples. A biased model may benefit certain groups but disfavor others. As a result, leaving the fairness problem unresolved might have a significant negative impact, especially in the context of healthcare applications. Integrating both domain-specific and domain-invariant representations, we propose a masked triple attention transformer encoder (MTATE) to learn unbiased and fair data representations of different subpopulations. Specifically, MTATE includes multiple domain classifiers and uses three attention mechanisms to effectively learn the representations of diverse subpopulations. In the experiment on real-world healthcare data, MTATE performed the best among the compared models regarding overall performance and fairness.
1 INTRODUCTION
Electronic Health Record (EHR) based clinical risk prediction using temporal machine learning (ML) and deep learning (DL) models helps clinicians provide precise and timely interventions to high-risk patients and better allocate hospital resources (Xiao et al., 2018; Shamout et al., 2020). Nevertheless, a long-standing issue that hinders ML and DL model deployment is the concern about model fairness (Gianfrancesco et al., 2018; Ahmad et al., 2020). Fairness in AI/DL refers to a model's ability to make a prediction or decision without any bias against any individual or group (Mehrabi et al., 2021). A biased model typically exhibits two behaviors: it performs significantly better in certain populations than in others (Parikh et al., 2019), and it makes inequitable decisions toward different groups (Panch et al., 2019). Clinical decision-making based upon biased predictions may cause delayed treatment plans for patients in minority groups or misspend healthcare resources where treatment is unnecessary (Gerke et al., 2020).
The data distribution shift problem across different domains is one of the major reasons a model could be biased (Adragna et al., 2020). To address the fairness issue, domain adaptation methods have been developed. The main idea is to learn invariant hidden features across different domains, such that a model performs similarly no matter which domain the test cases belong to. Pioneering domain adaptation models, including DANN (Ganin et al., 2016), VARADA (Purushotham et al., 2017), and VRNN (Chung et al., 2015), learn invariant hidden features by adding a domain classifier and using a gradient reversal layer to maximize the domain classifier's loss. In return, the learned hidden features become indistinguishable across domains. The recent MS-ADS (Khoshnevisan & Chi, 2021) has shown robust performance across minority racial groups by maximizing the distance between the globally-shared representations and the individual local representations of every domain, which effectively consolidates the invariant globally-shared representations across domains. However, it is difficult to align large domain shifts and to model complex shifts across multiple overlapping domains.
Alternatively, the data distribution shift problem could be addressed using domain-specific bias correction approaches. A recent study showed that features strongly associated with the outcome of interest could be subpopulation-specific (Chouldechova & Roth, 2018). It indicates that lumping together all features from patients with different backgrounds might bury unique domain-specific
characteristics. Afrose et al. proposed to use double prioritized bias correction to train multiple candidate models for different demographic groups (Afrose et al., 2022). Similarly, AC-TPC Lee & Van Der Schaar (2020) and CAMELOT Aguiar et al. (2022) used clustering algorithms to generate representations of patients with similar backgrounds and use cluster-specific representations for the outcome prediction.
In summary, both domain adaptation and domain-specific bias correction approaches address the same fairness issue with different assumptions about the relationship between the latent representation and the prediction outcome. The former assumes that performance variation across domains is reduced by an invariant feature representation, while the latter favors domain-specific representations. It remains unclear whether a domain-invariant or a domain-specific data representation should be used for a given prediction task.
| 1. What is the focus and contribution of the paper on transformer-encoder framework for EHR data?
2. What are the strengths of the proposed approach, particularly in terms of its soundness and relevance to an important problem?
3. What are the weaknesses of the paper regarding its comparisons with other works and lack of code provision?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific questions or concerns regarding the paper's methodology or results? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors present a transformer-encoder method for learning representations of EHR data for multiple prediction tasks. They demonstrate that their method out-performs other DL methods on one prediction task using MIMIC-III data.
Strengths And Weaknesses
Strengths:
This paper is well written
it focuses on an important problem (they also motivate the problem very well)
the methods seem sound, though I am not an expert
the experiments use relevant data, and the results demonstrate that their method works well
their experiments use public data
Weaknesses
There has been a lot of work on this topic, and the authors don't cite or compare with existing methods. Here is a short list of papers that look relevant to this topic, but were not mentioned or compared with in the paper: -- https://doi.org/10.1109/ICDM50108.2020.00050 -- https://doi.org/10.1145/3394486.3403129 -- https://doi.org/10.1109/ICDM51629.2021.00060 -- here is a relevant review: https://doi.org/10.1016/j.jbi.2020.103671
there is no comparison against state-of-the-art risk prediction models in medicine, or any non-DL models at all. there are several such models used in medicine (which don't use DL), which would be good baselines to compare against here.
the authors don't provide any code
Clarity, Quality, Novelty And Reproducibility
Clarity
this paper is very clear
Quality
the methods and experiments all seem sound
Novelty
I think there are issues with the novelty of this work (see "weaknesses" above)
Reproducibility
The authors don't mention that they will share code, but their explanations of their methods are fairly clear, so I think someone could independently reproduce their method. |
ICLR | Title
Unbiased scalable softmax optimization
Abstract
Recent neural network and language models have begun to rely on softmax distributions with an extremely large number of categories. In this context calculating the softmax normalizing constant is prohibitively expensive. This has spurred a growing literature of efficiently computable but biased estimates of the softmax. In this paper we present the first two unbiased algorithms for maximizing the softmax likelihood whose work per iteration is independent of the number of classes and datapoints (and does not require extra work at the end of each epoch). We compare our unbiased methods’ empirical performance to the state-of-the-art on seven real world datasets, where they comprehensively outperform all competitors.
1 INTRODUCTION
Under the softmax model1 the probability that a random variable y takes on the label ` ∈ {1, ...,K}, is given by
$p(y = \ell \mid x; W) = \frac{e^{x^\top w_\ell}}{\sum_{k=1}^{K} e^{x^\top w_k}},$   (1)
where $x \in \mathbb{R}^D$ is the covariate, $w_k \in \mathbb{R}^D$ is the vector of parameters for the $k$-th class, and $W = [w_1, w_2, \ldots, w_K] \in \mathbb{R}^{D \times K}$ is the parameter matrix. Given a dataset of $N$ label-covariate pairs $\mathcal{D} = \{(y_i, x_i)\}_{i=1}^{N}$, the ridge-regularized maximum log-likelihood problem is given by
$L(W) = \sum_{i=1}^{N} x_i^\top w_{y_i} - \log\Big(\sum_{k=1}^{K} e^{x_i^\top w_k}\Big) - \frac{\mu}{2}\|W\|_2^2,$   (2)
where ‖W‖2 denotes the Frobenius norm. This paper focusses on how to maximize (2) when N,K,D are all large. Having large N,K,D is increasingly common in modern applications such as natural language processing and recommendation systems, where N,K,D can each be on the order of millions or billions (Partalas et al., 2015; Chelba et al., 2013; Bhatia et al.).
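For concreteness, the regularized log-likelihood in (2) can be evaluated directly for small problems, as in the sketch below (NumPy, with illustrative sizes; for the large $N, K, D$ regime discussed next, this direct evaluation is exactly what becomes infeasible):

```python
import numpy as np
from scipy.special import logsumexp

def softmax_log_likelihood(W, X, y, mu):
    """Eq. (2): sum_i [ x_i^T w_{y_i} - log sum_k exp(x_i^T w_k) ] - (mu/2)||W||_F^2."""
    logits = X @ W                                    # (N, K)
    ll = logits[np.arange(len(y)), y] - logsumexp(logits, axis=1)
    return ll.sum() - 0.5 * mu * np.sum(W ** 2)

rng = np.random.default_rng(0)
N, D, K = 100, 5, 10
X, y = rng.standard_normal((N, D)), rng.integers(0, K, N)
W = rng.standard_normal((D, K))
print(softmax_log_likelihood(W, X, y, mu=1.0))
```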
A natural approach to maximizing $L(W)$ with large $N, K, D$ is to use Stochastic Gradient Descent (SGD), sampling a mini-batch of datapoints each iteration. However, if $K, D$ are large then the $O(KD)$ cost of calculating the normalizing sum $\sum_{k=1}^{K} e^{x_i^\top w_k}$ in the stochastic gradients can still be prohibitively expensive. Several approximations that avoid calculating the normalizing sum have been proposed to address this difficulty. These include tree-structured methods (Bengio et al., 2003; Daume III et al., 2016; Grave et al., 2016), sampling methods (Bengio & Senécal, 2008; Mnih & Teh, 2012; Joshi et al., 2017) and self-normalization (Andreas & Klein, 2015). Alternative models such as the spherical family of losses (de Brébisson & Vincent, 2015; Vincent et al., 2015) that do not require normalization have been proposed to sidestep the issue entirely (Martins & Astudillo, 2016). Krishnapuram et al. (2005) avoid calculating the sum using a maximization-majorization approach based on lower-bounding the eigenvalues of the Hessian matrix. All² of these approximations are computationally tractable for large $N, K, D$, but are unsatisfactory in that they are biased and do not converge to the optimal $W^* = \operatorname{argmax} L(W)$.
1Also known as the multinomial logit model. 2The method of Krishnapuram et al. (2005) does converge to the optimal MLE, but has O(ND) runtime
per iteration which is not feasible for large N,D.
Recently, Raman et al. (2016) managed to recast (2) as a double-sum over N and K. This formulation is amenable to SGD that samples both a datapoint and a class each iteration, reducing the per-iteration cost to O(D). The problem is that vanilla SGD applied to this formulation is unstable, in that the gradients suffer from high variance and are susceptible to computational overflow. Raman et al. (2016) deal with this instability by occasionally calculating the normalizing sum for all datapoints at a cost of O(NKD). Although this achieves stability, its high cost nullifies the benefit of the cheap O(D) per-iteration cost.
The goal of this paper is to develop robust SGD algorithms for optimizing double-sum formulations of the softmax likelihood. We develop two such algorithms. The first is a new SGD method called U-max, which is guaranteed to have bounded gradients and converge to the optimal solution of (2) for all sufficiently small learning rates. The second is an implementation of Implicit SGD, a stochastic gradient method that is known to be more stable than vanilla SGD and yet has similar convergence properties (Toulis et al., 2016). We show that the Implicit SGD updates for the double-sum formulation can be efficiently computed and have a bounded step size, guaranteeing stability.
We compare the performance of U-max and Implicit SGD to the (biased) state-of-the-art methods for maximizing the softmax likelihood which cost O(D) per iteration. Both U-max and Implicit SGD outperform all other methods. Implicit SGD has the best performance with an average log-loss 4.29 times lower than the previous state-of-the-art.
In summary, our contributions in this paper are that we:
1. Provide a simple derivation of the softmax double-sum formulation and identify why vanilla SGD is unstable when applied to this formulation (Section 2).
2. Propose the U-max algorithm to stabilize the SGD updates and prove its convergence (Section 3.1).
3. Derive an efficient Implicit SGD implementation, analyze its runtime and bound its step size (Section 3.2).
4. Conduct experiments showing that both U-max and Implicit SGD outperform the previous state-of-the-art, with Implicit SGD having the best performance (Section 4).
2 CONVEX DOUBLE-SUM FORMULATION
2.1 DERIVATION OF DOUBLE-SUM
In order to have an SGD method that samples both datapoints and classes each iteration, we need to represent (2) as a double-sum over datapoints and classes. We begin by rewriting (2) in a more convenient form,
$$L(W) = \sum_{i=1}^{N} -\log\Big(1 + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i})}\Big) - \frac{\mu}{2}\|W\|_2^2. \qquad (3)$$
The key to converting (3) into its double-sum representation is to express the negative logarithm using its convex conjugate:
$$-\log(a) = \max_{v < 0} \{av - (-\log(-v) - 1)\} = \max_{u} \{-u - e^{-u} a + 1\} \qquad (4)$$
where $u = -\log(-v)$ and the optimal value of $u$ is $u^*(a) = \log(a)$. Applying (4) to each of the logarithmic terms in (3) yields
$$L(W) = \sum_{i=1}^{N} \max_{u_i \in \mathbb{R}} \Big\{-u_i - e^{-u_i}\big(1 + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i})}\big) + 1\Big\} - \frac{\mu}{2}\|W\|_2^2 = -\min_{u \geq 0}\{f(u, W)\} + N,$$
where
$$f(u, W) = \sum_{i=1}^{N} \sum_{k \neq y_i} \Big( \frac{u_i + e^{-u_i}}{K-1} + e^{x_i^\top (w_k - w_{y_i}) - u_i} \Big) + \frac{\mu}{2}\|W\|_2^2 \qquad (5)$$
is our double-sum representation that we seek to minimize, and the optimal solution for $u_i$ is $u_i^*(W) = \log(1 + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i})}) \geq 0$. Clearly $f$ is a jointly convex function in $u$ and $W$. In Appendix A we prove that the optimal value of $u$ and $W$ is contained in a compact convex set and that $f$ is strongly convex within this set. Thus performing projected-SGD on $f$ is guaranteed to converge to a unique optimum with a convergence rate of $O(1/T)$, where $T$ is the number of iterations (Lacoste-Julien et al., 2012).
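The duality relation above can be checked numerically. The following short sketch (our own verification, with randomly generated data) evaluates $f$ at the optimal $u^*$ and confirms that $L(W) = -f(u^*, W) + N$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K, mu = 20, 5, 7, 0.1
X, y, W = rng.normal(size=(N, D)), rng.integers(K, size=N), rng.normal(size=(D, K))

margins = X @ W - (X @ W)[np.arange(N), y][:, None]        # x_i^T (w_k - w_{y_i})
mask = np.ones((N, K), dtype=bool); mask[np.arange(N), y] = False
exp_m = np.where(mask, np.exp(margins), 0.0)               # zero out the k = y_i term

u_star = np.log1p(exp_m.sum(axis=1))                       # optimal u_i per (5)
f_star = np.sum(u_star + np.exp(-u_star)
                + (exp_m * np.exp(-u_star)[:, None]).sum(axis=1)) + 0.5 * mu * np.sum(W ** 2)
L = np.sum(-np.log1p(exp_m.sum(axis=1))) - 0.5 * mu * np.sum(W ** 2)
print(np.isclose(L, -f_star + N))                          # True
```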
2.2 INSTABILITY OF VANILLA SGD
The challenge in optimizing $f$ using SGD is that it can have problematically large magnitude gradients. Observe that $f = \mathbb{E}_{ik}[f_{ik}]$ where $i \sim \mathrm{unif}(\{1, ..., N\})$, $k \sim \mathrm{unif}(\{1, ..., K\} - \{y_i\})$ and
$$f_{ik}(u, W) = N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i}\big) + \frac{\mu}{2}\big(\beta_{y_i}\|w_{y_i}\|_2^2 + \beta_k\|w_k\|_2^2\big), \qquad (6)$$
where $\beta_j = N / \big(n_j + (N - n_j)/(K-1)\big)$ is the inverse of the probability of class $j$ being sampled either through $i$ or $k$, and $n_j = |\{i : y_i = j,\ i = 1, ..., N\}|$. The corresponding stochastic gradient is:
$$\begin{aligned}
\nabla_{w_k} f_{ik}(u, W) &= N(K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i} x_i + \mu\beta_k w_k \\
\nabla_{w_{y_i}} f_{ik}(u, W) &= -N(K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i} x_i + \mu\beta_{y_i} w_{y_i} \\
\nabla_{w_{j \notin \{k, y_i\}}} f_{ik}(u, W) &= 0 \\
\nabla_{u_i} f_{ik}(u, W) &= -N(K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i} + N(1 - e^{-u_i}) \qquad (7)
\end{aligned}$$
If $u_i$ equals its optimal value $u_i^*(W) = \log(1 + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i})})$ then $e^{x_i^\top (w_k - w_{y_i}) - u_i} \leq 1$ and the magnitude of the $N(K-1)$ terms in the stochastic gradient is bounded by $N(K-1)\|x_i\|_2$. However, if $u_i \ll x_i^\top (w_k - w_{y_i})$, then $e^{x_i^\top (w_k - w_{y_i}) - u_i} \gg 1$ and the magnitude of the gradients can become extremely large.
Extremely large gradients lead to two major problems: (a) the gradients may computationally overflow floating-point precision and cause the algorithm to crash, (b) they result in the stochastic gradient having high variance, which leads to slow convergence³. In Section 4 we show that these problems occur in practice and make vanilla SGD both an unreliable and inefficient method⁴.
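The overflow mode is easy to reproduce. The tiny sketch below (our own illustration, with hypothetical margin values) shows how the factor $e^{x_i^\top (w_k - w_{y_i}) - u_i}$ in (7) behaves when $u_i$ is near its optimum versus when it lags far behind the margin.

```python
import numpy as np

margin = 50.0                          # hypothetical value of x_i^T (w_k - w_{y_i})
u_opt = np.log(1.0 + np.exp(margin))   # optimal u_i for this single term
print(np.exp(margin - u_opt))          # ~1.0: the gradient factor is well behaved

u_stale = 0.0                          # u_i far below its optimum
print(np.exp(margin - u_stale))        # ~5e21: multiplied by N(K-1) this is enormous
print(np.exp(800.0 - u_stale))         # inf: a larger margin overflows float64 outright
```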
The sampled softmax optimizers in the literature (Bengio & Senécal, 2008; Mnih & Teh, 2012; Joshi et al., 2017) do not have the issue of large magnitude gradients. Their gradients are bounded by $N(K-1)\|x_i\|_2$ due to their approximations to $u_i^*(W)$ always being greater than $x_i^\top (w_k - w_{y_i})$. For example, in one-vs-each (Titsias, 2016), $u_i^*(W)$ is approximated by $\log(1 + e^{x_i^\top (w_k - w_{y_i})}) > x_i^\top (w_k - w_{y_i})$. However, as they only approximate $u_i^*(W)$, they cannot converge to the optimal $W^*$. The goal of this paper is to design reliable and efficient SGD algorithms for optimizing the double-sum formulation $f(u, W)$ in (5). We propose two such methods: U-max (Section 3.1) and an implementation of Implicit SGD (Section 3.2). But before we introduce these methods we should establish that $f$ is a good choice for the double-sum formulation.
2.3 CHOICE OF DOUBLE-SUM FORMULATION
The double-sum in (5) is different to that of Raman et al. (2016). Their formulation can be derived by applying the convex conjugate substitution to (2) instead of (3). The resulting equations are $L(W) = -\min_{\bar u}\big\{\frac{1}{N}\sum_{i=1}^N \frac{1}{K-1}\sum_{k \neq y_i} \bar f_{ik}(\bar u, W)\big\} + N$ where
$$\bar f_{ik}(\bar u, W) = N\big(\bar u_i - x_i^\top w_{y_i} + e^{x_i^\top w_{y_i} - \bar u_i} + (K-1)e^{x_i^\top w_k - \bar u_i}\big) + \frac{\mu}{2}\big(\beta_{y_i}\|w_{y_i}\|_2^2 + \beta_k\|w_k\|_2^2\big) \qquad (8)$$
and the optimal solution for $\bar u_i$ is $\bar u_i^*(W^*) = \log(\sum_{k=1}^K e^{x_i^\top w_k^*})$.
³The convergence rate of SGD is inversely proportional to the second moment of its gradients (Lacoste-Julien et al., 2012).
⁴The same problems arise if we approach optimizing (3) via stochastic composition optimization (Wang et al., 2016). As is shown in Appendix B, stochastic composition optimization yields near-identical expressions for the stochastic gradients in (7) and has the same stability issues.
Although both double-sum formulations can be used as a basis for SGD, our formulation tends to have smaller magnitude stochastic gradients and hence faster convergence. To see this, note that typically $x_i^\top w_{y_i} = \max_k\{x_i^\top w_k\}$, and so the $\bar u_i$, $x_i^\top w_{y_i}$ and $e^{x_i^\top w_{y_i} - \bar u_i}$ terms in (8) are of the greatest magnitude. Although at optimality these terms should roughly cancel, this will not be the case during the early stages of optimization, leading to stochastic gradients of large magnitude. In contrast, the function $f_{ik}$ in (6) only has $x_i^\top w_{y_i}$ appearing in a negative exponent, and so if $x_i^\top w_{y_i}$ is large then the magnitude of the stochastic gradients will be small. In Section 4 we present numerical results confirming that our double-sum formulation leads to faster convergence.
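The contrast can be seen on a single term. The snippet below (our own illustration with hypothetical scores; both auxiliary variables initialized at $\log K$, as in Algorithm 1) compares the dominant exponentials appearing in (6) and (8) early in optimization.

```python
import numpy as np

score_true, score_k = 30.0, 5.0     # hypothetical x_i^T w_{y_i} and x_i^T w_k
u_init = np.log(10_000)             # u_i and u_bar_i both initialised at log(K)

# formulation (6): exponent x_i^T (w_k - w_{y_i}) - u_i stays tiny
print(np.exp(score_k - score_true - u_init))   # ~1e-15
# formulation (8): exponent x_i^T w_{y_i} - u_bar_i is huge until u_bar_i catches up
print(np.exp(score_true - u_init))             # ~1e9
```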
3 STABLE SGD METHODS
3.1 U-MAX METHOD
As explained in Section 2.2, vanilla SGD has large gradients when $u_i \ll x_i^\top (w_k - w_{y_i})$. This can only occur when $u_i$ is less than its optimum value for the current $W$, since $u_i^*(W) = \log(1 + \sum_{j \neq y_i} e^{x_i^\top (w_j - w_{y_i})}) \geq x_i^\top (w_k - w_{y_i})$. A simple remedy is to set $u_i = \log(1 + e^{x_i^\top (w_k - w_{y_i})})$ whenever $u_i \ll x_i^\top (w_k - w_{y_i})$. Since $\log(1 + e^{x_i^\top (w_k - w_{y_i})}) > x_i^\top (w_k - w_{y_i})$, this guarantees that $u_i > x_i^\top (w_k - w_{y_i})$ and so the gradients are bounded. It also brings $u_i$ closer⁵ to its optimal value for the current $W$ and thereby decreases the objective $f(u, W)$.
This is exactly the mechanism behind the U-max algorithm — see Algorithm 1 in Appendix C for its pseudocode. U-max is the same as vanilla SGD except for two modifications: (a) $u_i$ is set equal to $\log(1 + e^{x_i^\top (w_k - w_{y_i})})$ whenever $u_i \leq \log(1 + e^{x_i^\top (w_k - w_{y_i})}) - \delta$ for some threshold $\delta > 0$, and (b) $u_i$ is projected onto $[0, B_u]$ and $W$ onto $\{W : \|W\|_2 \leq B_W\}$, where $B_u$ and $B_W$ are set so that the optimal $u_i^* \in [0, B_u]$ and the optimal $W^*$ satisfies $\|W^*\|_2 \leq B_W$. See Appendix A for more details on how to set $B_u$ and $B_W$.
Theorem 1. Let $B_f = \max_{\|W\|_2^2 \leq B_W^2,\, 0 \leq u \leq B_u} \max_{ik} \|\nabla f_{ik}(u, W)\|_2$. Suppose the learning rate satisfies $\eta_t \leq \delta^2/(4B_f^2)$. Then U-max with threshold $\delta$ converges to the optimum of (2), and the rate is at least as fast as SGD with the same learning rate, in expectation.
Proof. The proof is provided in Appendix D.
U-max directly resolves the problem of extremely large gradients. Modification (a) ensures that $\delta \geq x_i^\top (w_k - w_{y_i}) - u_i$ (otherwise $u_i$ would be increased to $\log(1 + e^{x_i^\top (w_k - w_{y_i})})$), and so the magnitude of the U-max gradients is bounded above by $N(K-1)e^{\delta}\|x_i\|_2$. In U-max there is a trade-off between the gradient magnitude and the learning rate that is controlled by $\delta$. For Theorem 1 to apply we require that the learning rate $\eta_t \leq \delta^2/(4B_f^2)$. A small $\delta$ yields small magnitude gradients, which makes convergence fast, but necessitates a small $\eta_t$, which makes convergence slow.
3.2 IMPLICIT SGD
Another method that solves the large gradient problem is Implicit SGD⁶ (Bertsekas, 2011; Toulis et al., 2016). Implicit SGD uses the update equation
$$\theta^{(t+1)} = \theta^{(t)} - \eta_t \nabla f(\theta^{(t+1)}, \xi_t), \qquad (9)$$
where $\theta^{(t)}$ is the value of the $t$-th iterate, $f$ is the function we seek to minimize and $\xi_t$ is a random variable controlling the stochastic gradient such that $\nabla f(\theta) = \mathbb{E}_{\xi_t}[\nabla f(\theta, \xi_t)]$. The update (9) differs from vanilla SGD in that $\theta^{(t+1)}$ appears on both the left and right side of the equation,
whereas in vanilla SGD it appears only on the left side. In our case $\theta = (u, W)$ and $\xi_t = (i_t, k_t)$ with $\nabla f(\theta^{(t+1)}, \xi_t) = \nabla f_{i_t, k_t}(u^{(t+1)}, W^{(t+1)})$. Although Implicit SGD has similar convergence rates to vanilla SGD, it has other properties that can make it preferable over vanilla SGD. It is known to be more robust to the learning rate (Toulis et al., 2016), which is important since a good value for the learning rate is never known a priori. Another property, which is of particular interest to our problem, is that it has smaller step sizes.
⁵Since $u_i < x_i^\top (w_k - w_{y_i}) < \log(1 + e^{x_i^\top (w_k - w_{y_i})}) < \log(1 + \sum_{j \neq y_i} e^{x_i^\top (w_j - w_{y_i})}) = u_i^*(W)$.
⁶Also known as an “incremental proximal algorithm” (Bertsekas, 2011).
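The fixed-point nature of (9) is easiest to see on a one-dimensional toy problem. The sketch below is our own illustration (with an arbitrary toy loss, not the softmax objective): an implicit step solves a monotone scalar equation, which keeps the step small even when the raw gradient is huge.

```python
import numpy as np
from scipy.optimize import brentq

def f_grad(t):                    # derivative of the toy loss f(t) = exp(t) + t**2 / 2
    return np.exp(t) + t

def explicit_step(theta_old, eta):
    return theta_old - eta * f_grad(theta_old)

def implicit_step(theta_old, eta):
    # Solve t = theta_old - eta * f'(t), i.e. the root of g(t) = t - theta_old + eta * f'(t).
    g = lambda t: t - theta_old + eta * f_grad(t)
    lo = theta_old - eta * abs(f_grad(theta_old)) - 1.0   # bracket containing the root
    hi = theta_old + 1.0
    return brentq(g, lo, hi)

print(explicit_step(10.0, 1e-2))  # ~ -210: the exp(10) gradient causes a huge jump
print(implicit_step(10.0, 1e-2))  # ~ 6: a small, stable step toward the minimum
```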
Proposition 1. Consider applying Implicit SGD to optimizing f(θ) = Eξ[f(θ, ξ)] where f(θ, ξ) is m-strongly convex for all ξ. Then
$$\|\nabla f(\theta^{(t+1)}, \xi_t)\|_2 \leq \|\nabla f(\theta^{(t)}, \xi_t)\|_2 - m\|\theta^{(t+1)} - \theta^{(t)}\|_2$$
and so the Implicit SGD step size is smaller than that of vanilla SGD.
Proof. The proof is provided in Appendix E.
The bound in Proposition 1 can be tightened for our particular problem. Unlike vanilla SGD, whose step size magnitude is exponential in $x_i^\top (w_k - w_{y_i}) - u_i$, as shown in (7), for Implicit SGD the step size is asymptotically linear in $x_i^\top (w_k - w_{y_i}) - u_i$. This effectively guarantees that Implicit SGD cannot suffer from computational overflow.
Proposition 2. Consider the Implicit SGD algorithm where in each iteration only one datapoint $i$ and one class $k \neq y_i$ is sampled and there is no ridge regularization. The magnitude of its step size in $w$ is $O(x_i^\top (w_k - w_{y_i}) - u_i)$.
Proof. The proof is provided in Appendix F.2.
The difficulty in applying Implicit SGD is that in each iteration one has to compute a solution to (9). The tractability of this procedure is problem dependent. We show that computing a solution to (9) is indeed tractable for the problem considered in this paper. The details of these mechanisms are laid out in full in Appendix F.
Proposition 3. Consider the Implicit SGD algorithm where in each iteration $n$ datapoints and $m$ classes are sampled. Then the Implicit SGD update $\theta^{(t+1)}$ can be computed to within accuracy $\epsilon$ in runtime $O(n(n+m)(D + n\log(\epsilon^{-1})))$.
Proof. The proof is provided in Appendix F.3.
In Proposition 3 the $\log(\epsilon^{-1})$ factor comes from applying a first-order method to solve the strongly convex Implicit SGD update equation. It may be the case that performing this optimization is more expensive than computing the $x_i^\top w_k$ inner products, and so each iteration of Implicit SGD may be significantly slower than that of vanilla SGD or U-max. However, in the special case of $n = m = 1$ we can use the bisection method to give an explicit upper bound on the optimization cost.
Proposition 4. Consider the Implicit SGD algorithm with learning rate $\eta$ where in each iteration only one datapoint $i$ and one class $k \neq y_i$ is sampled and there is no ridge regularization. Then the Implicit SGD iterate $\theta^{(t+1)}$ can be computed to within accuracy $\epsilon$ with only two $D$-dimensional vector inner products and at most $\log_2(\epsilon^{-1}) + \log_2(|x_i^\top (w_k - w_{y_i}) - u_i| + 2\eta N\|x_i\|_2^2 + \log(K-1))$ bisection method function evaluations.
Proof. The proof is provided in Appendix F.1
For any reasonably large dimension $D$, the cost of the two $D$-dimensional vector inner products will outweigh the cost of the bisection, and Implicit SGD will have roughly the same speed per iteration as vanilla SGD or U-max.
In summary, Implicit SGD is robust to the learning rate, does not have overflow issues and its updates can be computed in roughly the same time as vanilla SGD.
4 EXPERIMENTS
Two sets of experiments were conducted to assess the performance of the proposed methods. The first compares U-max and Implicit SGD to the state-of-the-art over seven real world datasets. The second investigates the difference in performance between the two double-sum formulations discussed in Section 2.3. We begin by specifying the experimental setup and then move onto the results.
4.1 EXPERIMENTAL SETUP
Data. We used the MNIST, Bibtex, Delicious, Eurlex, AmazonCat-13K, Wiki10, and Wiki-small datasets⁷, the properties of which are summarized in Table 1. Most of the datasets are multi-label and, as is standard practice (Titsias, 2016), we took the first label as being the true label and discarded the remaining labels. To make the computation more manageable, we truncated the number of features to be at most 10,000 and the training and test size to be at most 100,000. If, as a result of the dimension truncation, a datapoint had no non-zero features then it was discarded. The features of each dataset were normalized to have unit L2 norm. All of the datasets were pre-separated into training and test sets. We only focus on the performance of the algorithms on the training set, as the goal in this paper is to investigate how best to optimize the softmax likelihood, which is defined over the training set.
Algorithms. We compared our algorithms to the state-of-the-art methods for optimizing the softmax which have runtime O(D) per iteration⁸. The competitors include Noise Contrastive Estimation (NCE) (Mnih & Teh, 2012), Importance Sampling (IS) (Bengio & Senécal, 2008) and One-Vs-Each (OVE) (Titsias, 2016). Note that these methods are all biased and will not converge to the optimal softmax MLE, but something close to it. For these algorithms we set n = 100, m = 5, which are standard settings⁹. For Implicit SGD we chose to implement the version in Proposition 4, which has n = 1, m = 1. Likewise for U-max we set n = 1, m = 1 and the threshold parameter δ = 1. The ridge regularization parameter µ was set to zero for all algorithms.
Epochs and losses. Each algorithm is run for 50 epochs on each dataset. The learning rate is decreased by a factor of 0.9 each epoch. Both the prediction error and log-loss (2) are recorded at the end of 10 evenly spaced epochs over the 50 epochs.
Learning rate. The magnitude of the gradient differs in each algorithm, due to either under- or overestimating the log-sum derivative from (2). To set a reasonable learning rate for each algorithm on
each dataset, we ran them on 10% of the training data with initial learning rates $\eta = 10^{0}, 10^{\pm 1}, 10^{\pm 2}, 10^{\pm 3}$. The learning rate with the best performance after 50 epochs is then used when the algorithm is applied to the full dataset. The tuned learning rates are presented in Table 2. Note that vanilla SGD requires a very small learning rate, otherwise it suffers from overflow.
⁷All of the datasets were downloaded from http://manikvarma.org/downloads/XC/XMLRepository.html, except Wiki-small, which was obtained from http://lshtc.iit.demokritos.gr/.
⁸Raman et al. (2016) have runtime O(NKD) per epoch, which is equivalent to O(KD) per iteration. This is a factor of K slower than the methods we compare against.
⁹We also experimented with setting n = 1, m = 1 in these methods and there was virtually no difference except that the runtime was slower. For example, in Appendix G we plot the performance of NCE with n = 1, m = 1 and n = 100, m = 5 applied to the Eurlex dataset for different learning rates, and there is very little difference between the two.
4.2 RESULTS
Comparison to state-of-the-art. Plots of the performance of the algorithms on each dataset are displayed in Figure 1, with the relative performance compared to Implicit SGD given in Table 3. The Implicit SGD method has the best performance on virtually all datasets. Not only does it converge faster in the first few epochs, it also converges to the optimal MLE (unlike the biased methods that prematurely plateau). On average after 50 epochs, Implicit SGD’s log-loss is a factor of 4.29 lower than the previous state-of-the-art. The U-max algorithm also outperforms the previous state-of-the-art on most datasets. U-max performs better than Implicit SGD on AmazonCat, although in general Implicit SGD has superior performance. Vanilla SGD’s performance is better than the previous state-of-the-art but worse than U-max and Implicit SGD. The difference in performance between vanilla SGD and U-max can largely be explained by vanilla SGD requiring a smaller learning rate to avoid computational overflow.
The sensitivity of each method to the initial learning rate can be seen in Appendix G, where the results of running each method on the Eurlex dataset with learning rates $\eta = 10^{0}, 10^{\pm 1}, 10^{\pm 2}, 10^{\pm 3}$ are presented. The results are consistent with those in Figure 1, with Implicit SGD having the best performance for most learning rate settings. For learning rates $\eta = 10^{3}, 10^{4}$ the U-max log-loss is extremely large. This can be explained by Theorem 1, which does not guarantee convergence for U-max if the learning rate is too high.
Comparison of double-sum formulations. Figure 2 illustrates the performance on the Eurlex dataset of U-max using the proposed double-sum in (6) compared to U-max using the double-sum of Raman et al. (2016) in (8). The proposed double-sum clearly outperforms for all¹⁰ learning rates $\eta = 10^{0}, 10^{\pm 1}, 10^{\pm 2}, 10^{-3}, 10^{-4}$, with its 50th-epoch log-loss being 3.08 times lower on average. This supports the argument from Section 2.3 that SGD methods applied to the proposed double-sum have smaller magnitude gradients and converge faster.
5 CONCLUSION
In this paper we have presented the U-max and Implicit SGD algorithms for optimizing the softmax likelihood. These are the first algorithms that require only O(D) computation per iteration (without extra work at the end of each epoch) and that converge to the optimal softmax MLE. Implicit SGD can be efficiently implemented and clearly outperforms the previous state-of-the-art on seven real world datasets. The result is a new method that enables optimizing the softmax for extremely large numbers of samples and classes.
So far Implicit SGD has only been applied to the simple softmax, but could also be applied to any neural network where the final layer is the softmax. Applying Implicit SGD to word2vec type models, which can be viewed as softmaxes where both x and w are parameters to be fit, might be particularly fruitful.
¹⁰The learning rates $\eta = 10^{3}$ and $10^{4}$ are not displayed in Figure 2 for visualization purposes. They had similar behavior to $\eta = 10^{2}$.
A PROOF OF VARIABLE BOUNDS AND STRONG CONVEXITY
We first establish that the optimal values of $u$ and $W$ are bounded. Next, we show that within these bounds the objective is strongly convex and its gradients are bounded.
Lemma 1 (Raman et al., 2016). The optimal value of $W$ is bounded as $\|W^*\|_2^2 \leq B_W^2$ where $B_W^2 = \frac{2N}{\mu}\log(K)$.
Proof. Since the log-likelihood term in (2) is at most zero and $W^*$ is optimal,
$$-N\log(K) = L(0) \leq L(W^*) \leq -\frac{\mu}{2}\|W^*\|_2^2.$$
Rearranging gives the desired result.
Lemma 2. The optimal value of $u_i$ is bounded as $u_i^* \leq B_u$ where $B_u = \log(1 + (K-1)e^{2 B_x B_W})$ and $B_x = \max_i\{\|x_i\|_2\}$.
Proof.
$$\begin{aligned}
u_i^* &= \log\Big(1 + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i})}\Big) \\
&\leq \log\Big(1 + \sum_{k \neq y_i} e^{\|x_i\|_2(\|w_k\|_2 + \|w_{y_i}\|_2)}\Big) \\
&\leq \log\Big(1 + \sum_{k \neq y_i} e^{2 B_x B_W}\Big) \\
&= \log\big(1 + (K-1)e^{2 B_x B_W}\big).
\end{aligned}$$
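In code, the two projection radii used by U-max follow directly from Lemmas 1 and 2. The helper below is our own sketch (the logaddexp form is an implementation choice to avoid overflow, not something specified in the paper).

```python
import numpy as np

def umax_bounds(X, N, K, mu):
    B_W = np.sqrt(2.0 * N * np.log(K) / mu)     # Lemma 1: ||W*||_2 <= B_W
    B_x = np.max(np.linalg.norm(X, axis=1))     # B_x = max_i ||x_i||_2
    # Lemma 2, B_u = log(1 + (K-1) exp(2 B_x B_W)), written so it never overflows:
    B_u = np.logaddexp(0.0, np.log(K - 1) + 2.0 * B_x * B_W)
    return B_W, B_u
```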
Lemma 3. If $\|W\|_2^2 \leq B_W^2$ and $u_i \leq B_u$ then $f(u, W)$ is strongly convex with convexity constant greater than or equal to $\min\{\exp(-B_u), \mu\}$.
Proof. Let us rewrite $f$ as
$$f(u, W) = \sum_{i=1}^{N} \Big( u_i + e^{-u_i} + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i}) - u_i} \Big) + \frac{\mu}{2}\|W\|_2^2 = \sum_{i=1}^{N} \Big( a_i^\top \theta + e^{-u_i} + \sum_{k \neq y_i} e^{b_{ik}^\top \theta} \Big) + \frac{\mu}{2}\|W\|_2^2,$$
where $\theta = (u^\top, w_1^\top, ..., w_K^\top) \in \mathbb{R}^{N + KD}$ with $a_i$ and $b_{ik}$ being appropriately defined. The Hessian of $f$ is
$$\nabla^2 f(\theta) = \sum_{i=1}^{N} \Big( e^{-u_i} e_i e_i^\top + \sum_{k \neq y_i} e^{b_{ik}^\top \theta} b_{ik} b_{ik}^\top \Big) + \mu \cdot \mathrm{diag}\{0_N, 1_{KD}\}$$
where $e_i$ is the $i$-th canonical basis vector, $0_N$ is an $N$-dimensional vector of zeros and $1_{KD}$ is a $KD$-dimensional vector of ones. It follows that
$$\nabla^2 f(\theta) \succeq I \cdot \min\Big\{\min_{0 \leq u_i \leq B_u}\{e^{-u_i}\},\ \mu\Big\} = I \cdot \min\{\exp(-B_u), \mu\} \succ 0.$$
Lemma 4. If $\|W\|_2^2 \leq B_W^2$ and $u_i \leq B_u$ then the 2-norm of both the gradient of $f$ and each stochastic gradient $f_{ik}$ is bounded by
$$B_f = N\max\{1, e^{B_u} - 1\} + 2\big(N e^{B_u} B_x + \mu \max_k\{\beta_k\} B_W\big).$$
Proof. By Jensen’s inequality,
$$\max_{\|W\|_2^2 \leq B_W^2,\, 0 \leq u \leq B_u}\|\nabla f(u, W)\|_2 = \max_{\|W\|_2^2 \leq B_W^2,\, 0 \leq u \leq B_u}\|\nabla\mathbb{E}_{ik} f_{ik}(u, W)\|_2 \leq \max_{\|W\|_2^2 \leq B_W^2,\, 0 \leq u \leq B_u}\mathbb{E}_{ik}\|\nabla f_{ik}(u, W)\|_2 \leq \max_{\|W\|_2^2 \leq B_W^2,\, 0 \leq u \leq B_u}\max_{ik}\|\nabla f_{ik}(u, W)\|_2.$$
Using the results from Lemmas 1 and 2 and the definition of $f_{ik}$ from (6),
$$\begin{aligned}
\|\nabla_{u_i} f_{ik}(u, W)\|_2 &= \big\|N\big(1 - e^{-u_i} - (K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i}\big)\big\|_2 \\
&= N\big|1 - e^{-u_i}\big(1 + (K-1)e^{x_i^\top (w_k - w_{y_i})}\big)\big| \\
&\leq N\max\big\{1, \big(1 + (K-1)e^{\|x_i\|_2(\|w_k\|_2 + \|w_{y_i}\|_2)}\big) - 1\big\} \\
&\leq N\max\{1, e^{B_u} - 1\}
\end{aligned}$$
and for $j$ indexing either the sampled class $k \neq y_i$ or the true label $y_i$,
$$\begin{aligned}
\|\nabla_{w_j} f_{ik}(u, W)\|_2 &= \big\|\pm N(K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i} x_i + \mu\beta_j w_j\big\|_2 \\
&\leq N(K-1)e^{\|x_i\|_2(\|w_k\|_2 + \|w_{y_i}\|_2)}\|x_i\|_2 + \mu\beta_j\|w_j\|_2 \\
&\leq N e^{B_u} B_x + \mu\max_k\{\beta_k\} B_W.
\end{aligned}$$
Letting
$$B_f = N\max\{1, e^{B_u} - 1\} + 2\big(N e^{B_u} B_x + \mu\max_k\{\beta_k\} B_W\big)$$
we have $\|\nabla f_{ik}(u, W)\|_2 \leq \|\nabla_{u_i} f_{ik}(u, W)\|_2 + \|\nabla_{w_k} f_{ik}(u, W)\|_2 + \|\nabla_{w_{y_i}} f_{ik}(u, W)\|_2 \leq B_f$. In conclusion:
$$\max_{\|W\|_2^2 \leq B_W^2,\, 0 \leq u \leq B_u}\|\nabla f(u, W)\|_2 \leq \max_{\|W\|_2^2 \leq B_W^2,\, u_i \leq B_u}\max_{ik}\|\nabla f_{ik}(u, W)\|_2 \leq B_f.$$
B STOCHASTIC COMPOSITION OPTIMIZATION
We can write the equation for $L(W)$ from (3) as (where we have set $\mu = 0$ for notational simplicity)
$$L(W) = -\sum_{i=1}^{N}\log\Big(1 + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i})}\Big) = \mathbb{E}_i\big[h_i(\mathbb{E}_k[g_k(W)])\big]$$
where $i \sim \mathrm{unif}(\{1, ..., N\})$, $k \sim \mathrm{unif}(\{1, ..., K\})$, $h_i(v) \in \mathbb{R}$, $g_k(W) \in \mathbb{R}^N$ and
$$h_i(v) = -N\log(1 + e_i^\top v), \qquad [g_k(W)]_i = \begin{cases} K e^{x_i^\top (w_k - w_{y_i})} & \text{if } k \neq y_i \\ 0 & \text{otherwise.} \end{cases}$$
Here $e_i^\top v = v_i \in \mathbb{R}$ is a variable that is explicitly kept track of, with $v_i \approx \mathbb{E}_k[g_k(W)]_i = \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i})}$ (with exact equality in the limit as $t \to \infty$). Clearly $v_i$ in stochastic composition optimization has a similar role as $u_i$ has in our formulation for $f$ in (5).
If $i, k$ are sampled with $k \neq y_i$ in stochastic composition optimization then the updates are of the form (Wang et al., 2016)
$$w_{y_i} = w_{y_i} + \eta_t N K\,\frac{e^{x_i^\top (z_k - z_{y_i})}}{1 + v_i}\, x_i, \qquad w_k = w_k - \eta_t N K\,\frac{e^{x_i^\top (z_k - z_{y_i})}}{1 + v_i}\, x_i,$$
where $z_k$ is a smoothed value of $w_k$. These updates have the same numerical instability issues as vanilla SGD on $f$ in (5): it is possible that $\frac{e^{x_i^\top (z_k - z_{y_i})}}{1 + v_i} \gg 1$, where ideally we should have $0 \leq \frac{e^{x_i^\top (z_k - z_{y_i})}}{1 + v_i} \leq 1$.
C U-MAX PSEUDOCODE
Algorithm 1: U-max
Input: Data $\mathcal{D} = \{(y_i, x_i) : y_i \in \{1, ..., K\},\ x_i \in \mathbb{R}^D\}_{i=1}^N$, number of classes $K$, number of datapoints $N$, learning rate $\eta_t$, inverse class sampling probability $\beta_k = N/\big(n_k + (N - n_k)/(K-1)\big)$, threshold parameter $\delta > 0$, bound $B_W$ on $W$ such that $\|W\|_2 \leq B_W$, and bound $B_u$ on $u$ such that $u_i \leq B_u$ for $i = 1, ..., N$.
Output: $W$
Initialize: $w_k \leftarrow 0$ for $k = 1, ..., K$; $u_i \leftarrow \log(K)$ for $i = 1, ..., N$.
Run SGD: for $t = 1$ to $T$ do
  Sample indices: $i \sim \mathrm{unif}(\{1, ..., N\})$, $k \sim \mathrm{unif}(\{1, ..., K\} - \{y_i\})$
  Increase $u_i$: if $u_i < \log(1 + e^{x_i^\top (w_k - w_{y_i})}) - \delta$ then $u_i \leftarrow \log(1 + e^{x_i^\top (w_k - w_{y_i})})$
  SGD step:
    $w_k \leftarrow w_k - \eta_t\big[N(K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i} x_i + \mu\beta_k w_k\big]$
    $w_{y_i} \leftarrow w_{y_i} - \eta_t\big[-N(K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i} x_i + \mu\beta_{y_i} w_{y_i}\big]$
    $u_i \leftarrow u_i - \eta_t\big[N(1 - e^{-u_i} - (K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i})\big]$
  Projection:
    $w_k \leftarrow w_k \cdot \min\{1, B_W/\|w_k\|_2\}$
    $w_{y_i} \leftarrow w_{y_i} \cdot \min\{1, B_W/\|w_{y_i}\|_2\}$
    $u_i \leftarrow \max\{0, \min\{B_u, u_i\}\}$
end for
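A direct numpy translation of Algorithm 1 is given below. This is our own sketch, not the authors' released code: the $\beta_k$ formula follows the reconstruction used in (6), labels are assumed to be 0-indexed, and hyperparameters such as T and the seed are purely illustrative.

```python
import numpy as np

def u_max(X, y, K, eta, delta, B_W, B_u, mu=0.0, T=100_000, seed=0):
    rng = np.random.default_rng(seed)
    N, D = X.shape
    n_cls = np.bincount(y, minlength=K)
    beta = N / (n_cls + (N - n_cls) / (K - 1))      # inverse sampling probabilities
    W = np.zeros((D, K))
    u = np.full(N, np.log(K))
    for t in range(T):
        i = rng.integers(N)
        k = rng.integers(K - 1)
        k += (k >= y[i])                             # uniform over {0..K-1} \ {y_i}
        margin = X[i] @ (W[:, k] - W[:, y[i]])
        if u[i] < np.logaddexp(0.0, margin) - delta:
            u[i] = np.logaddexp(0.0, margin)         # the "increase u_i" step
        g = N * (K - 1) * np.exp(margin - u[i])
        W[:, k] -= eta * (g * X[i] + mu * beta[k] * W[:, k])
        W[:, y[i]] -= eta * (-g * X[i] + mu * beta[y[i]] * W[:, y[i]])
        u[i] -= eta * N * (1.0 - np.exp(-u[i]) - (K - 1) * np.exp(margin - u[i]))
        for c in (k, y[i]):                          # projection onto the feasible set
            norm = np.linalg.norm(W[:, c])
            if norm > B_W:
                W[:, c] *= B_W / norm
        u[i] = np.clip(u[i], 0.0, B_u)
    return W
```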
D PROOF OF CONVERGENCE OF U-MAX METHOD
In this section we will prove the claim made in Theorem 1, that U-max converges to the softmax optimum. Before proving the theorem, we will need a lemma.
Lemma 5. For any $\delta > 0$, if $u_i \leq \log(1 + e^{x_i^\top (w_k - w_{y_i})}) - \delta$ then setting $u_i = \log(1 + e^{x_i^\top (w_k - w_{y_i})})$ decreases $f(u, W)$ by at least $\delta^2/2$.
Proof. As in Lemma 3, let $\theta = (u^\top, w_1^\top, ..., w_K^\top) \in \mathbb{R}^{N + KD}$. Then setting $u_i = \log(1 + e^{x_i^\top (w_k - w_{y_i})})$ is equivalent to setting $\theta = \theta + \Delta e_i$, where $e_i$ is the $i$-th canonical basis vector and $\Delta = \log(1 + e^{x_i^\top (w_k - w_{y_i})}) - u_i \geq \delta$. By a second-order Taylor series expansion,
$$f(\theta) - f(\theta + \Delta e_i) \geq -\nabla f(\theta + \Delta e_i)^\top e_i\,\Delta + \frac{\Delta^2}{2}\, e_i^\top \nabla^2 f(\theta + \lambda\Delta e_i)\, e_i \qquad (10)$$
for some $\lambda \in [0, 1]$. Since the optimal value of $u_i$ for a given value of $W$ is $u_i^*(W) = \log(1 + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i})}) \geq \log(1 + e^{x_i^\top (w_k - w_{y_i})})$, we must have $\nabla f(\theta + \Delta e_i)^\top e_i \leq 0$. From Lemma 3 we also know that
$$\begin{aligned}
e_i^\top \nabla^2 f(\theta + \lambda\Delta e_i)\, e_i &= \exp(-(u_i + \lambda\Delta)) + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i}) - (u_i + \lambda\Delta)} \\
&= \exp(-\lambda\Delta)\, e^{-u_i}\Big(1 + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i})}\Big) \\
&= \exp(-\lambda\Delta)\,\exp\big(-(\log(1 + e^{x_i^\top (w_k - w_{y_i})}) - \Delta)\big)\Big(1 + \sum_{k \neq y_i} e^{x_i^\top (w_k - w_{y_i})}\Big) \\
&\geq \exp(\Delta - \lambda\Delta) \geq \exp(\Delta - \Delta) = 1.
\end{aligned}$$
Putting in bounds for the gradient and Hessian terms in (10),
$$f(\theta) - f(\theta + \Delta e_i) \geq \frac{\Delta^2}{2} \geq \frac{\delta^2}{2}.$$
Now we are in a position to prove Theorem 1.
Proof of Theorem 1. Let $\theta^{(t)} = (u^{(t)}, W^{(t)}) \in \Theta$ denote the value of the $t$-th iterate. Here $\Theta = \{\theta : \|W\|_2^2 \leq B_W^2,\ u_i \leq B_u\}$ is a convex set containing the optimum of $f(\theta)$.
Let $\pi_i^{(\delta)}(\theta)$ denote the operation of setting $u_i = \log(1 + e^{x_i^\top (w_k - w_{y_i})})$ if $u_i \leq \log(1 + e^{x_i^\top (w_k - w_{y_i})}) - \delta$. If indices $i, k$ are sampled for the stochastic gradient and $u_i \leq \log(1 + e^{x_i^\top (w_k - w_{y_i})}) - \delta$, then the value of $f$ at the $(t+1)$-st iterate is bounded as
$$\begin{aligned}
f(\theta^{(t+1)}) &= f\big(\pi_i(\theta^{(t)}) - \eta_t\nabla f_{ik}(\pi_i(\theta^{(t)}))\big) \\
&\leq f(\pi_i(\theta^{(t)})) + \max_{\theta \in \Theta}\|\eta_t\nabla f_{ik}(\pi_i(\theta))\|_2\,\max_{\theta \in \Theta}\|\nabla f(\theta)\|_2 \\
&\leq f(\pi_i(\theta^{(t)})) + \eta_t B_f^2 \\
&\leq f(\theta^{(t)}) - \delta^2/2 + \eta_t B_f^2 \\
&\leq f\big(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)})\big) - \delta^2/2 + 2\eta_t B_f^2 \\
&\leq f\big(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)})\big),
\end{aligned}$$
since $\eta_t \leq \delta^2/(4B_f^2)$ by assumption. Alternatively, if $u_i \geq \log(1 + e^{x_i^\top (w_k - w_{y_i})}) - \delta$ then
$$f(\theta^{(t+1)}) = f\big(\pi_i(\theta^{(t)}) - \eta_t\nabla f_{ik}(\pi_i(\theta^{(t)}))\big) = f\big(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)})\big).$$
Either way, $f(\theta^{(t+1)}) \leq f(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)}))$. Taking expectations with respect to $i, k$,
$$\mathbb{E}_{ik}[f(\theta^{(t+1)})] \leq \mathbb{E}_{ik}[f(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)}))].$$
Finally, let $P$ denote the projection of $\theta$ onto $\Theta$. Since $\Theta$ is a convex set containing the optimum, we have $f(P(\theta)) \leq f(\theta)$ for any $\theta$, and so
$$\mathbb{E}_{ik}[f(P(\theta^{(t+1)}))] \leq \mathbb{E}_{ik}[f(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)}))],$$
which shows that the rate of convergence in expectation of U-max is at least as fast as that of standard SGD.
E PROOF OF GENERAL IMPLICIT SGD GRADIENT BOUND
Proof of Proposition 1. Let $f(\theta, \xi)$ be $m$-strongly convex for all $\xi$. The vanilla SGD step size is $\eta_t\|\nabla f(\theta^{(t)}, \xi_t)\|_2$, where $\eta_t$ is the learning rate for the $t$-th iteration. The Implicit SGD step size is $\eta_t\|\nabla f(\theta^{(t+1)}, \xi_t)\|_2$, where $\theta^{(t+1)}$ satisfies $\theta^{(t+1)} = \theta^{(t)} - \eta_t\nabla f(\theta^{(t+1)}, \xi_t)$. Rearranging, $\nabla f(\theta^{(t+1)}, \xi_t) = (\theta^{(t)} - \theta^{(t+1)})/\eta_t$, and so it must be the case that $\nabla f(\theta^{(t+1)}, \xi_t)^\top(\theta^{(t)} - \theta^{(t+1)}) = \|\nabla f(\theta^{(t+1)}, \xi_t)\|_2\|\theta^{(t)} - \theta^{(t+1)}\|_2$. Our desired result follows:
$$\begin{aligned}
\|\nabla f(\theta^{(t)}, \xi_t)\|_2 &\geq \frac{\nabla f(\theta^{(t)}, \xi_t)^\top(\theta^{(t)} - \theta^{(t+1)})}{\|\theta^{(t)} - \theta^{(t+1)}\|_2} \\
&\geq \frac{\nabla f(\theta^{(t+1)}, \xi_t)^\top(\theta^{(t)} - \theta^{(t+1)}) + m\|\theta^{(t)} - \theta^{(t+1)}\|_2^2}{\|\theta^{(t)} - \theta^{(t+1)}\|_2} \\
&= \frac{\|\nabla f(\theta^{(t+1)}, \xi_t)\|_2\|\theta^{(t)} - \theta^{(t+1)}\|_2 + m\|\theta^{(t)} - \theta^{(t+1)}\|_2^2}{\|\theta^{(t)} - \theta^{(t+1)}\|_2} = \|\nabla f(\theta^{(t+1)}, \xi_t)\|_2 + m\|\theta^{(t)} - \theta^{(t+1)}\|_2,
\end{aligned}$$
where the first inequality is by Cauchy–Schwarz and the second inequality by strong convexity.
F UPDATE EQUATIONS FOR IMPLICIT SGD
In this section we will derive the updates for Implicit SGD. We will first consider the simplest case where only one datapoint (xi, yi) and a single class is sampled in each iteration with no regularizer. Then we will derive the more complicated update for when there are multiple datapoints and sampled classes with a regularizer.
F.1 SINGLE DATAPOINT, SINGLE CLASS, NO REGULARIZER
Equation (6) for the stochastic gradient for a single datapoint and single class with $\mu = 0$ is
$$f_{ik}(u, W) = N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i}\big).$$
The Implicit SGD update corresponds to finding the variables optimizing
$$\min_{u, W}\Big\{2\eta f_{ik}(u, W) + \|u - \tilde u\|_2^2 + \|W - \tilde W\|_2^2\Big\},$$
where $\eta$ is the learning rate and the tilde refers to the value of the old iterate (Toulis et al., 2016, Eq. 6). Since $f_{ik}$ is only a function of $u_i, w_k, w_{y_i}$, the optimization reduces to
$$\min_{u_i, w_k, w_{y_i}}\Big\{2\eta f_{ik}(u_i, w_k, w_{y_i}) + (u_i - \tilde u_i)^2 + \|w_{y_i} - \tilde w_{y_i}\|_2^2 + \|w_k - \tilde w_k\|_2^2\Big\} = \min_{u_i, w_k, w_{y_i}}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top (w_k - w_{y_i}) - u_i}\big) + (u_i - \tilde u_i)^2 + \|w_{y_i} - \tilde w_{y_i}\|_2^2 + \|w_k - \tilde w_k\|_2^2\Big\}.$$
The optimal values of $w_k, w_{y_i}$ must deviate from the old values $\tilde w_k, \tilde w_{y_i}$ in the direction of $x_i$. Furthermore, we can observe that the deviation of $w_k$ must be exactly opposite that of $w_{y_i}$, that is:
$$w_{y_i} = \tilde w_{y_i} + a\,\frac{x_i}{2\|x_i\|_2^2}, \qquad w_k = \tilde w_k - a\,\frac{x_i}{2\|x_i\|_2^2} \qquad (11)$$
for some $a \geq 0$. The optimization problem reduces to
$$\min_{u_i,\, a \geq 0}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})}e^{-a - u_i}\big) + (u_i - \tilde u_i)^2 + a^2\frac{1}{2\|x_i\|_2^2}\Big\}. \qquad (12)$$
We’ll approach this optimization problem by first solving for a as a function of ui and then optimize over ui. Once the optimal value of ui has been found, we can calculate the corresponding optimal value of a. Finally, substituting a into (11) will give us our updated value of W .
Solving for a
We solve for $a$ by setting its derivative in (12) equal to zero:
$$0 = \partial_a\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})}e^{-a - u_i}\big) + (u_i - \tilde u_i)^2 + a^2\frac{1}{2\|x_i\|_2^2}\Big\} = -2\eta N(K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - u_i}e^{-a} + a\frac{1}{\|x_i\|_2^2} \iff a e^a = 2\eta N(K-1)\|x_i\|_2^2\, e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - u_i}. \qquad (13)$$
The solution for $a$ can be written in terms of the principal branch of the Lambert W function $P$,
$$a(u_i) = P\big(2\eta N(K-1)\|x_i\|_2^2\, e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - u_i}\big) = P\big(e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - u_i + \log(2\eta N(K-1)\|x_i\|_2^2)}\big). \qquad (14)$$
Substituting the solution $a(u_i)$ into (12), we now only need to minimize over $u_i$:
$$\min_{u_i}\Big\{2\eta N u_i + 2\eta N e^{-u_i} + 2\eta N(K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})}e^{-a(u_i) - u_i} + (u_i - \tilde u_i)^2 + a(u_i)^2\frac{1}{2\|x_i\|_2^2}\Big\} = \min_{u_i}\Big\{2\eta N u_i + 2\eta N e^{-u_i} + a(u_i)\|x_i\|_2^{-2} + (u_i - \tilde u_i)^2 + a(u_i)^2\frac{1}{2\|x_i\|_2^2}\Big\} \qquad (15)$$
where we used the fact that $e^{-P(z)} = P(z)/z$. The derivative with respect to $u_i$ in (15) is
$$\begin{aligned}
&\partial_{u_i}\Big\{2\eta N u_i + 2\eta N e^{-u_i} + a(u_i)\|x_i\|_2^{-2} + (u_i - \tilde u_i)^2 + a(u_i)^2\frac{1}{2\|x_i\|_2^2}\Big\} \\
&= 2\eta N - 2\eta N e^{-u_i} + \partial_{u_i}a(u_i)\|x_i\|_2^{-2} + 2(u_i - \tilde u_i) + 2a(u_i)\,\partial_{u_i}a(u_i)\frac{1}{2\|x_i\|_2^2} \\
&= 2\eta N - 2\eta N e^{-u_i} - \frac{a(u_i)}{1 + a(u_i)}\|x_i\|_2^{-2} + 2(u_i - \tilde u_i) - \frac{a(u_i)^2}{(1 + a(u_i))\|x_i\|_2^2} \qquad (16)
\end{aligned}$$
where to calculate $\partial_{u_i}a(u_i)$ we used the fact that $\partial_z P(z) = \frac{P(z)}{z(1 + P(z))}$, and so
$$\partial_{u_i}a(u_i) = -\frac{a(u_i)}{e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - u_i + \log(2\eta N(K-1)\|x_i\|_2^2)}\big(1 + a(u_i)\big)}\; e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - u_i + \log(2\eta N(K-1)\|x_i\|_2^2)} = -\frac{a(u_i)}{1 + a(u_i)}.$$
Bisection method for ui
We can solve for ui using the bisection method. Below we show how to calculate the initial lower and upper bounds of the bisection interval and prove that the size of the interval is bounded (which ensures fast convergence).
Start by calculating the derivative in (16) at $u_i = \tilde u_i$. If the derivative is negative then the optimal $u_i$ is lower bounded by $\tilde u_i$. An upper bound is provided by
$$\begin{aligned}
u_i &= \operatorname{argmin}_{u_i}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})}e^{-a(u_i) - u_i}\big) + (u_i - \tilde u_i)^2 + \frac{a(u_i)^2}{2\|x_i\|_2^2}\Big\} \\
&\leq \operatorname{argmin}_{u_i}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})}e^{-u_i}\big) + (u_i - \tilde u_i)^2\Big\} \\
&\leq \operatorname{argmin}_{u_i}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})}e^{-u_i}\big)\Big\} \\
&= \log\big(1 + (K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})}\big).
\end{aligned}$$
In the first inequality we set $a(u_i) = 0$, since by the envelope theorem the gradient with respect to $u_i$ is monotonically increasing in $a$. In the second inequality we used the assumption that $u_i$ is lower bounded by $\tilde u_i$. Thus if the derivative in (16) is negative at $u_i = \tilde u_i$ then $\tilde u_i \leq u_i \leq \log(1 + (K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})})$. If $(K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})} \leq 1$ then the size of the interval must be less than $\log(2)$, since $\tilde u_i \geq 0$. Otherwise the gap must be at most $\log(2(K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})}) - \tilde u_i = \log(2(K-1)) + x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i$. Either way, the gap is upper bounded by $\log(2(K-1)) + |x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i|$. Now let us consider if the derivative in (16) is positive at $u_i = \tilde u_i$. Then $u_i$ is upper bounded by $\tilde u_i$. Denoting $a'$ as the optimal value of $a$, we can lower bound $u_i$ using (12):
$$\begin{aligned}
u_i &= \operatorname{argmin}_{u_i}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})}e^{-a' - u_i}\big) + (u_i - \tilde u_i)^2\Big\} \\
&\geq \operatorname{argmin}_{u_i}\Big\{u_i + e^{-u_i} + (K-1)e^{x_i^\top (\tilde w_k - \tilde w_{y_i})}e^{-a' - u_i}\Big\} \\
&= \log\big(1 + (K-1)\exp(x_i^\top (\tilde w_k - \tilde w_{y_i}) - a')\big) \\
&\geq \log(K-1) + x_i^\top (\tilde w_k - \tilde w_{y_i}) - a' \qquad (17)
\end{aligned}$$
where the first inequality comes from dropping the $(u_i - \tilde u_i)^2$ term due to the assumption that $u_i < \tilde u_i$. Recall (13),
$$a' e^{a'} = 2\eta N(K-1)\|x_i\|_2^2\, e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - u_i}.$$
The solution for $a'$ is strictly monotonically increasing as a function of the right side of the equation. Thus replacing the right side with an upper bound on its value results in an upper bound on $a'$. Substituting the bound for $u_i$,
$$a' \leq \min\{a : a e^a = 2\eta N(K-1)\|x_i\|_2^2\, e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - (\log(K-1) + x_i^\top (\tilde w_k - \tilde w_{y_i}) - a)}\} = \min\{a : a = 2\eta N\|x_i\|_2^2\} = 2\eta N\|x_i\|_2^2. \qquad (18)$$
Substituting this bound for $a'$ into (17) yields
$$u_i \geq \log(K-1) + x_i^\top (\tilde w_k - \tilde w_{y_i}) - 2\eta N\|x_i\|_2^2.$$
Thus if the derivative in (16) is positive at $u_i = \tilde u_i$ then $\log(K-1) + x_i^\top (\tilde w_k - \tilde w_{y_i}) - 2\eta N\|x_i\|_2^2 \leq u_i \leq \tilde u_i$. The gap between the upper and lower bound is $\tilde u_i - x_i^\top (\tilde w_k - \tilde w_{y_i}) + 2\eta N\|x_i\|_2^2 - \log(K-1)$.
In summary, for both cases of the sign of the derivative in (16) at $u_i = \tilde u_i$ we are able to calculate a lower and upper bound on the optimal value of $u_i$ such that the gap between the bounds is at most $|\tilde u_i - x_i^\top (\tilde w_k - \tilde w_{y_i})| + 2\eta N\|x_i\|_2^2 + \log(K-1)$. This allows us to perform the bisection method, where for accuracy $\epsilon > 0$ we require only $\log_2(\epsilon^{-1}) + \log_2(|\tilde u_i - x_i^\top (\tilde w_k - \tilde w_{y_i})| + 2\eta N\|x_i\|_2^2 + \log(K-1))$ function evaluations.
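For concreteness, here is a short sketch of the resulting $n = m = 1$ update: $a(u_i)$ comes from the Lambert W function (14), $u_i$ is found by a scalar root-find on the derivative (16), and $W$ is then updated through (11). This is our own illustration of the derivation (function names, the bracket-widening constant, and the use of scipy's bisect are our choices, not the authors' implementation).

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import bisect

def implicit_update(w_k, w_yi, u_tilde, x_i, eta, N, K):
    margin = x_i @ (w_k - w_yi)            # x_i^T (w_tilde_k - w_tilde_{y_i})
    sq = x_i @ x_i

    def a_of(u):                           # equation (14)
        return np.real(lambertw(np.exp(margin - u + np.log(2 * eta * N * (K - 1) * sq))))

    def dphi(u):                           # derivative (16) of the reduced problem
        a = a_of(u)
        return (2 * eta * N - 2 * eta * N * np.exp(-u) - a / ((1 + a) * sq)
                + 2 * (u - u_tilde) - a ** 2 / ((1 + a) * sq))

    # bracket the root using the bounds derived above (slightly widened)
    lo = min(u_tilde, np.log(K - 1) + margin - 2 * eta * N * sq) - 1e-6
    hi = max(u_tilde, np.logaddexp(0.0, np.log(K - 1) + margin)) + 1e-6
    u_new = bisect(dphi, lo, hi)
    a = a_of(u_new)
    step = a * x_i / (2 * sq)              # equation (11)
    return w_k - step, w_yi + step, u_new
```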
F.2 BOUND ON STEP SIZE
Here we will prove that the step size magnitude of Implicit SGD with a single datapoint and sampled class with respect to $w$ is bounded as $O(x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i)$. We will do so by considering the two cases $u_i' \geq \tilde u_i$ and $u_i' < \tilde u_i$ separately, where $u_i'$ denotes the optimal value of $u_i$ in the Implicit SGD update and $\tilde u_i$ is its value at the previous iterate.
Case: $u_i' \geq \tilde u_i$. Let $a'$ denote the optimal value of $a$ in the Implicit SGD update. From (14),
$$a' = a(u_i') = P\big(e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - u_i' + \log(2\eta N(K-1)\|x_i\|_2^2)}\big) \leq P\big(e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i + \log(2\eta N(K-1)\|x_i\|_2^2)}\big).$$
Now using the fact that $P(z) = O(\log(z))$,
$$a' = O\big(x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i + \log(2\eta N(K-1)\|x_i\|_2^2)\big) = O\big(x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i\big).$$
Case: $u_i' < \tilde u_i$. If $u_i' < \tilde u_i$ then we can upper bound $a'$ from (18) as
$$a' \leq 2\eta N\|x_i\|_2^2.$$
Combining cases. Putting together the two cases,
$$a' = O\big(\max\{x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i,\ 2\eta N\|x_i\|_2^2\}\big) = O\big(x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i\big).$$
The actual step size in $w$ is $\pm a\,\frac{x_i}{2\|x_i\|_2^2}$. Since $a$ is $O(x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i)$, the step size magnitude is also $O(x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i)$.
F.3 MULTIPLE DATAPOINTS, MULTIPLE CLASSES
The Implicit SGD update when there are multiple datapoints and multiple classes, with a regularizer, is similar to the single datapoint, single class, no regularizer case described above. However, there are a few significant differences. Firstly, we will require some pre-computation to find a low-dimensional representation of the $x$ values in each mini-batch. Secondly, we will integrate out $u_i$ for each datapoint (not $w_k$). And thirdly, since the dimensionality of the simplified optimization problem is large, we will require first-order or quasi-Newton methods to find the optimal solution.
F.3.1 DEFINING THE MINI-BATCH
The first step is to define our mini-batches of size $n$. We will do this by partitioning the datapoint indices into sets $S_1, ..., S_J$ with $S_j = \{j_\ell : \ell = 1, ..., n\}$ for $j = 1, ..., \lfloor N/n \rfloor$, $S_J = \{J_\ell : \ell = 1, ..., N \bmod n\}$, $S_i \cap S_j = \emptyset$ and $\cup_{j=1}^J S_j = \{1, ..., N\}$.
Next we define the set of classes $C_j$ which can be sampled for the $j$-th mini-batch. The set $C_j$ is defined to be all sets of $m$ distinct classes that are not equal to any of the labels $y$ for points in the mini-batch, that is, $C_j = \{(k_1, ..., k_m) : k_i \in \{1, ..., K\},\ k_i \neq k_\ell\ \forall \ell \in \{1, ..., m\} - \{i\},\ k_i \neq y_\ell\ \forall \ell \in S_j\}$. Now we can write down our objective from (5) in terms of an expectation of functions corresponding to our mini-batches:
$$f(u, W) = \mathbb{E}[f_{j,C}(u, W)]$$
where $j$ is sampled with probability $p_j = |S_j|/N$, $C$ is sampled uniformly from $C_j$, and
$$f_{j,C}(u, W) = p_j^{-1}\sum_{i \in S_j}\Big(u_i + e^{-u_i} + \sum_{k \in S_j - \{i\}} e^{x_i^\top (w_k - w_{y_i}) - u_i} + \frac{K - n}{m}\sum_{k \in C} e^{x_i^\top (w_k - w_{y_i}) - u_i}\Big) + \frac{\mu}{2}\sum_{k \in C \cup S_j}\beta_k\|w_k\|_2^2.$$
The value of the regularizing constant $\beta_k$ is such that $\mathbb{E}[\mathbb{I}[k \in C \cup S_j]\beta_k] = 1$, which requires that
$$\beta_k^{-1} = 1 - \frac{1}{J}\sum_{j=1}^{J}\mathbb{I}[k \notin S_j]\Big(1 - \frac{m}{K - |S_j|}\Big).$$
F.3.2 SIMPLIFYING THE IMPLICIT SGD UPDATE EQUATION
The Implicit SGD update corresponds to solving
$$\min_{u, W}\Big\{2\eta f_{j,C}(u, W) + \|u - \tilde u\|_2^2 + \|W - \tilde W\|_2^2\Big\},$$
where $\eta$ is the learning rate and the tilde refers to the value of the old iterate (Toulis et al., 2016, Eq. 6). Since $f_{j,C}$ is only a function of $u_{S_j} = \{u_i : i \in S_j\}$ and $W_{j,C} = \{w_k : k \in S_j \cup C\}$, the optimization reduces to
$$\min_{u_{S_j}, W_{j,C}}\Big\{2\eta f_{j,C}(u_{S_j}, W_{j,C}) + \|u_{S_j} - \tilde u_{S_j}\|_2^2 + \|W_{j,C} - \tilde W_{j,C}\|_2^2\Big\}.$$
The next step is to analytically minimize over the $u_{S_j}$ terms. The optimization problem above decomposes into a sum of separate optimization problems in $u_i$ for $i \in S_j$,
$$\min_{u_i}\Big\{2\eta p_j^{-1}(u_i + e^{-u_i}d_i) + (u_i - \tilde u_i)^2\Big\}$$
where
$$d_i(W_{j,C}) = 1 + \sum_{k \in S_j - \{i\}} e^{x_i^\top (w_k - w_{y_i})} + \frac{K - n}{m}\sum_{k \in C} e^{x_i^\top (w_k - w_{y_i})}.$$
Setting the derivative with respect to $u_i$ equal to zero yields the solution
$$u_i(W_{j,C}) = \tilde u_i - \eta p_j^{-1} + P\big(\eta p_j^{-1} d_i(W_{j,C})\exp(\eta p_j^{-1} - \tilde u_i)\big),$$
where $P$ is the principal branch of the Lambert W function. Substituting this solution into our optimization problem and simplifying yields
$$\min_{W_{j,C}}\Big\{\sum_{i \in S_j}\big(1 + P\big(\eta p_j^{-1} d_i(W_{j,C})\exp(\eta p_j^{-1} - \tilde u_i)\big)\big)^2 + \|W_{j,C} - \tilde W_{j,C}\|_2^2 + \frac{\mu}{2}\sum_{k \in C \cup S_j}\beta_k\|w_k\|_2^2\Big\}, \qquad (19)$$
where we have used the identity $e^{-P(z)} = P(z)/z$. We can decompose (19) into two parts by splitting $W_{j,C} = W_{j,C}^{\|} + W_{j,C}^{\perp}$, its components parallel and perpendicular to the span of $\{x_i : i \in S_j\}$ respectively. Since the leading term in (19) only depends on $W_{j,C}^{\|}$, the two resulting sub-problems are
$$\min_{W_{j,C}^{\|}}\Big\{\sum_{i \in S_j}\big(1 + P\big(\eta p_j^{-1} d_i(W_{j,C}^{\|})\exp(\eta p_j^{-1} - \tilde u_i)\big)\big)^2 + \|W_{j,C}^{\|} - \tilde W_{j,C}^{\|}\|_2^2 + \frac{\mu}{2}\sum_{k \in C \cup S_j}\beta_k\|w_k^{\|}\|_2^2\Big\},$$
$$\min_{W_{j,C}^{\perp}}\Big\{\|W_{j,C}^{\perp} - \tilde W_{j,C}^{\perp}\|_2^2 + \frac{\mu}{2}\sum_{k \in C \cup S_j}\beta_k\|w_k^{\perp}\|_2^2\Big\}. \qquad (20)$$
Let us focus on the perpendicular component first. Simple calculus yields the optimal value $w_k^{\perp} = \frac{1}{1 + \mu\beta_k/2}\tilde w_k^{\perp}$ for $k \in S_j \cup C$.
Moving on to the parallel component, let the span of $\{x_i : i \in S_j\}$ have an orthonormal basis¹¹ $V_j = (v_{j1}, ..., v_{jn}) \in \mathbb{R}^{D \times n}$ with $x_i = V_j b_i$ for some $b_i \in \mathbb{R}^n$. With this basis we can write $w_k^{\|} = \tilde w_k^{\|} + V_j a_k$ for $a_k \in \mathbb{R}^n$, which reduces the parallel component optimization problem to¹²
$$\min_{A_{j,C}}\Big\{\sum_{i \in S_j}\big(1 + P(z_{ijC}(A_{j,C}))\big)^2 + \sum_{k \in S_j \cup C}\Big(1 + \frac{\mu\beta_k}{2}\Big)\|a_k\|_2^2 + \mu\beta_k\,\tilde w_k^\top V_j a_k\Big\}, \qquad (21)$$
where $A_{j,C} = \{a_k : k \in S_j \cup C\} \in \mathbb{R}^{(n+m) \times n}$ and
$$z_{ijC}(A_{j,C}) = \eta p_j^{-1}\exp(\eta p_j^{-1})\Big(\exp(-\tilde u_i) + \sum_{k \in S_j - \{i\}} e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i}e^{b_i^\top (a_k - a_{y_i})} + \frac{K - n}{m}\sum_{k \in C} e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i}e^{b_i^\top (a_k - a_{y_i})}\Big).$$
¹¹We have assumed here that $\dim(\mathrm{span}(\{x_i : i \in S_j\})) = n$, which will most often be the case. If the dimension of the span is lower than $n$ then let $V_j$ be of dimension $D \times \dim(\mathrm{span}(\{x_i : i \in S_j\}))$.
¹²Note that we have used $\tilde w_k$ instead of $\tilde w_k^{\|}$ in writing the parallel component optimization problem. This does not make a difference as $\tilde w_k$ always appears in an inner product with a vector in the span of $\{x_i : i \in S_j\}$.
The $e^{b_i^\top (a_k - a_{y_i})}$ factors come from
$$x_i^\top w_k = x_i^\top(\tilde w_k^{\|} + V_j a_k) = x_i^\top \tilde w_k + (V_j b_i)^\top V_j a_k = x_i^\top \tilde w_k + b_i^\top V_j^\top V_j a_k = x_i^\top \tilde w_k + b_i^\top a_k,$$
since $V_j$ is an orthonormal basis.
F.3.3 OPTIMIZING THE IMPLICIT SGD UPDATE EQUATION
To optimize (21) we need to be able to take the derivative:
$$\begin{aligned}
&\nabla_{a_\ell}\Big(\sum_{i \in S_j}\big(1 + P(z_{ijC}(A_{j,C}))\big)^2 + \sum_{k \in S_j \cup C}\Big(1 + \frac{\mu\beta_k}{2}\Big)\|a_k\|_2^2 + \mu\beta_k\,\tilde w_k^\top V_j a_k\Big) \\
&= \sum_{i \in S_j} 2\big(1 + P(z_{ijC}(A_{j,C}))\big)\,\partial_z P(z_{ijC}(A_{j,C}))\,\nabla_{a_\ell} z_{ijC}(A_{j,C}) + (2 + \mu\beta_\ell)a_\ell + \mu\beta_\ell\, V_j^\top\tilde w_\ell \\
&= \sum_{i \in S_j} 2\big(1 + P(z_{ijC}(A_{j,C}))\big)\frac{P(z_{ijC}(A_{j,C}))}{z_{ijC}(A_{j,C})\big(1 + P(z_{ijC}(A_{j,C}))\big)}\,\nabla_{a_\ell} z_{ijC}(A_{j,C}) + (2 + \mu\beta_\ell)a_\ell + \mu\beta_\ell\, V_j^\top\tilde w_\ell \\
&= \sum_{i \in S_j} 2\frac{P(z_{ijC}(A_{j,C}))}{z_{ijC}(A_{j,C})}\,\nabla_{a_\ell} z_{ijC}(A_{j,C}) + (2 + \mu\beta_\ell)a_\ell + \mu\beta_\ell\, V_j^\top\tilde w_\ell \\
&= \sum_{i \in S_j} 2\, e^{-P(z_{ijC}(A_{j,C}))}\,\nabla_{a_\ell} z_{ijC}(A_{j,C}) + (2 + \mu\beta_\ell)a_\ell + \mu\beta_\ell\, V_j^\top\tilde w_\ell,
\end{aligned}$$
where we used $\partial_z P(z) = \frac{P(z)}{z(1 + P(z))}$ and $e^{-P(z)} = P(z)/z$. To complete the calculation of the derivative we need
$$\begin{aligned}
\nabla_{a_\ell} z_{ijC}(A_{j,C}) = \eta p_j^{-1}\exp(\eta p_j^{-1})\, b_i\cdot\Big(&\mathbb{I}[\ell \in S_j - \{i\}]\, e^{x_i^\top (\tilde w_\ell - \tilde w_{y_i}) - \tilde u_i}e^{b_i^\top (a_\ell - a_{y_i})} + \mathbb{I}[\ell \in C]\,\frac{K - n}{m}\, e^{x_i^\top (\tilde w_\ell - \tilde w_{y_i}) - \tilde u_i}e^{b_i^\top (a_\ell - a_{y_i})} \\
&- \mathbb{I}[\ell = y_i]\Big(\sum_{k \in S_j - \{i\}} e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i}e^{b_i^\top (a_k - a_{y_i})} + \frac{K - n}{m}\sum_{k \in C} e^{x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i}e^{b_i^\top (a_k - a_{y_i})}\Big)\Big).
\end{aligned}$$
In order to calculate the full derivative with respect to $A_{j,C}$ we need to calculate $b_i^\top a_k$ for all $i \in S_j$ and $k \in S_j \cup C$. This is a total of $n(n+m)$ inner products of $n$-dimensional vectors, costing $O(n^2(n+m))$. To find the optimum of (21) we can use any optimization procedure that only uses gradients. Since (21) is strongly convex, standard first-order methods can solve it to accuracy $\epsilon$ in $O(\log(\epsilon^{-1}))$ iterations (Boyd & Vandenberghe, 2004, Sec. 9.3). Thus once we can calculate all of the terms in (21), we can solve it to accuracy $\epsilon$ in runtime $O(n^2(n+m)\log(\epsilon^{-1}))$.
Once we have solved for $A_{j,C}$, we can reconstruct the optimal solution for the parallel component of $w_k$ as $w_k^{\|} = \tilde w_k^{\|} + V_j a_k$. Recall that the solution for the perpendicular component is $w_k^{\perp} = \frac{1}{1 + \mu\beta_k/2}\tilde w_k^{\perp}$. Thus our optimal solution is $w_k = \tilde w_k^{\|} + V_j a_k + \frac{1}{1 + \mu\beta_k/2}\tilde w_k^{\perp}$.
If the features $x_i$ are sparse, then we would prefer to do a sparse update to $w$, saving computation time. We can achieve this by letting
$$w_k = \gamma_k \cdot r_k$$
where $\gamma_k$ is a scalar and $r_k$ a vector. Updating $w_k = \tilde w_k^{\|} + V_j a_k + \frac{1}{1 + \mu\beta_k/2}\tilde w_k^{\perp}$ is equivalent to
$$\gamma_k = \tilde\gamma_k\cdot\frac{1}{1 + \mu\beta_k/2}, \qquad r_k = \tilde r_k + \frac{\mu\beta_k}{2}\cdot\tilde r_k^{\|} + \tilde\gamma_k^{-1}(1 + \mu\beta_k/2)\cdot V_j a_k.$$
Since we only update $r_k$ along the span of $\{x_i : i \in S_j\}$, its update is sparse.
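A small helper makes the trick explicit. The sketch below is our own illustration (variable names are ours): the ridge shrinkage is absorbed into the scalar $\gamma_k$, so the vector $r_k$ is only modified along the span of the mini-batch features.

```python
import numpy as np

def lazy_update(gamma_k, r_k, r_k_parallel, V_j, a_k, mu, beta_k):
    """Apply w_k = gamma_k * r_k update; r_k_parallel is the component of r_k in span{x_i}."""
    shrink = 1.0 + 0.5 * mu * beta_k
    new_gamma = gamma_k / shrink
    new_r = r_k + 0.5 * mu * beta_k * r_k_parallel + (shrink / gamma_k) * (V_j @ a_k)
    return new_gamma, new_r          # new_gamma * new_r equals the dense update of w_k
```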
F.3.4 RUNTIME
There are two major tasks in calculating the terms in (21). The first is to calculate $x_i^\top \tilde w_k$ for $i \in S_j$ and $k \in S_j \cup C$. This is a total of $n(n+m)$ inner products of $D$-dimensional vectors, costing $O(n(n+m)D)$. The other task is to find the orthonormal basis $V_j$ of $\{x_i : i \in S_j\}$, which can be achieved using the Gram–Schmidt process in $O(n^2 D)$. We assume that $\{V_j : j = 1, ..., J\}$ is computed only once, as a pre-processing step when defining the mini-batches. It is exactly because calculating $\{V_j : j = 1, ..., J\}$ is expensive that we have fixed mini-batches that do not change during the optimization routine.
Adding the cost of calculating the $x_i^\top \tilde w_k$ inner products to the cost of optimizing (21) leads to the claim that we can solve the Implicit SGD update formula to accuracy $\epsilon$ in runtime $O(n(n+m)D + n^2(n+m)\log(\epsilon^{-1})) = O(n(n+m)(D + n\log(\epsilon^{-1})))$.
F.3.5 INITIALIZING THE IMPLICIT SGD OPTIMIZER
As was the case in Section F.1, it is important to initialize the optimization procedure at a point where the gradient is relatively small and can be computed without numerical issues. These numerical issues arise when an exponent $x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i + b_i^\top (a_k - a_{y_i}) \gg 0$. To ensure that this does not occur for our initial point, we can solve the following linear problem:¹³
$$\begin{aligned}
R = \min_{A_{j,C}}\ &\sum_{k \in C \cup S_j}\|a_k\|_1 \\
\text{s.t. } & x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i + b_i^\top (a_k - a_{y_i}) \leq 0 \quad \forall i \in S_j,\ k \in C \cup S_j. \qquad (22)
\end{aligned}$$
Note that if $k = y_i$ then the constraint $0 \geq x_i^\top (\tilde w_k - \tilde w_{y_i}) - \tilde u_i + b_i^\top (a_k - a_{y_i}) = -\tilde u_i$ is automatically fulfilled since $\tilde u_i \geq 0$. Also observe that setting $a_k = -V_j^\top \tilde w_k$ satisfies all of the constraints, and so
$$R \leq \sum_{k \in C \cup S_j}\|V_j^\top \tilde w_k\|_1 \leq (n+m)\max_{k \in C \cup S_j}\|V_j^\top \tilde w_k\|_1.$$
¹³Instead of bounding the constraints on the right with 0, we could also have used any small positive number, like 5.
We can use the solution to (22) to give us an upper bound on (21). Consider the optimal value $A_{j,C}^{(R)}$ of the linear program in (22), with the value of the minimum being $R$. Since $A_{j,C}^{(R)}$ satisfies the constraints in (22), we have $z_{ijC}(A_{j,C}^{(R)}) \leq K\eta p_j^{-1}\exp(\eta p_j^{-1})$. Since $P(z)$ is a monotonically increasing function that is non-negative for $z \geq 0$, we also have $(1 + P(z_{ijC}(A_{j,C}^{(R)})))^2 \leq (1 + P(K\eta p_j^{-1}\exp(\eta p_j^{-1})))^2$. Turning to the norms, we can use the fact that $\|a\|_2 \leq \|a\|_1$ for any vector $a$ to bound
$$\begin{aligned}
\sum_{k \in S_j \cup C}\Big(1 + \frac{\mu\beta_k}{2}\Big)\|a_k\|_2^2 + \mu\beta_k\,\tilde w_k^\top V_j a_k
&\leq \sum_{k \in S_j \cup C}\Big(1 + \frac{\mu\beta_k}{2}\Big)\|a_k\|_1^2 + \mu\beta_k\|\tilde w_k^\top V_j\|_1\|a_k\|_1 \\
&\leq \Big(1 + \mu\cdot\max_{k \in S_j \cup C}\{\beta_k\}/2\Big)\sum_{k \in S_j \cup C}\|a_k\|_1^2 + \mu\max_{k \in S_j \cup C}\{\beta_k\|\tilde w_k^\top V_j\|_1\}\sum_{k \in S_j \cup C}\|a_k\|_1 \\
&\leq \Big(1 + \mu\cdot\max_{k \in S_j \cup C}\{\beta_k\}/2\Big)R^2 + \mu\max_{k \in S_j \cup C}\{\beta_k\}\max_{k \in S_j \cup C}\{\|\tilde w_k^\top V_j\|_1\}\,R \\
&\leq \Big(1 + \mu\cdot\max_{k \in S_j \cup C}\{\beta_k\}/2\Big)\Big((n+m)\max_{k \in C \cup S_j}\|V_j^\top \tilde w_k\|_1\Big)^2 + \mu\max_{k \in S_j \cup C}\{\beta_k\}\max_{k \in S_j \cup C}\{\|\tilde w_k^\top V_j\|_1\}\Big((n+m)\max_{k \in C \cup S_j}\|V_j^\top \tilde w_k\|_1\Big) \\
&\leq \Big(1 + \mu\cdot\max_{k \in S_j \cup C}\{\beta_k\}\Big)(n+m)^2\max_{k \in C \cup S_j}\|V_j^\top \tilde w_k\|_1^2 \\
&\leq \Big(1 + \mu\cdot\max_{k \in S_j \cup C}\{\beta_k\}\Big)(n+m)^2\max_{k \in C \cup S_j}\|\tilde w_k\|_1^2.
\end{aligned}$$
Putting the bounds together, we have that the optimal value of (21) is upper bounded by its value at the solution to (22), which in turn is upper bounded by
$$n\big(1 + P(K\eta p_j^{-1}\exp(\eta p_j^{-1}))\big)^2 + \Big(1 + \mu\cdot\max_{k \in S_j \cup C}\{\beta_k\}\Big)(n+m)^2\max_{k \in C \cup S_j}\|\tilde w_k\|_1^2.$$
This bound guarantees that our initial iterate will be numerically stable.
G LEARNING RATE PREDICTION AND LOSS
Here we present the results of using different learning rates for each algorithm applied to the Eurlex dataset. In addition to the Implicit SGD, NCE, IS, OVE and U-max algorithms, we also provide results for NCE with n = 1, m = 1, denoted as NCE (1,1). NCE and NCE (1,1) have near identical performance.
1. What is the focus and contribution of the paper regarding minimizing softmax with many classes?
2. What are the strengths and weaknesses of the proposed approach compared to prior works like Raman et al. (2016)?
3. Do you have any questions or concerns about the formulation, updates, and convergence rates presented in the paper?
4. How does the reviewer assess the significance of the U-max trick and implicit SGD approach introduced by the authors?
5. Are there any limitations or potential improvements regarding the experimental comparisons and the choice of step size?
6. How could the presentation of the paper be improved, particularly in stating the algorithms explicitly in the main paper?
Review
The paper presents interesting algorithms for minimizing softmax with many classes. The objective function is a multi-class classification problem (using the softmax loss) with a linear model. The main idea is to rewrite the objective as a double-sum using the dual formulation and then apply SGD to solve it. At each iteration, SGD samples a subset of training samples and labels. The main contributions of this paper are: 1) proposing a U-max trick to improve the numerical stability and 2) proposing an implicit SGD approach. It seems the implicit SGD approach is better in the experimental comparisons.
I found the paper quite interesting, but meanwhile I have the following comments and questions:
- As pointed out by the authors, the idea of this formulation and doubly SGD is not new. (Raman et al., 2016) used a similar trick to derive the double-sum formulation and solved it by doubly SGD. The authors claim that the algorithm in (Raman et al.) has an O(NKD) cost for updating u at the end of each epoch. However, since each epoch requires at least O(NKD) time anyway (sometimes larger, as in Proposition 2), is another O(NKD) a significant bottleneck? Also, since the formulation is similar to (Raman et al., 2016), a comparison is needed.
- I'm confused by Propositions 1 and 2. In Appendix E.1, the formulation of the update is derived, but why do we need Newton to get log(1/epsilon) time complexity? I think most first-order methods instead of Newton will have linear convergence (log(1/epsilon) time)? Also, I guess we are assuming the objective is strongly convex?
- The step size is selected on one dataset and used for all others. This might lead to divergence of other algorithms, since the step size usually depends on the data. As we can see, OVE, NCE and IS diverge on Wiki-small, which may be fixed if the step size is chosen for each dataset (in practice we can choose it using subsamples of each dataset).
- All the comparisons are based on "epochs", but the competing algorithms are quite different and can have very different running times per epoch. For example, implicit SGD has another iterative solver for each update. Therefore, a timing comparison is needed in this paper to justify that implicit SGD is faster.
- The claim that "implicit SGD never overshoots the optimum" needs more support. Is it proved in some previous papers?
- The presentation can be improved. I think it will be helpful to state the algorithms explicitly in the main paper. |
p(y = `|x;W ) = e x>w`∑K
k=1 e x>wk
, (1)
where x ∈ RD is the covariate, wk ∈ RD is the vector of parameters for the k-th class, and W = [w1, w2, ..., wK ] ∈ RD×K is the parameter matrix. Given a dataset of N label-covariate pairs D = {(yi, xi)}Ni=1, the ridge-regularized maximum log-likelihood problem is given by
L(W ) = N∑ i=1 x>i wyi − log( K∑ k=1 ex > i wk)− µ 2 ‖W‖22, (2)
where ‖W‖2 denotes the Frobenius norm. This paper focusses on how to maximize (2) when N,K,D are all large. Having large N,K,D is increasingly common in modern applications such as natural language processing and recommendation systems, where N,K,D can each be on the order of millions or billions (Partalas et al., 2015; Chelba et al., 2013; Bhatia et al.).
A natural approach to maximizing L(W ) with large N,K,D is to use Stochastic Gradient Descent (SGD), sampling a mini-batch of datapoints each iteration. However if K,D are large then the O(KD) cost of calculating the normalizing sum ∑K k=1 e
x>i wk in the stochastic gradients can still be prohibitively expensive. Several approximations that avoid calculating the normalizing sum have been proposed to address this difficulty. These include tree-structured methods (Bengio et al., 2003; Daume III et al., 2016; Grave et al., 2016), sampling methods (Bengio & Senécal, 2008; Mnih & Teh, 2012; Joshi et al., 2017) and self-normalization (Andreas & Klein, 2015). Alternative models such as the spherical family of losses (de Brébisson & Vincent, 2015; Vincent et al., 2015) that do not require normalization have been proposed to sidestep the issue entirely (Martins & Astudillo, 2016). Krishnapuram et al. (2005) avoid calculating the sum using a maximization-majorization approach based on lower-bounding the eigenvalues of the Hessian matrix. All2 of these approximations are computationally tractable for largeN,K,D, but are unsatisfactory in that they are biased and do not converge to the optimal W ∗ = argmaxL(W ).
1Also known as the multinomial logit model. 2The method of Krishnapuram et al. (2005) does converge to the optimal MLE, but has O(ND) runtime
per iteration which is not feasible for large N,D.
Recently Raman et al. (2016) managed to recast (2) as a double-sum overN andK. This formulation is amenable to SGD that samples both a datapoint and class each iteration, reducing the per iteration cost to O(D). The problem is that vanilla SGD when applied to this formulation is unstable, in that the gradients suffer from high variance and are susceptible to computational overflow. Raman et al. (2016) deal with this instability by occasionally calculating the normalizing sum for all datapoints at a cost of O(NKD). Although this achieves stability, its high cost nullifies the benefit of the cheap O(D) per iteration cost.
The goal of this paper is to develop robust SGD algorithms for optimizing double-sum formulations of the softmax likelihood. We develop two such algorithms. The first is a new SGD method called U-max, which is guaranteed to have bounded gradients and converge to the optimal solution of (2) for all sufficiently small learning rates. The second is an implementation of Implicit SGD, a stochastic gradient method that is known to be more stable than vanilla SGD and yet has similar convergence properties (Toulis et al., 2016). We show that the Implicit SGD updates for the doublesum formulation can be efficiently computed and has a bounded step size, guaranteeing its stability.
We compare the performance of U-max and Implicit SGD to the (biased) state-of-the-art methods for maximizing the softmax likelihood which cost O(D) per iteration. Both U-max and Implicit SGD outperform all other methods. Implicit SGD has the best performance with an average log-loss 4.29 times lower than the previous state-of-the-art.
In summary, our contributions in this paper are that we:
1. Provide a simple derivation of the softmax double-sum formulation and identify why vanilla SGD is unstable when applied to this formulation (Section 2).
2. Propose the U-max algorithm to stabilize the SGD updates and prove its convergence (Section 3.1).
3. Derive an efficient Implicit SGD implementation, analyze its runtime and bound its step size (Section 3.2).
4. Conduct experiments showing that both U-max and Implicit SGD outperform the previous state-of-the-art, with Implicit SGD having the best performance (Section 4).
2 CONVEX DOUBLE-SUM FORMULATION
2.1 DERIVATION OF DOUBLE-SUM
In order to have an SGD method that samples both datapoints and classes each iteration, we need to represent (2) as a double-sum over datapoints and classes. We begin by rewriting (2) in a more convenient form,
L(W ) = N∑ i=1 − log(1 + ∑ k 6=yi ex > i (wk−wyi ))− µ 2 ‖W‖22. (3)
The key to converting (3) into its double-sum representation is to express the negative logarithm using its convex conjugate:
− log(a) = max v<0 {av − (− log(−v)− 1)} = max u {−u− exp(−u)a+ 1} (4)
where u = − log(−v) and the optimal value of u is u∗(a) = log(a). Applying (4) to each of the logarithmic terms in (3) yields
L(W ) = N∑ i=1 max ui∈R {−ui − e−ui(1 + ∑ k 6=yi ex > i (wk−wyi )) + 1} − µ 2 ‖W‖22
= −min u≥0 {f(u,W )}+N,
where
f(u,W ) = N∑ i=1 ∑ k 6=yi ui + e −ui K − 1 + ex > i (wk−wyi )−ui + µ 2 ‖W‖22 (5)
is our double-sum representation that we seek to minimize and the optimal solution for ui is u∗i (W ) = log(1 + ∑ k 6=yi e
x>i (wk−wyi )) ≥ 0. Clearly f is a jointly convex function in u and W . In Appendix A we prove that the optimal value of u and W is contained in a compact convex set and that f is strongly convex within this set. Thus performing projected-SGD on f is guaranteed to converge to a unique optimum with a convergence rate of O(1/T ) where T is the number of iterations (Lacoste-Julien et al., 2012).
2.2 INSTABILITY OF VANILLA SGD
The challenge in optimizing f using SGD is that it can have problematically large magnitude gradients. Observe that f = Eik[fik] where i ∼ unif({1, ..., N}), k ∼ unif({1, ...,K} − {yi}) and
fik(u,W ) = N ( ui + e −ui + (K − 1)ex > i (wk−wyi )−ui) ) + µ
2 (βyi‖wyi‖22 + βk‖wk‖22), (6)
where βj = Nnj+(N−nj)(K−1) is the inverse of the probability of class j being sampled either through i or k, and nj = |{i : yi = j, i = 1, ..., N}|. The corresponding stochastic gradient is:
∇wkfik(u,W ) = N(K − 1)ex > i (wk−wyi )−uixi + µβkwk
∇wyi fik(u,W ) = −N(K − 1)e x>i (wk−wyi )−uixi + µβyiwyi
∇wj /∈{k,yi}fik(u,W ) = 0
∇uifik(u,W ) = −N(K − 1)ex > i (wk−wyi )−ui +N(1− e−ui) (7)
If ui equals its optimal value u∗i (W ) = log(1+ ∑ k 6=yi e x>i (wk−wyi )) then ex > i (wk−wyi )−ui ≤ 1 and the magnitude of the N(K − 1) terms in the stochastic gradient are bounded by N(K − 1)‖xi‖2. However if ui x>i (wk −wyi), then ex > i (wk−wyi )−ui 1 and the magnitude of the gradients can become extremely large.
Extremely large gradients lead to two major problems: (a) the gradients may computationally overflow floating-point precision and cause the algorithm to crash, (b) they result in the stochastic gradient having high variance, which leads to slow convergence3. In Section 4 we show that these problems occur in practice and make vanilla SGD both an unreliable and inefficient method4.
The sampled softmax optimizers in the literature (Bengio & Senécal, 2008; Mnih & Teh, 2012; Joshi et al., 2017) do not have the issue of large magnitude gradients. Their gradients are bounded by N(K−1)‖xi‖2 due to their approximations to u∗i (W ) always being greater than x>i (wk−wyi). For example, in one-vs-each (Titsias, 2016), u∗i (W ) is approximated by log(1 + e
x>i (wk−wyi )) > x>i (wk−wyi). However, as they only approximate u∗i (W ) they cannot converge to the optimalW ∗. The goal of this paper is to design reliable and efficient SGD algorithms for optimizing the doublesum formulation f(u,W ) in (5). We propose two such methods: U-max (Section 3.1) and an implementation of Implicit SGD (Section 3.2). But before we introduce these methods we should establish that f is a good choice for the double-sum formulation.
2.3 CHOICE OF DOUBLE-SUM FORMULATION
The double-sum in (5) is different to that of Raman et al. (2016). Their formulation can be derived by applying the convex conjugate substitution to (2) instead of (3). The resulting equations are $L(W) = -\min_{\bar u}\left\{\frac{1}{N}\sum_{i=1}^N \frac{1}{K-1}\sum_{k \neq y_i} \bar f_{ik}(\bar u, W)\right\} + N$, where

$$\bar f_{ik}(\bar u, W) = N\left(\bar u_i - x_i^\top w_{y_i} + e^{x_i^\top w_{y_i} - \bar u_i} + (K-1)e^{x_i^\top w_k - \bar u_i}\right) + \frac{\mu}{2}\left(\beta_{y_i}\|w_{y_i}\|_2^2 + \beta_k\|w_k\|_2^2\right) \qquad (8)$$

and the optimal solution for $\bar u_i$ is $\bar u_i^*(W^*) = \log(\sum_{k=1}^K e^{x_i^\top w_k^*})$.

Although both double-sum formulations can be used as a basis for SGD, our formulation tends to have smaller-magnitude stochastic gradients and hence faster convergence. To see this, note that typically $x_i^\top w_{y_i} = \max_k\{x_i^\top w_k\}$, and so the $\bar u_i$, $x_i^\top w_{y_i}$ and $e^{x_i^\top w_{y_i} - \bar u_i}$ terms in (8) are of the greatest magnitude. Although at optimality these terms should roughly cancel, this will not be the case during the early stages of optimization, leading to stochastic gradients of large magnitude. In contrast, the function $f_{ik}$ in (6) only has $x_i^\top w_{y_i}$ appearing as a negative exponent, and so if $x_i^\top w_{y_i}$ is large then the magnitude of the stochastic gradients will be small. In Section 4 we present numerical results confirming that our double-sum formulation leads to faster convergence.

3 The convergence rate of SGD is inversely proportional to the second moment of its gradients (Lacoste-Julien et al., 2012).
4 The same problems arise if we approach optimizing (3) via stochastic composition optimization (Wang et al., 2016). As is shown in Appendix B, stochastic composition optimization yields near-identical expressions for the stochastic gradients in (7) and has the same stability issues.
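A toy numerical illustration of this argument (all values hypothetical, not taken from any experiment): early in training the true-class score $x_i^\top w_{y_i}$ is often already large while $\bar u_i$ and $u_i$ are still near their initial values, so the exponentials in (8) are huge while those in (6) stay small.

```python
import numpy as np

# Hypothetical early-training values: the true class already has the largest score,
# while the auxiliary variables are still at their initial values.
score_yi, score_k = 8.0, 1.0     # x_i^T w_{y_i} and x_i^T w_k
u_bar, u = 0.0, 0.0              # u_bar_i (formulation (8)) and u_i (formulation (6))

term_raman = np.exp(score_yi - u_bar)        # e^{x_i^T w_{y_i} - u_bar_i} in (8): ~2981
term_ours  = np.exp(score_k - score_yi - u)  # e^{x_i^T (w_k - w_{y_i}) - u_i} in (6): ~0.0009
print(term_raman, term_ours)
```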
3 STABLE SGD METHODS
3.1 U-MAX METHOD
As explained in Section 2.2, vanilla SGD has large gradients when $u_i \ll x_i^\top(w_k - w_{y_i})$. This can only occur when $u_i$ is less than its optimum value for the current $W$, since $u_i^*(W) = \log(1 + \sum_{j \neq y_i} e^{x_i^\top(w_j - w_{y_i})}) \geq x_i^\top(w_k - w_{y_i})$. A simple remedy is to set $u_i = \log(1 + e^{x_i^\top(w_k - w_{y_i})})$ whenever $u_i \ll x_i^\top(w_k - w_{y_i})$. Since $\log(1 + e^{x_i^\top(w_k - w_{y_i})}) > x_i^\top(w_k - w_{y_i})$, this guarantees that $u_i > x_i^\top(w_k - w_{y_i})$ and so the gradients are bounded. It also brings $u_i$ closer5 to its optimal value for the current $W$ and thereby decreases the objective $f(u,W)$.
This is exactly the mechanism behind the U-max algorithm; see Algorithm 1 in Appendix C for its pseudocode. U-max is the same as vanilla SGD except for two modifications: (a) $u_i$ is set equal to $\log(1 + e^{x_i^\top(w_k - w_{y_i})})$ whenever $u_i \leq \log(1 + e^{x_i^\top(w_k - w_{y_i})}) - \delta$ for some threshold $\delta > 0$; (b) $u_i$ is projected onto $[0, B_u]$, and $W$ onto $\{W : \|W\|_2 \leq B_W\}$, where $B_u$ and $B_W$ are set so that the optimal $u_i^* \in [0, B_u]$ and the optimal $W^*$ satisfies $\|W^*\|_2 \leq B_W$. See Appendix A for more details on how to set $B_u$ and $B_W$.
Theorem 1. Let $B_f = \max_{\|W\|_2^2 \leq B_W^2,\ 0 \leq u \leq B_u} \max_{ik} \|\nabla f_{ik}(u,W)\|_2$. Suppose the learning rate satisfies $\eta_t \leq \delta^2/(4B_f^2)$. Then U-max with threshold $\delta$ converges to the optimum of (2), and in expectation its rate is at least as fast as that of SGD with the same learning rate.
Proof. The proof is provided in Appendix D.
U-max directly resolves the problem of extremely large gradients. Modification (a) ensures that $\delta \geq x_i^\top(w_k - w_{y_i}) - u_i$ (otherwise $u_i$ would be increased to $\log(1 + e^{x_i^\top(w_k - w_{y_i})})$), and so the magnitude of the U-max gradients is bounded above by $N(K-1)e^{\delta}\|x_i\|_2$. In U-max there is a trade-off between the gradient magnitude and the learning rate that is controlled by $\delta$. For Theorem 1 to apply we require that the learning rate $\eta_t \leq \delta^2/(4B_f^2)$. A small $\delta$ yields small-magnitude gradients, which makes convergence fast, but necessitates a small $\eta_t$, which makes convergence slow.
3.2 IMPLICIT SGD
Another method that solves the large gradient problem is Implicit SGD6 (Bertsekas, 2011; Toulis et al., 2016). Implicit SGD uses the update equation
$$\theta^{(t+1)} = \theta^{(t)} - \eta_t \nabla f(\theta^{(t+1)}, \xi_t), \qquad (9)$$

where $\theta^{(t)}$ is the value of the $t$-th iterate, $f$ is the function we seek to minimize, and $\xi_t$ is a random variable controlling the stochastic gradient such that $\nabla f(\theta) = \mathbb{E}_{\xi_t}[\nabla f(\theta, \xi_t)]$. The update (9) differs from vanilla SGD in that $\theta^{(t+1)}$ appears on both the left and the right side of the equation, whereas in vanilla SGD it appears only on the left side. In our case $\theta = (u, W)$ and $\xi_t = (i_t, k_t)$ with $\nabla f(\theta^{(t+1)}, \xi_t) = \nabla f_{i_t,k_t}(u^{(t+1)}, W^{(t+1)})$. Although Implicit SGD has similar convergence rates to vanilla SGD, it has other properties that can make it preferable to vanilla SGD. It is known to be more robust to the learning rate (Toulis et al., 2016), which is important since a good value for the learning rate is never known a priori. Another property, which is of particular interest to our problem, is that it takes smaller step sizes.

5 Since $u_i < x_i^\top(w_k - w_{y_i}) < \log(1 + e^{x_i^\top(w_k - w_{y_i})}) < \log(1 + \sum_{j \neq y_i} e^{x_i^\top(w_j - w_{y_i})}) = u_i^*(W)$.
6 Also known as an "incremental proximal algorithm" (Bertsekas, 2011).
Proposition 1. Consider applying Implicit SGD to optimizing $f(\theta) = \mathbb{E}_\xi[f(\theta, \xi)]$ where $f(\theta, \xi)$ is $m$-strongly convex for all $\xi$. Then
$$\|\nabla f(\theta^{(t+1)}, \xi_t)\|_2 \leq \|\nabla f(\theta^{(t)}, \xi_t)\|_2 - m\|\theta^{(t+1)} - \theta^{(t)}\|_2,$$
and so the Implicit SGD step size is smaller than that of vanilla SGD.
Proof. The proof is provided in Appendix E.
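As a quick sanity check of Proposition 1 (a hand-constructed toy example, not from the paper), consider a one-dimensional $m$-strongly convex quadratic, for which the implicit update has a closed form:

```python
import numpy as np

# f(theta) = (m/2) * theta^2 is m-strongly convex; the implicit update solves
# theta_next = theta - eta * m * theta_next, i.e. theta_next = theta / (1 + eta * m).
m, eta, theta = 2.0, 0.5, 3.0
theta_next = theta / (1.0 + eta * m)

grad_explicit = m * theta        # gradient used by vanilla SGD
grad_implicit = m * theta_next   # gradient effectively used by Implicit SGD
assert grad_implicit <= grad_explicit - m * abs(theta_next - theta)   # Proposition 1
```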
The bound in Proposition 1 can be tightened for our particular problem. Unlike vanilla SGD, whose step size magnitude is exponential in $x_i^\top(w_k - w_{y_i}) - u_i$, as shown in (7), for Implicit SGD the step size is asymptotically linear in $x_i^\top(w_k - w_{y_i}) - u_i$. This effectively guarantees that Implicit SGD cannot suffer from computational overflow.

Proposition 2. Consider the Implicit SGD algorithm where in each iteration only one datapoint $i$ and one class $k \neq y_i$ is sampled and there is no ridge regularization. The magnitude of its step size in $w$ is $O(x_i^\top(w_k - w_{y_i}) - u_i)$.
Proof. The proof is provided in Appendix F.2.
The difficulty in applying Implicit SGD is that in each iteration one has to compute a solution to (9). The tractability of this procedure is problem dependent. We show that computing a solution to (9) is indeed tractable for the problem considered in this paper. The details of these mechanisms are laid out in full in Appendix F.
Proposition 3. Consider the Implicit SGD algorithm where in each iteration $n$ datapoints and $m$ classes are sampled. Then the Implicit SGD update $\theta^{(t+1)}$ can be computed to within accuracy $\epsilon$ in runtime $O(n(n+m)(D + n\log(\epsilon^{-1})))$.
Proof. The proof is provided in Appendix F.3.
In Proposition 3 the $\log(\epsilon^{-1})$ factor comes from applying a first-order method to solve the strongly convex Implicit SGD update equation. It may be the case that performing this optimization is more expensive than computing the $x_i^\top w_k$ inner products, and so each iteration of Implicit SGD may be significantly slower than that of vanilla SGD or U-max. However, in the special case of $n = m = 1$ we can use the bisection method to give an explicit upper bound on the optimization cost.

Proposition 4. Consider the Implicit SGD algorithm with learning rate $\eta$ where in each iteration only one datapoint $i$ and one class $k \neq y_i$ is sampled and there is no ridge regularization. Then the Implicit SGD iterate $\theta^{(t+1)}$ can be computed to within accuracy $\epsilon$ with only two $D$-dimensional vector inner products and at most $\log_2(\epsilon^{-1}) + \log_2(|x_i^\top(w_k - w_{y_i}) - u_i| + 2\eta N\|x_i\|_2^2 + \log(K-1))$ bisection method function evaluations.
Proof. The proof is provided in Appendix F.1
For any reasonably large dimension $D$, the cost of the two $D$-dimensional vector inner products will outweigh the cost of the bisection, and Implicit SGD will have roughly the same speed per iteration as vanilla SGD or U-max.
In summary, Implicit SGD is robust to the learning rate, does not have overflow issues and its updates can be computed in roughly the same time as vanilla SGD.
4 EXPERIMENTS
Two sets of experiments were conducted to assess the performance of the proposed methods. The first compares U-max and Implicit SGD to the state-of-the-art over seven real world datasets. The second investigates the difference in performance between the two double-sum formulations discussed in Section 2.3. We begin by specifying the experimental setup and then move onto the results.
4.1 EXPERIMENTAL SETUP
Data. We used the MNIST, Bibtex, Delicious, Eurlex, AmazonCat-13K, Wiki10, and Wiki-small datasets7, the properties of which are summarized in Table 1. Most of the datasets are multi-label and, as is standard practice (Titsias, 2016), we took the first label as being the true label and discarded the remaining labels. To make the computation more manageable, we truncated the number of features to be at most 10,000 and the training and test size to be at most 100,000. If, as a result of the dimension truncation, a datapoint had no non-zero features then it was discarded. The features of each dataset were normalized to have unit L2 norm. All of the datasets were pre-separated into training and test sets. We focus only on the performance of the algorithms on the training set, as the goal in this paper is to investigate how best to optimize the softmax likelihood, which is given over the training set.
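A sketch of this preprocessing (dataset loading and the first-label extraction are elided; reading "truncate" as keeping the leading columns and rows is our assumption, and the function name is illustrative):

```python
import numpy as np
from sklearn.preprocessing import normalize

def preprocess(X, y, max_features=10_000, max_rows=100_000):
    """Truncate features and rows, drop datapoints left with no features, L2-normalize rows."""
    X, y = X[:max_rows, :max_features], np.asarray(y)[:max_rows]
    nonzero = np.asarray((X != 0).sum(axis=1)).ravel() > 0
    X, y = X[nonzero], y[nonzero]
    return normalize(X, norm="l2"), y       # unit L2 norm per datapoint
```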
Algorithms. We compared our algorithms to the state-of-the-art methods for optimizing the softmax which have runtime O(D) per iteration8. The competitors include Noise Contrastive Estimation (NCE) (Mnih & Teh, 2012), Importance Sampling (IS) (Bengio & Senécal, 2008) and One-Vs-Each (OVE) (Titsias, 2016). Note that these methods are all biased and will not converge to the optimal softmax MLE, but something close to it. For these algorithms we set n = 100,m = 5, which are standard settings9. For Implicit SGD we chose to implement the version in Proposition 4 which has n = 1,m = 1. Likewise for U-max we set n = 1,m = 1 and the threshold parameter δ = 1. The ridge regularization parameter µ was set to zero for all algorithms.
Epochs and losses. Each algorithm is run for 50 epochs on each dataset. The learning rate is decreased by a factor of 0.9 each epoch. Both the prediction error and log-loss (2) are recorded at the end of 10 evenly spaced epochs over the 50 epochs.
Learning rate. The magnitude of the gradient differs in each algorithm, due to either under- or over-estimating the log-sum derivative from (2). To set a reasonable learning rate for each algorithm on each dataset, we ran them on 10% of the training data with initial learning rates $\eta = 10^{0}, 10^{\pm1}, 10^{\pm2}, 10^{\pm3}$. The learning rate with the best performance after 50 epochs is then used when the algorithm is applied to the full dataset. The tuned learning rates are presented in Table 2. Note that vanilla SGD requires a very small learning rate, otherwise it suffers from overflow.

7 All of the datasets were downloaded from http://manikvarma.org/downloads/XC/XMLRepository.html, except Wiki-small which was obtained from http://lshtc.iit.demokritos.gr/.
8 Raman et al. (2016) have runtime O(NKD) per epoch, which is equivalent to O(KD) per iteration. This is a factor of K slower than the methods we compare against.
9 We also experimented with setting n = 1, m = 1 in these methods and there was virtually no difference, except that the runtime was slower. For example, in Appendix G we plot the performance of NCE with n = 1, m = 1 and n = 100, m = 5 applied to the Eurlex dataset for different learning rates, and there is very little difference between the two.
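The tuning protocol described in the "Learning rate" paragraph above might look like the following sketch; `run_sgd` is a placeholder for whichever optimizer is being tuned and is assumed to return the final training log-loss:

```python
import numpy as np

def tune_learning_rate(X, y, run_sgd, n_epochs=50, subsample=0.10, seed=0):
    """Grid-search the initial eta on a 10% subsample; keep the rate with the lowest final log-loss."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(y), size=int(subsample * len(y)), replace=False)
    losses = {}
    for eta in (10.0 ** p for p in (-3, -2, -1, 0, 1, 2, 3)):
        # run_sgd is assumed to train for n_epochs (decaying eta by 0.9 each epoch)
        # and to return the final training log-loss on the data it was given.
        losses[eta] = run_sgd(X[idx], y[idx], eta=eta, n_epochs=n_epochs)
    return min(losses, key=losses.get)
```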
4.2 RESULTS
Comparison to state-of-the-art. Plots of the performance of the algorithms on each dataset are displayed in Figure 1 with the relative performance compared to Implicit SGD given in Table 3. The Implicit SGD method has the best performance on virtually all datasets. Not only does it converge faster in the first few epochs, it also converges to the optimal MLE (unlike the biased methods that prematurely plateau). On average after 50 epochs, Implicit SGD's log-loss is a factor of 4.29 lower than the previous state-of-the-art. The U-max algorithm also outperforms the previous state-of-the-art on most datasets. U-max performs better than Implicit SGD on AmazonCat, although in general Implicit SGD has superior performance. Vanilla SGD's performance is better than the previous state-of-the-art but worse than U-max and Implicit SGD. The difference in performance between vanilla SGD and U-max can largely be explained by vanilla SGD requiring a smaller learning rate to avoid computational overflow.

The sensitivity of each method to the initial learning rate can be seen in Appendix G, where the results of running each method on the Eurlex dataset with learning rates $\eta = 10^{0}, 10^{\pm1}, 10^{\pm2}, 10^{\pm3}$ are presented. The results are consistent with those in Figure 1, with Implicit SGD having the best performance for most learning rate settings. For learning rates $\eta = 10^{3}$ and $10^{4}$ the U-max log-loss is extremely large. This can be explained by Theorem 1, which does not guarantee convergence for U-max if the learning rate is too high.

Comparison of double-sum formulations. Figure 2 illustrates the performance on the Eurlex dataset of U-max using the proposed double-sum in (6) compared to U-max using the double-sum of Raman et al. (2016) in (8). The proposed double-sum clearly outperforms for all10 learning rates $\eta = 10^{0}, 10^{\pm1}, 10^{\pm2}, 10^{-3}, 10^{-4}$, with its 50th-epoch log-loss being 3.08 times lower on average. This supports the argument from Section 2.3 that SGD methods applied to the proposed double-sum have smaller-magnitude gradients and converge faster.
5 CONCLUSION
In this paper we have presented the U-max and Implicit SGD algorithms for optimizing the softmax likelihood. These are the first algorithms that require only O(D) computation per iteration (without extra work at the end of each epoch) and converge to the optimal softmax MLE. Implicit SGD can be efficiently implemented and clearly outperforms the previous state-of-the-art on seven real world datasets. The result is a new method that enables optimizing the softmax for an extremely large number of samples and classes.

So far Implicit SGD has only been applied to the simple softmax, but it could also be applied to any neural network where the final layer is the softmax. Applying Implicit SGD to word2vec-type models, which can be viewed as softmaxes where both x and w are parameters to be fit, might be particularly fruitful.

10 The learning rates $\eta = 10^{3}$ and $10^{4}$ are not displayed in Figure 2 for visualization purposes. They had similar behavior to $\eta = 10^{2}$.
A PROOF OF VARIABLE BOUNDS AND STRONG CONVEXITY
We first establish that the optimal values of $u$ and $W$ are bounded. Next, we show that within these bounds the objective is strongly convex and its gradients are bounded.

Lemma 1 (Raman et al., 2016). The optimal value of $W$ is bounded as $\|W^*\|_2^2 \leq B_W^2$, where $B_W^2 = \frac{2}{\mu}N\log(K)$.
Proof.
$$-N\log(K) = L(0) \leq L(W^*) \leq -\frac{\mu}{2}\|W^*\|_2^2$$
Rearranging gives the desired result.
Lemma 2. The optimal value of $u_i$ is bounded as $u_i^* \leq B_u$, where $B_u = \log(1 + (K-1)e^{2B_xB_W})$ and $B_x = \max_i\{\|x_i\|_2\}$.

Proof.
$$u_i^* = \log\Big(1 + \sum_{k \neq y_i} e^{x_i^\top(w_k - w_{y_i})}\Big) \leq \log\Big(1 + \sum_{k \neq y_i} e^{\|x_i\|_2(\|w_k\|_2 + \|w_{y_i}\|_2)}\Big) \leq \log\Big(1 + \sum_{k \neq y_i} e^{2B_xB_W}\Big) = \log\big(1 + (K-1)e^{2B_xB_W}\big).$$
Lemma 3. If $\|W\|_2^2 \leq B_W^2$ and $u_i \leq B_u$ then $f(u,W)$ is strongly convex with convexity constant greater than or equal to $\min\{\exp(-B_u), \mu\}$.

Proof. Let us rewrite $f$ as
$$f(u,W) = \sum_{i=1}^N \Big(u_i + e^{-u_i} + \sum_{k \neq y_i} e^{x_i^\top(w_k - w_{y_i}) - u_i}\Big) + \frac{\mu}{2}\|W\|_2^2 = \sum_{i=1}^N \Big(a_i^\top\theta + e^{-u_i} + \sum_{k \neq y_i} e^{b_{ik}^\top\theta}\Big) + \frac{\mu}{2}\|W\|_2^2,$$
where $\theta = (u^\top, w_1^\top, ..., w_K^\top)^\top \in \mathbb{R}^{N+KD}$ with $a_i$ and $b_{ik}$ being appropriately defined. The Hessian of $f$ is
$$\nabla^2 f(\theta) = \sum_{i=1}^N \Big(e^{-u_i}e_ie_i^\top + \sum_{k \neq y_i} e^{b_{ik}^\top\theta}b_{ik}b_{ik}^\top\Big) + \mu\cdot\mathrm{diag}\{0_N, 1_{KD}\},$$
where $e_i$ is the $i$-th canonical basis vector, $0_N$ is an $N$-dimensional vector of zeros and $1_{KD}$ is a $KD$-dimensional vector of ones. It follows that
$$\nabla^2 f(\theta) \succeq I\cdot\min\Big\{\min_{0 \leq u \leq B_u}\{e^{-u_i}\},\ \mu\Big\} = I\cdot\min\{\exp(-B_u), \mu\} \succ 0.$$
Lemma 4. If $\|W\|_2^2 \leq B_W^2$ and $u_i \leq B_u$ then the 2-norm of both the gradient of $f$ and each stochastic gradient $\nabla f_{ik}$ is bounded by
$$B_f = N\max\{1, e^{B_u} - 1\} + 2\big(Ne^{B_u}B_x + \mu\max_k\{\beta_k\}B_W\big).$$
Proof. By Jensen’s inequality max
‖W‖22≤B2W ,0≤u≤Bu ‖∇f(u,W )‖2 = max ‖W‖22≤B2W ,0≤u≤Bu ‖∇Eikfik(u,W )‖2
≤ max ‖W‖22≤B2W ,0≤u≤Bu Eik‖∇fik(u,W )‖2
≤ max ‖W‖22≤B2W ,0≤u≤Bu max ik ‖∇fik(u,W )‖2.
Using the results from Lemmas 1 and 2 and the definition of fik from (6), ‖∇uifik(u,W )‖2 = ‖N ( 1− e−ui − (K − 1)ex > i (wk−wyi )−ui) ) ‖2
= N |1− e−ui(1 + (K − 1)ex > i (wk−wyi ))|
≤ N max{1, (1 + (K − 1)e‖xi‖2(‖wk‖2+‖wyi‖2))− 1} ≤ N max{1, eBu − 1}
and for j indexing either the sampled class k 6= yi or the true label yi, ‖∇wjfik(u,W )‖2 = ‖ ±N(K − 1)ex > i (wk−wyi )−uixi + µβjwj‖2
≤ N(K − 1)e‖xi‖2(‖wk‖2+‖wyi‖2)‖xi‖2 + µβj‖wj‖2 ≤ NeBuBx + µmax
k {βk}BW .
Letting Bf = N max{1, eBu − 1}+ 2(NeBuBx + µmax
k {βk}BW )
we have ‖∇fik(u,W )‖2 ≤ ‖∇uifik(u,W )‖2 + ‖∇wkfik(u,W )‖2 + ‖∇wyi fik(u,W )‖2 = Bf . In conclusion: max
‖W‖22≤B2W ,0≤u≤Bu ‖∇f(u,W )‖2 ≤ max ‖W‖22≤B2W ,ui≤Bu, max ik ‖∇fik(u,W )‖2 ≤ Bf .
B STOCHASTIC COMPOSITION OPTIMIZATION
We can write the equation for L(W ) from (3) as (where we have set µ = 0 for notational simplicity),
L(W ) = − N∑ i=1 log(1 + ∑ k 6=yi ex > i (wk−wyi ))
= Ei[hi(Ek[gk(W )])] where i ∼ unif({1, ..., N}), k ∼ unif({1, ...,K}), hi(v) ∈ R, gk(W ) ∈ RN and
hi(v) = −N log(1 + e>i v)
[gk(W )]i =
{ Kex > i (wk−wyi ) if k 6= yi
0 otherwise .
Here e>i v = vi ∈ R is a variable that is explicitly kept track of with vi ≈ Ek[gk(W )]i =∑ k 6=yi e
x>i (wk−wyi ) (with exact equality in the limit as t → ∞). Clearly vi in stochastic composition optimization has a similar role as ui has in our formulation for f in (5).
If i, k are sampled with k 6= yi in stochastic composition optimization then the updates are of the form (Wang et al., 2016)
wyi = wyi + ηtNK ex > i (zk−zyi )
1 + vi xi
wk = wk − ηtNK ex > i (zk−zyi )
1 + vi xi,
where zk is a smoothed value of wk. These updates have the same numerical instability issues as vanilla SGD on f in (5): it is possible that e x>i zk
1+vi 1 where ideally we should have 0 ≤ e
x>i zk 1+vi ≤ 1.
C U-MAX PSEUDOCODE
Algorithm 1: U-max

Input: data $D = \{(y_i, x_i) : y_i \in \{1, ..., K\},\ x_i \in \mathbb{R}^D\}_{i=1}^N$, number of classes $K$, number of datapoints $N$, learning rate $\eta_t$, class sampling probabilities $\beta_k = \frac{N}{n_k + (N - n_k)/(K-1)}$, threshold parameter $\delta > 0$, bound $B_W$ on $W$ such that $\|W\|_2 \leq B_W$, and bound $B_u$ on $u$ such that $u_i \leq B_u$ for $i = 1, ..., N$.
Output: $W$

Initialize: $w_k \leftarrow 0$ for $k = 1, ..., K$; $u_i \leftarrow \log(K)$ for $i = 1, ..., N$.
For $t = 1$ to $T$:
  Sample indices: $i \sim \mathrm{unif}(\{1, ..., N\})$, $k \sim \mathrm{unif}(\{1, ..., K\} - \{y_i\})$.
  Increase $u_i$: if $u_i < \log(1 + e^{x_i^\top(w_k - w_{y_i})}) - \delta$ then $u_i \leftarrow \log(1 + e^{x_i^\top(w_k - w_{y_i})})$.
  SGD step:
    $w_k \leftarrow w_k - \eta_t\big[N(K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i}x_i + \mu\beta_k w_k\big]$
    $w_{y_i} \leftarrow w_{y_i} - \eta_t\big[-N(K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i}x_i + \mu\beta_{y_i} w_{y_i}\big]$
    $u_i \leftarrow u_i - \eta_t\big[N(1 - e^{-u_i} - (K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i})\big]$
  Projection:
    $w_k \leftarrow w_k \cdot \min\{1, B_W/\|w_k\|_2\}$
    $w_{y_i} \leftarrow w_{y_i} \cdot \min\{1, B_W/\|w_{y_i}\|_2\}$
    $u_i \leftarrow \max\{0, \min\{B_u, u_i\}\}$
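A compact NumPy rendering of Algorithm 1 (a sketch with $\mu = 0$, matching the experimental setting in Section 4; the signature, inner-loop length, and the 0.9 per-epoch decay are our choices rather than released code):

```python
import numpy as np

def u_max(X, y, K, eta0=1.0, delta=1.0, n_epochs=50, B_w=None, B_u=None, seed=0):
    """U-max (Algorithm 1) with mu = 0; X is a dense (N, D) array, y holds labels in {0, ..., K-1}."""
    N, D = X.shape
    rng = np.random.default_rng(seed)
    W = np.zeros((D, K))                                  # columns are the class vectors w_k
    U = np.full(N, np.log(K))
    eta = eta0
    for _epoch in range(n_epochs):
        for _ in range(N):
            i = rng.integers(N)
            k = rng.integers(K - 1)
            k = k + 1 if k >= y[i] else k                 # uniform over classes excluding y_i
            x_i = X[i]
            margin = x_i @ (W[:, k] - W[:, y[i]])
            bound = np.logaddexp(0.0, margin)             # log(1 + e^margin), computed stably
            if U[i] < bound - delta:                      # modification (a): raise u_i
                U[i] = bound
            scale = N * (K - 1) * np.exp(margin - U[i])
            W[:, k]    -= eta * scale * x_i               # SGD step using the gradients (7)
            W[:, y[i]] += eta * scale * x_i
            U[i]       -= eta * (N * (1.0 - np.exp(-U[i])) - scale)
            if B_w is not None:                           # modification (b): projections
                for j in (k, y[i]):
                    W[:, j] *= min(1.0, B_w / (np.linalg.norm(W[:, j]) + 1e-12))
            U[i] = np.clip(U[i], 0.0, B_u if B_u is not None else np.inf)
        eta *= 0.9                                        # per-epoch decay used in Section 4
    return W, U
```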
D PROOF OF CONVERGENCE OF U-MAX METHOD
In this section we will prove the claim made in Theorem 1, that U-max converges to the softmax optimum. Before proving the theorem, we will need a lemma.
Lemma 5. For any $\delta > 0$, if $u_i \leq \log(1 + e^{x_i^\top(w_k - w_{y_i})}) - \delta$ then setting $u_i = \log(1 + e^{x_i^\top(w_k - w_{y_i})})$ decreases $f(u,W)$ by at least $\delta^2/2$.
Proof. As in Lemma 3, let θ = (u>, w>1 , ..., w > k ) ∈ RN+KD. Then setting ui = log(1 + ex > i (wk−wyi )) is equivalent to setting θ = θ + ∆ei where ei is the ith canonical basis vector and ∆ = log(1 + ex > i (wk−wyi ))− ui ≥ δ. By a second order Taylor series expansion
f(θ)− f(θ + ∆ei) ≥ ∇f(θ + ∆ei)>ei∆ + ∆2
2 e>i ∇2f(θ + λ∆ei)ei (10)
for some λ ∈ [0, 1]. Since the optimal value of ui for a given value of W is u∗i (W ) = log(1 +∑ k 6=yi e x>i (wk−wyi )) ≥ log(1+ex>i (wk−wyi )), we must have∇f(θ+∆ei)>ei ≤ 0. From Lemma 3
we also know that e>i ∇2f(θ + λ∆ei)ei = exp(−(ui + λ∆)) + ∑ k 6=yi ex > i (wk−wyi )−(ui+λ∆)
= exp(−λ∆)e−ui(1 + ∑ k 6=yi ex > i (wk−wyi )) = exp(−λ∆) exp(−(log(1 + ex > i (wk−wyi ))−∆))(1 +
∑ k 6=yi ex > i (wk−wyi ))
≥ exp(∆− λ∆) ≥ exp(∆−∆) = 1.
Putting in bounds for the gradient and Hessian terms in (10),
f(θ)− f(θ + ∆ei) ≥ ∆2 2 ≥ δ 2 2 .
Now we are in a position to prove Theorem 1.
Proof of Theorem 1. Let θ(t) = (u(t),W (t)) ∈ Θ denote the value of the tth iterate. Here Θ = {θ : ‖W‖22 ≤ B2W , ui ≤ Bu} is a convex set containing the optimal value of f(θ).
Let π(δ)i (θ) denote the operation of setting ui = log(1 + e x>i (wk−wyi )) if ui ≤ log(1 + ex > i (wk−wyi )) − δ. If indices i, k are sampled for the stochastic gradient and ui ≤ log(1 + ex > i (wk−wyi ))− δ, then the value of f at the t+ 1st iterate is bounded as
f(θ(t+1)) = f(πi(θ (t))− ηt∇fik(πi(θ(t))))
≤ f(πi(θ(t))) + max θ∈Θ ‖ηt∇fik(πi(θ))‖2 max θ∈Θ ‖∇f(θ)‖2
≤ f(πi(θ(t))) + ηtB2f ≤ f(θ(t))− δ2/2 + ηtB2f ≤ f(θ(t) − ηt∇fik(θ(t)))− δ2/2 + 2ηtB2f ≤ f(θ(t) − ηt∇fik(θ(t))),
since ηt ≤ δ2/(4B2f ) by assumption. Alternatively if ui ≥ log(1 + ex > i (wk−wyi ))− δ then
f(θ(t+1)) = f(πi(θ (t))− ηt∇fik(πi(θ(t))))
= f(θ(t) − ηt∇fik(θ(t))).
Either way f(θ(t+1)) ≤ f(θ(t) − ηt∇fik(θ(t))). Taking expectations with respect to i, k,
Eik[f(θ(t+1))] ≤ Eik[f(θ(t) − ηt∇fik(θ(t)))].
Finally let P denote the projection of θ onto Θ. Since Θ is a convex set containing the optimum we have f(P (θ)) ≤ f(θ) for any θ, and so
Eik[f(P (θ(t+1)))] ≤ Eik[f(θ(t) − ηt∇fik(θ(t)))],
which shows that the rate of convergence in expectation of U-max is at least as fast as that of standard SGD.
E PROOF OF GENERAL IMPLICIT SGD GRADIENT BOUND
Proof of Theorem 2. Let f(θ, ξ) be m-strongly convex for all ξ. The vanilla SGD step size is ηt‖∇f(θ(t), ξt)‖2 where ηt is the learning rate for the tth iteration. The Implicit SGD step size is ηt‖∇f(θ(t+1), ξt)‖2 where θ(t+1) satisfies θ(t+1) = θ(t) − ηt∇f(θ(t+1), ξt). Rearranging, ∇f(θ(t+1), ξt) = (θ(t)−θ(t+1))/ηt and so it must be the case that∇f(θ(t+1), ξt)>(θ(t)−θ(t+1)) = ‖∇f(θ(t+1), ξt)‖2‖θ(t) − θ(t+1)‖2. Our desired result follows:
‖∇f(θ(t), ξt)‖2 ≥ ∇f(θ(t))>(θ(t) − θ(t+1)) ‖θ(t) − θ(t+1)‖2
≥ ∇f(θ (t+1))>(θ(t) − θ(t+1)) +m‖θ(t) − θ(t+1)‖22
‖θ(t) − θ(t+1)‖2
= ‖∇f(θ(t+1))‖2‖θ(t) − θ(t+1)‖2 +m‖θ(t) − θ(t+1)‖22 ‖θ(t) − θ(t+1)‖2 = ‖∇f(θ(t+1))‖2 +m‖θ(t) − θ(t+1)‖2
where the first inequality is by Cauchy-Schwarz and the second inequality by strong convexity.
F UPDATE EQUATIONS FOR IMPLICIT SGD
In this section we will derive the updates for Implicit SGD. We will first consider the simplest case where only one datapoint (xi, yi) and a single class is sampled in each iteration with no regularizer. Then we will derive the more complicated update for when there are multiple datapoints and sampled classes with a regularizer.
F.1 SINGLE DATAPOINT, SINGLE CLASS, NO REGULARIZER
Equation (6) for the stochastic gradient for a single datapoint and single class with $\mu = 0$ is
$$f_{ik}(u,W) = N\left(u_i + e^{-u_i} + (K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i}\right).$$
The Implicit SGD update corresponds to finding the variables optimizing
$$\min_{u,W}\left\{2\eta f_{ik}(u,W) + \|u - \tilde u\|_2^2 + \|W - \tilde W\|_2^2\right\},$$
where $\eta$ is the learning rate and the tilde refers to the value of the old iterate (Toulis et al., 2016, Eq. 6). Since $f_{ik}$ is only a function of $u_i, w_k, w_{y_i}$, the optimization reduces to
$$\min_{u_i, w_k, w_{y_i}}\left\{2\eta f_{ik}(u_i, w_k, w_{y_i}) + (u_i - \tilde u_i)^2 + \|w_{y_i} - \tilde w_{y_i}\|_2^2 + \|w_k - \tilde w_k\|_2^2\right\}$$
$$= \min_{u_i, w_k, w_{y_i}}\left\{2\eta N\left(u_i + e^{-u_i} + (K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i}\right) + (u_i - \tilde u_i)^2 + \|w_{y_i} - \tilde w_{y_i}\|_2^2 + \|w_k - \tilde w_k\|_2^2\right\}.$$
The optimal values of $w_k, w_{y_i}$ must deviate from the old values $\tilde w_k, \tilde w_{y_i}$ in the direction of $x_i$. Furthermore, we can observe that the deviation of $w_k$ must be exactly opposite that of $w_{y_i}$, that is,
$$w_{y_i} = \tilde w_{y_i} + a\frac{x_i}{2\|x_i\|_2^2}, \qquad w_k = \tilde w_k - a\frac{x_i}{2\|x_i\|_2^2} \qquad (11)$$
for some $a \geq 0$. The optimization problem reduces to
$$\min_{u_i,\ a \geq 0}\left\{2\eta N\left(u_i + e^{-u_i} + (K-1)e^{x_i^\top(\tilde w_k - \tilde w_{y_i})}e^{-a-u_i}\right) + (u_i - \tilde u_i)^2 + \frac{a^2}{2\|x_i\|_2^2}\right\}. \qquad (12)$$
We’ll approach this optimization problem by first solving for a as a function of ui and then optimize over ui. Once the optimal value of ui has been found, we can calculate the corresponding optimal value of a. Finally, substituting a into (11) will give us our updated value of W .
Solving for a
We solve for $a$ by setting its derivative in (12) equal to zero:
$$0 = \partial_a\left\{2\eta N\left(u_i + e^{-u_i} + (K-1)e^{x_i^\top(\tilde w_k - \tilde w_{y_i})}e^{-a-u_i}\right) + (u_i - \tilde u_i)^2 + \frac{a^2}{2\|x_i\|_2^2}\right\}$$
$$= -2\eta N(K-1)e^{x_i^\top(\tilde w_k - \tilde w_{y_i}) - u_i}e^{-a} + \frac{a}{\|x_i\|_2^2}$$
$$\Leftrightarrow\quad ae^a = 2\eta N(K-1)\|x_i\|_2^2\, e^{x_i^\top(\tilde w_k - \tilde w_{y_i}) - u_i}. \qquad (13)$$
The solution for $a$ can be written in terms of the principal branch of the Lambert W function $P$:
$$a(u_i) = P\left(2\eta N(K-1)\|x_i\|_2^2\, e^{x_i^\top(\tilde w_k - \tilde w_{y_i}) - u_i}\right) = P\left(e^{x_i^\top(\tilde w_k - \tilde w_{y_i}) - u_i + \log(2\eta N(K-1)\|x_i\|_2^2)}\right). \qquad (14)$$
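The two Lambert W identities used here and below, $ae^a = z \Rightarrow a = P(z)$ and $e^{-P(z)} = P(z)/z$, can be checked numerically (SciPy's `lambertw` implements the principal branch; the value of z is arbitrary):

```python
import numpy as np
from scipy.special import lambertw

z = 3.7
a = float(np.real(lambertw(z)))              # principal branch P(z)
assert np.isclose(a * np.exp(a), z)          # a e^a = z, i.e. the solution of (13)
assert np.isclose(np.exp(-a), a / z)         # e^{-P(z)} = P(z)/z, used to simplify (15)
```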
Substituting the solution $a(u_i)$ into (12), we now only need to minimize over $u_i$:
$$\min_{u_i}\left\{2\eta N u_i + 2\eta N e^{-u_i} + 2\eta N(K-1)e^{x_i^\top(\tilde w_k - \tilde w_{y_i})}e^{-a(u_i)-u_i} + (u_i - \tilde u_i)^2 + \frac{a(u_i)^2}{2\|x_i\|_2^2}\right\}$$
$$= \min_{u_i}\left\{2\eta N u_i + 2\eta N e^{-u_i} + \frac{a(u_i)}{\|x_i\|_2^2} + (u_i - \tilde u_i)^2 + \frac{a(u_i)^2}{2\|x_i\|_2^2}\right\} \qquad (15)$$
where we used the fact that e−P (z) = P (z)/z. The derivative with respect to ui in (15) is
∂ui { 2ηNui + 2ηNe −ui + a(ui)‖xi‖−22 + (ui − ũi)2 + a(ui)2 1
2‖xi‖22 } = 2ηN − 2ηNe−ui + ∂uia(ui)‖xi‖−22 + 2(ui − ũi) + 2a(ui)∂uia(ui) 1
2‖xi‖22
= 2ηN − 2ηNe−ui − a(ui) 1 + a(ui) ‖xi‖−22 + 2(ui − ũi)− a(ui)
2
(1 + a(ui))‖xi‖22 (16)
where to calculate ∂uia(ui) we used the fact that ∂zP (z) = P (z) z(1+P (z)) and so
∂uia(ui) = − a(ui)
ex > i (w̃k−w̃yi )−ui+log(2ηN(K−1)‖xi‖ 2 2)(1 + a(ui))
ex > i (w̃k−w̃yi )−ui+log(2ηN(K−1)‖xi‖ 2 2)
= − a(ui) 1 + a(ui) .
Bisection method for ui
We can solve for ui using the bisection method. Below we show how to calculate the initial lower and upper bounds of the bisection interval and prove that the size of the interval is bounded (which ensures fast convergence).
Start by calculating the derivative in (16) at ui = ũi. If the derivative is negative then the optimal ui is lower bounded by ũi. An upper bound is provided by
ui = argmin ui
{ 2ηN(ui + e −ui + (K − 1)ex > i (w̃k−w̃yi )e−a(ui)−ui) + (ui − ũi)2 + a(ui) 2
2‖xi‖22 } ≤ argmin
ui
{ 2ηN(ui + e −ui + (K − 1)ex > i (w̃k−w̃yi )e−ui) + (ui − ũi)2 } ≤ argmin
ui
{ 2ηN(ui + e −ui + (K − 1)ex > i (w̃k−w̃yi )e−ui) } = log(1 + (K − 1)ex > i (w̃k−w̃yi )).
In the first inequality we set a(ui) = 0, since by the envelop theorem the gradient of ui is monotonically increasing in a. In the second inequality we used the assumption that ui is lower bounded by ũi. Thus if the derivative in (16) is negative at ui = ũi then ũi ≤ ui ≤ log(1+(K−1)ex > i (w̃k−w̃yi )). If (K− 1)ex>i (w̃k−w̃yi ) ≤ 1 then the size of the interval must be less than log(2), since ũi ≥ 0. Otherwise the gap must be at most log(2(K−1)ex>i (w̃k−w̃yi ))−ũi = log(2(K−1))+x>i (w̃k−w̃yi)−ũi. Either way, the gap is upper bounded by log(2(K − 1)) + |x>i (w̃k − w̃yi)− ũi|. Now let us consider if the derivative in (16) is positive at ui = ũi. Then ui is upper bounded by ũi. Denoting a′ as the optimal value of a, we can lower bound ui using (12)
ui = argmin ui
{ 2ηN(ui + e −ui + (K − 1)ex > i (w̃k−w̃yi )e−a ′−ui) + (ui − ũi)2 }
≥ argmin ui
{ ui + e −ui + (K − 1)ex > i (w̃k−w̃yi )e−a ′−ui }
= log(1 + (K − 1) exp(x>i (w̃k − w̃yi)− a′)) ≥ log(K − 1) + x>i (w̃k − w̃yi)− a′ (17)
where the first inequality comes dropping the (ui − ũi)2 term due to the assumption that ui < ũi. Recall (13),
a′ea ′ = 2ηN(K − 1)‖xi‖22ex > i (w̃k−w̃yi )−ui .
The solution for a′ is strictly monotonically increasing as a function of the right side of the equation. Thus replacing the right side with an upper bound on its value results in an upper bound on a′. Substituting the bound for ui,
a′ ≤ min{a : aea = 2ηN(K − 1)‖xi‖22ex > i (w̃k−w̃yi )−(log(K−1)+x > i (w̃k−w̃yi )−a)}
= min{a : a = 2ηN‖xi‖22} = 2ηN‖xi‖22. (18)
Substituting this bound for a′ into (17) yields
$$u_i \geq \log(K-1) + x_i^\top(\tilde w_k - \tilde w_{y_i}) - 2\eta N\|x_i\|_2^2.$$
Thus if the derivative in (16) is positive at $u_i = \tilde u_i$ then $\log(K-1) + x_i^\top(\tilde w_k - \tilde w_{y_i}) - 2\eta N\|x_i\|_2^2 \leq u_i \leq \tilde u_i$. The gap between the upper and lower bounds is $\tilde u_i - x_i^\top(\tilde w_k - \tilde w_{y_i}) + 2\eta N\|x_i\|_2^2 - \log(K-1)$. In summary, for both cases of the sign of the derivative in (16) at $u_i = \tilde u_i$, we are able to calculate a lower and an upper bound on the optimal value of $u_i$ such that the gap between the bounds is at most $|\tilde u_i - x_i^\top(\tilde w_k - \tilde w_{y_i})| + 2\eta N\|x_i\|_2^2 + \log(K-1)$. This allows us to perform the bisection method, where for accuracy $\epsilon > 0$ we require only $\log_2(\epsilon^{-1}) + \log_2(|\tilde u_i - x_i^\top(\tilde w_k - \tilde w_{y_i})| + 2\eta N\|x_i\|_2^2 + \log(K-1))$ function evaluations.
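The whole n = m = 1 update can be prototyped directly from (11)-(15) (a sketch under the same no-ridge assumption; it uses SciPy's Lambert W and a bounded scalar minimizer in place of the explicit bisection on the derivative (16), and every name is illustrative):

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

def implicit_sgd_step(w_yi, w_k, u_i, x_i, eta, N, K):
    """One Implicit SGD update for a single (i, k) with k != y_i and no ridge term (Appendix F.1)."""
    xn2 = x_i @ x_i
    margin = x_i @ (w_k - w_yi)              # x_i^T (w_tilde_k - w_tilde_{y_i})

    def a_of_u(u):                           # a(u) from (14), via the principal Lambert W branch
        return float(np.real(lambertw(2 * eta * N * (K - 1) * xn2 * np.exp(margin - u))))

    def objective(u):                        # the one-dimensional problem (15)
        a = a_of_u(u)
        return 2 * eta * N * (u + np.exp(-u)) + a / xn2 + (u - u_i) ** 2 + a ** 2 / (2 * xn2)

    # Bracket the minimizer using the interval bounds derived above.
    lo = min(u_i, margin + np.log(K - 1) - 2 * eta * N * xn2)
    hi = max(u_i, np.logaddexp(0.0, np.log(K - 1) + margin))
    u_new = minimize_scalar(objective, bounds=(lo, hi), method="bounded").x

    step = a_of_u(u_new) * x_i / (2 * xn2)   # the w-update direction from (11)
    return w_yi + step, w_k - step, u_new
```

Replacing the black-box minimizer with a bisection on the derivative (16) over the same interval recovers the function-evaluation bound of Proposition 4.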
F.2 BOUND ON STEP SIZE
Here we will prove that the step size magnitude of Implicit SGD with a single datapoint and sampled class with respect to w is bounded as O(x>i (w̃k − w̃yi)− ũi). We will do so by considering the two cases u′i ≥ ũi and u′i < ũi separately, where u′i denotes the optimal value of ui in the Implicit SGD update and ũi is its value at the previous iterate.
Case: u′i ≥ ũi
Let a′ denote the optimal value of a in the Implicit SGD update. From (14)
a′ = a(u′i)
= P (ex > i (w̃k−w̃yi )−u ′ i+log(2ηN(K−1)‖xi‖ 2 2))
= P (ex > i (w̃k−w̃yi )−ũi+log(2ηN(K−1)‖xi‖ 2 2)).
Now using the fact that P (z) = O(log(z)),
a′ = O(x>i (w̃k − w̃yi)− ũi + log(2ηN(K − 1)‖xi‖22)) = O(x>i (w̃k − w̃yi)− ũi)
Case: u′i < ũi
If u′i < ũi then we can lower bound a ′ from (18) as
a′ ≤ 2ηN‖xi‖22.
Combining cases
Putting together the two cases,
a′ = O(max{x>i (w̃k − w̃yi)− ũi, 2ηN‖xi‖22}) = O(x>i (w̃k − w̃yi)− ũi).
The actual step size in w is ±a xi 2‖xi‖22 . Since a is O(x>i (w̃k − w̃yi)− ũi), the step size magnitude is also O(x>i (w̃k − w̃yi)− ũi).
F.3 MULTIPLE DATAPOINTS, MULTIPLE CLASSES
The Implicit SGD update when there are multiple datapoints, multiple classes, with a regularizer is similar to the singe datapoint, singe class, no regularizer case described above. However, there are a few significant differences. Firstly, we will require some pre-computation to find a low-dimensional representation of the x values in each mini-batch. Secondly, we will integrate out ui for each datapoint (not wk). And thirdly, since the dimensionality of the simplified optimization problem is large, we’ll require first order or quasi-Newton methods to find the optimal solution.
F.3.1 DEFINING THE MINI-BATCH
The first step is to define our mini-batches of size n. We will do this by partitioning the datapoint indices into sets S1, ..., SJ with Sj = {j` : ` = 1, ..., n} for j = 1, ..., bN/nc, SJ = {J` : ` = 1, ..., N mod n}, Si ∩ Sj = ∅ and ∪Jj=1Sj = {1, ..., N}.
Next we define the set of classes Cj which can be sampled for the jth mini-batch. The set Cj is defined to be all sets of m distinct classes that are not equal to any of the labels y for points in the mini-batch, that is, Cj = {(k1, ..., km) : ki ∈ {1, ...,K}, ki 6= k` ∀` ∈ {1, ...,m} − {i}, ki 6= y` ∀` ∈ Sj}. Now we can write down our objective from (5) in terms of an expectation of functions corresponding to our mini-batches:
f(u,W ) = E[fj,C(u,W )]
where j is sampled with probability pj = |Sj |/N and C is sampled uniformly from Cj and
fj,C(u,W ) = p −1 j ∑ i∈Sj ui + e−ui + ∑ k∈Sj−{i} ex > i (wk−wyi )−ui + K − n m ∑ k∈C ex > i (wk−wyi )−ui + µ
2 ∑ k∈C∪Sj βk‖wk‖22.
The value of the regularizing constant βk is such that E[I[k ∈ C ∪ Sj ]βk] = 1, which requires that
β−1k = 1− 1
J J∑ j=1 I[k 6= Sj ](1− m K − |Sj | ).
F.3.2 SIMPLIFYING THE IMPLICIT SGD UPDATE EQUATION
The Implicit SGD update corresponds to solving
min u,W
{ 2ηfj,C(u,W ) + ‖u− ũ‖22 + ‖W − W̃‖22 } ,
where η is the learning rate and the tilde refers to the value of the old iterate (Toulis et al., 2016, Eq. 6). Since fj,C is only a function of uSj = {ui : i ∈ Sj} and Wj,C = {wk : k ∈ Sj ∪ C} the optimization reduces to
min uSj ,Wj,C
{ 2ηfj,C(uSj ,Wj,C) + ‖uSj − ũSj‖22 + ‖Wj,C − W̃j,C‖22 } .
The next step is to analytically minimize the uSj terms. The optimization problem in (21) decomposes into a sum of separate optimization problems in ui for i ∈ Sj ,
min ui
{ 2ηp−1j (ui + e −uidi) + (ui − ũi)2 }
where
di(Wj,C) = 1 + ∑
k∈Sj−{i}
ex > i (wk−wyi ) + K − n m ∑ k∈C ex > i (wk−wyi ).
Setting the derivative of ui equal to zero yields the solution
ui(Wj,C) = ũi − ηp−1j + P (ηp −1 j di(Wj,C) exp(ηp −1 j − ũi))
where P is the principle branch of the Lambert W function. Substituting this solution into our optimization problem and simplifying yields
min Wj,C {∑ i∈Sj (1 + P (ηp−1j di(Wj,C) exp(ηp −1 j − ũi))) 2 + ‖Wj,C − W̃j,C‖22 + µ 2 ∑ k∈C∪Sj βk‖wk‖22 } ,
(19)
where we have used the identity e−P (z) = P (z)/z. We can decompose (19) into two parts by splitting Wj,C = W ‖ j,C + W ⊥ j,C , its components parallel and perpendicular to the span of {xi : i ∈ Sj} respectively. Since the leading term in (19) only depends on W ‖j,C , the two resulting sub-problems are
min W ‖ j,C {∑ i∈Sj (1 + P (ηp−1j di(W ‖ j,C) exp(ηp −1 j − ũi))) 2 + ‖W ‖j,C − W̃ ‖ j,C‖ 2 2 + µ 2 ∑ k∈C∪Sj βk‖w‖k‖ 2 2 } ,
min W⊥j,C
{ ‖W⊥j,C − W̃⊥j,C‖22 + µ
2 ∑ k∈C∪Sj βk‖w⊥k ‖22 }
(20)
Let us focus on the perpendicular component first. Simple calculus yields the optimal value w⊥k = 1
1+µβk/2 w̃⊥k for k ∈ Sj ∪ C.
Moving onto the parallel component, let the span of {xi : i ∈ Sj} have an orthonormal basis11 Vj = (vj1, ..., vjn) ∈ RD×n with xi = Vjbi for some bi ∈ Rn. With this basis we can write w ‖ k = w̃ ‖ k + Vjak for ak ∈ Rn which reduces the parallel component optimization problem to12
min Aj,C {∑ i∈Sj (1 + P (zijC(Aj,C))) 2 + ∑ k∈Sj∪C (1 + µβk 2 )‖ak‖22 + µβkw̃>k Vjak } , (21)
where Aj,C = {ak : k ∈ Sj ∪ C} ∈ R(n+m)×n and
zijC(Aj,C) = ηp −1 j exp(ηp −1 j ) ( exp(−ũi) + ∑ k∈Sj−{i} ex > i (w̃k−w̃yi )−ũieb > i (ak−ayi )
+ K − n m ∑ k∈C ex > i (w̃k−w̃yi )−ũieb > i (ak−ayi ) ) .
11We have assumed here that dim(span({xi : i ∈ Sj})) = n, which will be most often the case. If the dimension of the span is lower than n then let Vj be of dimension D × dim(span({xi : i ∈ Sj})).
12Note that we have used w̃k instead of w̃ ‖ k in writing the parallel component optimization problem. This does not make a difference as w̃k always appears as an inner product with a vector in the span of {xi : i ∈ Sj}.
The eb > i (ak−ayi ) factors come from
x>i wk = x > i (w̃ ‖ k + a > k Vj)
= x>i w̃k + (Vjbi) >Vjak = x>i w̃k + b > i V > j Vjak = x>i w̃k + b > i ak,
since Vj is an orthonormal basis.
F.3.3 OPTIMIZING THE IMPLICIT SGD UPDATE EQUATION
To optimize (21) we need to be able to take the derivative:
∇a` ∑ i∈Sj (1 + P (zijC(Aj,C))) 2 + ∑ k∈Sj∪C (1 + µβk 2 )‖ak‖22 + µβkw̃>k Vjak = ∑ i∈Sj 2(1 + P (zijC(Aj,C)))∂zijC(Aj,C)P (zijC(Aj,C))∇a`zijC(Aj,C)
+ (2 + µβ`)a` + µβ`w̃ > ` Vj
= ∑ i∈Sj 2(1 + P (zijC(Aj,C))) P (zijC(Aj,C)) zijC(Aj,C)(1 + P (zijC(Aj,C))) ∇a`zijC(Aj,C)
+ (2 + µβ`)a` + µβ`w̃ > ` Vj
= ∑ i∈Sj 2 P (zijC(Aj,C)) zijC(Aj,C) ∇a`zijC(Aj,C) + (2 + µβ`)a` + µβ`w̃>` Vj
= ∑ i∈Sj 2e−P (zijC(Aj,C))∇a`zijC(Aj,C) + (2 + µβ`)a` + µβ`w̃>` Vj
where we used that ∂zP (z) = P (z) z(1+P (z)) and e −P (z) = P (z)/z. To complete the calculation of the derivate we need,
∇a`zijC(Aj,C) = ∇a`ηp −1 j exp(ηp −1 j ) ( exp(−ũi) + ∑ k∈Sj−{i} ex > i (w̃`−w̃yi )−ũieb > i (a`−ayi )
+ K − n m ∑ k∈C ex > i (w̃`−w̃yi )−ũieb > i (a`−ayi ) ) = ηp−1j exp(ηp −1 j )bi
· ( I[` ∈ Sj − {i}]ex > i (w̃`−w̃yi )−ũieb > i (a`−ayi )
+ I[` ∈ C]K − n m ex > i (w̃`−w̃yi )−ũieb > i (a`−ayi )
− I[` = yi] ( ∑ k∈Sj−{i} ex > i (w̃`−w̃yi )−ũieb > i (a`−ayi )
+ K − n m ∑ k∈C ex > i (w̃`−w̃yi )−ũieb > i (a`−ayi ) )) .
In order to calculate the full derivate with respect to Aj,C we need to calculate b>i ak for all i ∈ Sj and k ∈ Sj ∪ C. This is a total of n(n + m) inner products of n-dimensional vectors, costing O(n2(n + m)). To find the optimum of (21) we can use any optimization procedure that only uses gradients. Since (21) is strongly convex, standard first order methods can solve to accuracy in O(log( −1)) iterations (Boyd & Vandenberghe, 2004, Sec. 9.3). Thus once we can calculate all of the terms in (21), we can solve it to accuracy in runtime O(n2(n+m) log( −1)).
Once we have solved for Aj,C , we can reconstruct the optimal solution for the parallel component of wk as w ‖ k = w̃ ‖ k + Vjak. Recall that the solution to the perpendicular component is w ⊥ k =
1 1+µβk/2 w̃⊥k . Thus our optimal solution is wk = w̃ ‖ k + Vjak + 1 1+µβk/2 w̃⊥k .
If the features xi are sparse, then we’d prefer to do a sparse update to w, saving computation time. We can achieve this by letting
wk = γk · rk
where γk is a scalar and rk a vector. Updating wk = w̃ ‖ k + Vjak + 1 1+µβk/2 w̃⊥k is equivalent to
γk = γ̃k · 1
1 + µβk/2
rk = r̃k + µβk/2 · r̃‖k + γ̃ −1 k (1 + µβk/2) · Vjak.
Since we only update rk along the span of {xi : i ∈ Sj}, its update is sparse.
F.3.4 RUNTIME
There are two major tasks in calculating the terms in (21). The first is to calculate x>i w̃k for i ∈ Sj and k ∈ Sj ∪ C. There are a total of n(n + m) inner products of D-dimensional vectors, costing O(n(n + m)D). The other task is to find the orthonormal basis Vj of {xi : i ∈ Sj}, which can be achieved using the Gram-Schmidt process in O(n2D). We’ll assume that {Vj : j = 1, ..., J} is computed only once as a pre-processing step when defining the mini-batches. It is exactly because calculating {Vj : j = 1, ..., J} is expensive that we have fixed mini-batches that do not change during the optimization routine.
Adding the cost of calculating the x>i w̃k inner products to the costing of optimizing (21) leads to the claim that solve the Implicit SGD update formula to accuracy in runtime O(n(n+m)D+n2(n+ m) log( −1)) = O(n(n+m)(D + n log( −1))).
F.3.5 INITIALIZING THE IMPLICIT SGD OPTIMIZER
As was the case in Section F.1, it is important to initialize the optimization procedure at a point where the gradient is relatively small and can be computed without numerical issues. These numerical issues arise when an exponent x>i (w̃k − w̃yi) − ũi + b>i (ak − ayi) 0. To ensure that this does not occur for our initial point, we can solve the following linear problem,13
R = min Aj,C ∑ k∈C∪Sj ‖ak‖1
s.t. x>i (w̃k − w̃yi)− ũi + b>i (ak − ayi) ≤ 0 ∀i ∈ Sj , k ∈ C ∪ Sj (22)
Note that if k = yi then the constraint 0 ≥ x>i (w̃k−w̃yi)−ũi+b>i (ak−ayi) = −ũi is automatically fulfilled since ũi ≥ 0. Also observed that setting ak = −V >j w̃k satisfies all of the constraints, and so
R ≤ ∑
k∈C∪Sj
‖V >j w̃k‖1 ≤ (n+m) max k∈C∪Sj ‖V >j w̃k‖1.
We can use the solution to (22) to gives us an upper bound on (21). Consider the optimal value A
(R) j,C of the linear program in (22) with the value of the minimum being R. Since A (R) j,C satisfies
the constrain in (22) we have zijC(A (R) j,C ) ≤ Kηp −1 j exp(ηp −1 j ). Since P (z) is a monotonically increasing function that is non-negative for z ≥ 0 we also have (1 + P (zijC(A(R)j,C )))2 ≥ (1 + P (Kηp−1j exp(ηp −1 j ))) 2. Turning to the norms, we can use the fact that ‖a‖2 ≤ ‖a‖1 for any
13Instead bounding the constraints on the right with 0, we could also have used any small positive number, like 5.
vector a to bound∑ k∈Sj∪C (1 + µβk 2 )‖ak‖22 + µβkw̃>k Vjak
≤ ∑
k∈Sj∪C
(1 + µβk
2 )‖ak‖21 + µβk‖w̃>k Vj‖1‖ak‖1
≤ (
1 + µ · max k∈Sj∪C {βk}/2 ) ∑ k∈Sj∪C ‖ak‖21 + µ max k∈Sj∪C {βk‖w̃>k Vj‖1} ∑ k∈Sj∪C ‖ak‖1
≤ (
1 + µ · max k∈Sj∪C
{βk}/2 ) R2 + µ max
k∈Sj∪C {βk} max k∈Sj∪C {‖w̃>k Vj‖1}R
≤ (
1 + µ · max k∈Sj∪C
{βk}/2 )(
(n+m) max k∈C∪Sj
‖V >j w̃k‖1 )2
+ µ max k∈Sj∪C {βk} max k∈Sj∪C
{‖w̃>k Vj‖1} (
(n+m) max k∈C∪Sj
‖V >j w̃k‖1 )
≤ (1 + µ · max k∈Sj∪C {βk})(n+m)2 max k∈C∪Sj ‖V >j w̃k‖21
≤ (1 + µ · max k∈Sj∪C {βk})(n+m)2 max k∈C∪Sj ‖w̃k‖21.
Putting the bounds together we have that the optimal value of (21) is upper bounded by its value at the solution to (22), which in turn is upper bounded by
n(1 + P (Kηp−1j exp(ηp −1 j ))) 2 + (1 + µ · max k∈Sj∪C {βk})(n+m)2 max k∈C∪Sj ‖w̃k‖21.
This bound is guarantees that our initial iterate will be numerically stable.
G LEARNING RATE PREDICTION AND LOSS
Here we present the results of using different learning rates for each algorithm applied to the Eurlex dataset. In addition to the Implicit SGD, NCE, IS, OVE and U-max algorithms, we also provide results for NCE with n = 1, m = 1, denoted as NCE (1,1). NCE and NCE (1,1) have near-identical performance.

1. What is the reviewer's concern regarding the paper's problem statement?
2. How does the reviewer feel about the proposed algorithms' convergence rate compared to existing strategies?
3. What does the reviewer think about the writing quality in some parts of the text?
4. What is the reviewer's issue with the title of the paper, specifically the use of the word "exact"?

Review
The problem of numerical instability in applying SGD to soft-max minimization is the motivation. It would have been helpful if the author(s) could have made a formal statement.
Since the main contributions are two algorithms for stable SGD it is not clear how one can formally say that they are stable. For this a formal problem statement is necessary. The discussion around eq (7) is helpful but is intuitive and it is difficult to get a formal problem which we can use later to examine the proposed algorithms.
The proposed algorithms are variants of SGD but it is not clear why they should converge faster than existing strategies.
Some parts of the text are badly written; see for example the following line (see the paragraph before Sec 3): "Since the converge of SGD is inversely proportional to the magnitude of its gradients (Lacoste-Julien et al., 2012), we expect the formulation to converge faster.", which could have shed more light on the matter.
The title is also misleading in using the word "exact". If I have understood it correctly, the proposed SGD method solves the optimization problem to an additive error.
In summary, the algorithms are novel variants of SGD, but the associated claims of numerical stability and speed of convergence vis-a-vis existing methods are missing. The choice of the word "exact" is also not clear.
ICLR | Title
Unbiased scalable softmax optimization
Abstract
Recent neural network and language models have begun to rely on softmax distributions with an extremely large number of categories. In this context calculating the softmax normalizing constant is prohibitively expensive. This has spurred a growing literature of efficiently computable but biased estimates of the softmax. In this paper we present the first two unbiased algorithms for maximizing the softmax likelihood whose work per iteration is independent of the number of classes and datapoints (and does not require extra work at the end of each epoch). We compare our unbiased methods’ empirical performance to the state-of-the-art on seven real world datasets, where they comprehensively outperform all competitors.
1 INTRODUCTION
Under the softmax model1 the probability that a random variable y takes on the label ` ∈ {1, ...,K}, is given by
p(y = `|x;W ) = e x>w`∑K
k=1 e x>wk
, (1)
where x ∈ RD is the covariate, wk ∈ RD is the vector of parameters for the k-th class, and W = [w1, w2, ..., wK ] ∈ RD×K is the parameter matrix. Given a dataset of N label-covariate pairs D = {(yi, xi)}Ni=1, the ridge-regularized maximum log-likelihood problem is given by
L(W ) = N∑ i=1 x>i wyi − log( K∑ k=1 ex > i wk)− µ 2 ‖W‖22, (2)
where ‖W‖2 denotes the Frobenius norm. This paper focusses on how to maximize (2) when N,K,D are all large. Having large N,K,D is increasingly common in modern applications such as natural language processing and recommendation systems, where N,K,D can each be on the order of millions or billions (Partalas et al., 2015; Chelba et al., 2013; Bhatia et al.).
A natural approach to maximizing L(W ) with large N,K,D is to use Stochastic Gradient Descent (SGD), sampling a mini-batch of datapoints each iteration. However if K,D are large then the O(KD) cost of calculating the normalizing sum ∑K k=1 e
x>i wk in the stochastic gradients can still be prohibitively expensive. Several approximations that avoid calculating the normalizing sum have been proposed to address this difficulty. These include tree-structured methods (Bengio et al., 2003; Daume III et al., 2016; Grave et al., 2016), sampling methods (Bengio & Senécal, 2008; Mnih & Teh, 2012; Joshi et al., 2017) and self-normalization (Andreas & Klein, 2015). Alternative models such as the spherical family of losses (de Brébisson & Vincent, 2015; Vincent et al., 2015) that do not require normalization have been proposed to sidestep the issue entirely (Martins & Astudillo, 2016). Krishnapuram et al. (2005) avoid calculating the sum using a maximization-majorization approach based on lower-bounding the eigenvalues of the Hessian matrix. All2 of these approximations are computationally tractable for largeN,K,D, but are unsatisfactory in that they are biased and do not converge to the optimal W ∗ = argmaxL(W ).
1Also known as the multinomial logit model. 2The method of Krishnapuram et al. (2005) does converge to the optimal MLE, but has O(ND) runtime
per iteration which is not feasible for large N,D.
Recently Raman et al. (2016) managed to recast (2) as a double-sum overN andK. This formulation is amenable to SGD that samples both a datapoint and class each iteration, reducing the per iteration cost to O(D). The problem is that vanilla SGD when applied to this formulation is unstable, in that the gradients suffer from high variance and are susceptible to computational overflow. Raman et al. (2016) deal with this instability by occasionally calculating the normalizing sum for all datapoints at a cost of O(NKD). Although this achieves stability, its high cost nullifies the benefit of the cheap O(D) per iteration cost.
The goal of this paper is to develop robust SGD algorithms for optimizing double-sum formulations of the softmax likelihood. We develop two such algorithms. The first is a new SGD method called U-max, which is guaranteed to have bounded gradients and converge to the optimal solution of (2) for all sufficiently small learning rates. The second is an implementation of Implicit SGD, a stochastic gradient method that is known to be more stable than vanilla SGD and yet has similar convergence properties (Toulis et al., 2016). We show that the Implicit SGD updates for the doublesum formulation can be efficiently computed and has a bounded step size, guaranteeing its stability.
We compare the performance of U-max and Implicit SGD to the (biased) state-of-the-art methods for maximizing the softmax likelihood which cost O(D) per iteration. Both U-max and Implicit SGD outperform all other methods. Implicit SGD has the best performance with an average log-loss 4.29 times lower than the previous state-of-the-art.
In summary, our contributions in this paper are that we:
1. Provide a simple derivation of the softmax double-sum formulation and identify why vanilla SGD is unstable when applied to this formulation (Section 2).
2. Propose the U-max algorithm to stabilize the SGD updates and prove its convergence (Section 3.1).
3. Derive an efficient Implicit SGD implementation, analyze its runtime and bound its step size (Section 3.2).
4. Conduct experiments showing that both U-max and Implicit SGD outperform the previous state-of-the-art, with Implicit SGD having the best performance (Section 4).
2 CONVEX DOUBLE-SUM FORMULATION
2.1 DERIVATION OF DOUBLE-SUM
In order to have an SGD method that samples both datapoints and classes each iteration, we need to represent (2) as a double-sum over datapoints and classes. We begin by rewriting (2) in a more convenient form,
L(W ) = N∑ i=1 − log(1 + ∑ k 6=yi ex > i (wk−wyi ))− µ 2 ‖W‖22. (3)
The key to converting (3) into its double-sum representation is to express the negative logarithm using its convex conjugate:
− log(a) = max v<0 {av − (− log(−v)− 1)} = max u {−u− exp(−u)a+ 1} (4)
where u = − log(−v) and the optimal value of u is u∗(a) = log(a). Applying (4) to each of the logarithmic terms in (3) yields
L(W ) = N∑ i=1 max ui∈R {−ui − e−ui(1 + ∑ k 6=yi ex > i (wk−wyi )) + 1} − µ 2 ‖W‖22
= −min u≥0 {f(u,W )}+N,
where
f(u,W ) = N∑ i=1 ∑ k 6=yi ui + e −ui K − 1 + ex > i (wk−wyi )−ui + µ 2 ‖W‖22 (5)
is our double-sum representation that we seek to minimize and the optimal solution for ui is u∗i (W ) = log(1 + ∑ k 6=yi e
x>i (wk−wyi )) ≥ 0. Clearly f is a jointly convex function in u and W . In Appendix A we prove that the optimal value of u and W is contained in a compact convex set and that f is strongly convex within this set. Thus performing projected-SGD on f is guaranteed to converge to a unique optimum with a convergence rate of O(1/T ) where T is the number of iterations (Lacoste-Julien et al., 2012).
2.2 INSTABILITY OF VANILLA SGD
The challenge in optimizing f using SGD is that it can have problematically large magnitude gradients. Observe that f = Eik[fik] where i ∼ unif({1, ..., N}), k ∼ unif({1, ...,K} − {yi}) and
fik(u,W ) = N ( ui + e −ui + (K − 1)ex > i (wk−wyi )−ui) ) + µ
2 (βyi‖wyi‖22 + βk‖wk‖22), (6)
where βj = Nnj+(N−nj)(K−1) is the inverse of the probability of class j being sampled either through i or k, and nj = |{i : yi = j, i = 1, ..., N}|. The corresponding stochastic gradient is:
∇wkfik(u,W ) = N(K − 1)ex > i (wk−wyi )−uixi + µβkwk
∇wyi fik(u,W ) = −N(K − 1)e x>i (wk−wyi )−uixi + µβyiwyi
∇wj /∈{k,yi}fik(u,W ) = 0
∇uifik(u,W ) = −N(K − 1)ex > i (wk−wyi )−ui +N(1− e−ui) (7)
If ui equals its optimal value u∗i (W ) = log(1+ ∑ k 6=yi e x>i (wk−wyi )) then ex > i (wk−wyi )−ui ≤ 1 and the magnitude of the N(K − 1) terms in the stochastic gradient are bounded by N(K − 1)‖xi‖2. However if ui x>i (wk −wyi), then ex > i (wk−wyi )−ui 1 and the magnitude of the gradients can become extremely large.
Extremely large gradients lead to two major problems: (a) the gradients may computationally overflow floating-point precision and cause the algorithm to crash, (b) they result in the stochastic gradient having high variance, which leads to slow convergence3. In Section 4 we show that these problems occur in practice and make vanilla SGD both an unreliable and inefficient method4.
The sampled softmax optimizers in the literature (Bengio & Senécal, 2008; Mnih & Teh, 2012; Joshi et al., 2017) do not have the issue of large magnitude gradients. Their gradients are bounded by N(K−1)‖xi‖2 due to their approximations to u∗i (W ) always being greater than x>i (wk−wyi). For example, in one-vs-each (Titsias, 2016), u∗i (W ) is approximated by log(1 + e
x>i (wk−wyi )) > x>i (wk−wyi). However, as they only approximate u∗i (W ) they cannot converge to the optimalW ∗. The goal of this paper is to design reliable and efficient SGD algorithms for optimizing the doublesum formulation f(u,W ) in (5). We propose two such methods: U-max (Section 3.1) and an implementation of Implicit SGD (Section 3.2). But before we introduce these methods we should establish that f is a good choice for the double-sum formulation.
2.3 CHOICE OF DOUBLE-SUM FORMULATION
The double-sum in (5) is different to that of Raman et al. (2016). Their formulation can be derived by applying the convex conjugate substitution to (2) instead of (3). The resulting equations are L(W ) = minū { 1 N ∑N i=1 1 K−1 ∑ k 6=yi f̄ik(ū,W ) } +N where
f̄ik(ū,W ) = N ( ūi − x>i wyi + ex > i wyi−ūi + (K − 1)ex > i wk−ūi ) + µ
2 (βyi‖wyi‖22 + βk‖wk‖22)
(8) 3The convergence rate of SGD is inversely proportional to the second moment of its gradients (LacosteJulien et al., 2012). 4The same problems arise if we approach optimizing (3) via stochastic composition optimization (Wang et al., 2016). As is shown in Appendix B, stochastic composition optimization yields near-identical expressions for the stochastic gradients in (7) and has the same stability issues.
and the optimal solution for ūi is ū∗i (W ∗) = log( ∑K k=1 e x>i w ∗ k).
Although both double-sum formulations can be used as a basis for SGD, our formulation tends to have smaller magnitude stochastic gradients and hence faster convergence. To see this, note that typically x>i wyi = argmaxk{x>i wk} and so the ūi, x>i wyi and ex > i wyi−ūi terms in (8) are of the greatest magnitude. Although at optimality these terms should roughly cancel, this will not be the case during the early stages of optimization, leading to stochastic gradients of large magnitude. In contrast the function fik in (6) only has x>i wyi appearing as a negative exponent, and so if x > i wyi is large then the magnitude of the stochastic gradients will be small. In Section 4 we present numerical results confirming that our double-sum formulation leads to faster convergence.
3 STABLE SGD METHODS
3.1 U-MAX METHOD
As explained in Section 2.2, vanilla SGD has large gradients when ui x>i (wk − wyi). This can only occur when ui is less than its optimum value for the current W , since u∗i (W ) = log(1 +∑ j 6=yi e x>i (wk−wyi )) ≥ x>i (wk − wyi). A simple remedy is to set ui = log(1 + ex > i (wk−wyi )) whenever ui x>i (wk − wyi). Since log(1 + ex > i (wk−wyi )) > x>i (wk − wyi) this guarantees that ui > x > i (wk − wyi) and so the gradients are bounded. It also brings ui closer5 to its optimal value for the current W and thereby decreases the the objective f(u,W ).
This is exactly the mechanism behind the U-max algorithm — see Algorithm 1 in Appendix C for its pseudocode. U-max is the same as vanilla SGD except for two modifications: (a) ui is set equal to log(1 + ex > i (wk−wyi )) whenever ui ≤ log(1 + ex > i (wk−wyi )) − δ for some threshold δ > 0, (b) ui is projected onto [0, Bu], and W onto {W : ‖W‖2 ≤ BW }, where Bu and BW are set so that the optimal u∗i ∈ [0, Bu] and the optimal W ∗ satisfies ‖W ∗‖2 ≤ BW . See Appendix A for more details on how to set Bu and BW .
Theorem 1. Let Bf = max‖W‖22≤B2W ,0≤u≤Bu, maxik ‖∇fik(u,W )‖2. Suppose a learning rate ηt ≤ δ2/(4B2f ), then U-max with threshold δ converges to the optimum of (2), and the rate is at least as fast as SGD with same learning rate, in expectation.
Proof. The proof is provided in Appendix D.
U-max directly resolves the problem of extremely large gradients. Modification (a) ensures that δ ≥ x>i (wk − wyi) − ui (otherwise ui would be increased to log(1 + ex > i (wk−wyi ))) and so the magnitude of the U-max gradients are bounded above by N(K − 1)eδ‖xi‖2. In U-max there is a trade-off between the gradient magnitude and learning rate that is controlled by δ. For Theorem 1 to apply we require that the learning rate ηt ≤ δ2/(4B2f ). A small δ yields small magnitude gradients, which makes convergence fast, but necessitates a small ηt, which makes convergence slow.
3.2 IMPLICIT SGD
Another method that solves the large gradient problem is Implicit SGD6 (Bertsekas, 2011; Toulis et al., 2016). Implicit SGD uses the update equation
θ(t+1) = θ(t) − ηt∇f(θ(t+1), ξt), (9)
where θ(t) is the value of the tth iterate, f is the function we seek to minimize and ξt is a random variable controlling the stochastic gradient such that ∇f(θ) = Eξt [∇f(θ, ξt)]. The update (9) differs from vanilla SGD in that θ(t+1) appears on both the left and right side of the equation,
5Since ui < x>i (wk − wyi) < log(1 + ex > i (wk−wyi )) < log(1 + ∑ j 6=yi e
x>i (wk−wyi )) = u∗i (W ). 6Also known to as an “incremental proximal algorithm” (Bertsekas, 2011).
whereas in vanilla SGD it appears only on the left side. In our case θ = (u,W ) and ξt = (it, kt) with∇f(θ(t+1), ξt) = ∇fit,kt(u(t+1),W (t+1)). Although Implicit SGD has similar convergence rates to vanilla SGD, it has other properties that can make it preferable over vanilla SGD. It is known to be more robust to the learning rate (Toulis et al., 2016), which important since a good value for the learning rate is never known a priori. Another property, which is of particular interest to our problem, is that it has smaller step sizes.
Proposition 1. Consider applying Implicit SGD to optimizing f(θ) = Eξ[f(θ, ξ)] where f(θ, ξ) is m-strongly convex for all ξ. Then
‖∇f(θ(t+1), ξt)‖2 ≤ ‖∇f(θ(t), ξt)‖2 −m‖θ(t+1) − θ(t)‖2
and so the Implicit SGD step size is smaller than that of vanilla SGD.
Proof. The proof is provided in Appendix E.
The bound in Proposition 1 can be tightened for our particular problem. Unlike vanilla SGD whose step size magnitude is exponential in x>i (wk−wyi)−ui, as shown in (7), for Implicit SGD the step size is asymptotically linear in x>i (wk − wyi) − ui. This effectively guarantees that Implicit SGD cannot suffer from computational overflow.
Proposition 2. Consider the Implicit SGD algorithm where in each iteration only one datapoint i and one class k 6= yi is sampled and there is no ridge regularization. The magnitude of its step size in w is O(x>i (wk − wyi)− ui).
Proof. The proof is provided in Appendix F.2.
The difficulty in applying Implicit SGD is that in each iteration one has to compute a solution to (9). The tractability of this procedure is problem dependent. We show that computing a solution to (9) is indeed tractable for the problem considered in this paper. The details of these mechanisms are laid out in full in Appendix F.
Proposition 3. Consider the Implicit SGD algorithm where in each iteration n datapoints and m classes are sampled. Then the Implicit SGD update θ(t+1) can be computed to within accuracy in runtime O(n(n+m)(D + n log( −1))).
Proof. The proof is provided in Appendix F.3.
In Proposition 3 the log( −1) factor comes from applying a first order method to solve the strongly convex Implicit SGD update equation. It may be the case that performing this optimization is more expensive than computing the x>i wk inner products, and so each iteration of Implicit SGD may be significantly slower than that of vanilla SGD or U-max. However, in the special case of n = m = 1 we can use the bisection method to give an explicit upper bound on the optimization cost.
Proposition 4. Consider the Implicit SGD algorithm with learning rate η where in each iteration only one datapoint i and one class k 6= yi is sampled and there is no ridge regularization. Then the Implicit SGD iterate θ(t+1) can be computed to within accuracy with only two D-dimensional vector inner products and at most log2(
−1)+log2(|x>i (wk−wyi)−ui|+2ηN‖xi‖22 +log(K−1)) bisection method function evaluations.
Proof. The proof is provided in Appendix F.1
For any reasonably large dimensionD, the cost of the twoD-dimensional vector inner products will outweigh the cost of the bisection, and Implicit SGD will have roughly the same speed per iteration as vanilla SGD or U-max.
In summary, Implicit SGD is robust to the learning rate, does not have overflow issues and its updates can be computed in roughly the same time as vanilla SGD.
4 EXPERIMENTS
Two sets of experiments were conducted to assess the performance of the proposed methods. The first compares U-max and Implicit SGD to the state-of-the-art over seven real-world datasets. The second investigates the difference in performance between the two double-sum formulations discussed in Section 2.3. We begin by specifying the experimental setup and then move on to the results.
4.1 EXPERIMENTAL SETUP
Data. We used the MNIST, Bibtex, Delicious, Eurlex, AmazonCat-13K, Wiki10, and Wiki-small datasets7, the properties of which are summarized in Table 1. Most of the datasets are multi-label and, as is standard practice (Titsias, 2016), we took the first label as being the true label and discarded the remaining labels. To make the computation more manageable, we truncated the number of features to be at most 10,000 and the training and test size to be at most 100,000. If, as a result of the dimension truncation, a datapoint had no non-zero features then it was discarded. The features of each dataset were normalized to have unit L2 norm. All of the datasets were pre-separated into training and test sets. We only focus on the performance on the algorithms on the training set, as the goal in this paper is to investigate how best to optimize the softmax likelihood, which is given over the training set.
Algorithms. We compared our algorithms to the state-of-the-art methods for optimizing the softmax which have runtime O(D) per iteration8. The competitors include Noise Contrastive Estimation (NCE) (Mnih & Teh, 2012), Importance Sampling (IS) (Bengio & Senécal, 2008) and One-Vs-Each (OVE) (Titsias, 2016). Note that these methods are all biased and will not converge to the optimal softmax MLE, but something close to it. For these algorithms we set n = 100,m = 5, which are standard settings9. For Implicit SGD we chose to implement the version in Proposition 4 which has n = 1,m = 1. Likewise for U-max we set n = 1,m = 1 and the threshold parameter δ = 1. The ridge regularization parameter µ was set to zero for all algorithms.
Epochs and losses. Each algorithm is run for 50 epochs on each dataset. The learning rate is decreased by a factor of 0.9 each epoch. Both the prediction error and log-loss (2) are recorded at the end of 10 evenly spaced epochs over the 50 epochs.
Learning rate. The magnitude of the gradient differs in each algorithm, due to either under- or overestimating the log-sum derivative from (2). To set a reasonable learning rate for each algorithm on
7All of the datasets were downloaded from http://manikvarma.org/downloads/XC/XMLRepository.html, except Wiki-small which was obtained from http://lshtc.iit.demokritos.gr/.
8Raman et al. (2016) have runtime O(NKD) per epoch, which is equivalent to O(KD) per iteration. This is a factor of K slower than the methods we compare against.
9We also experimented with setting n = 1, m = 1 in these methods and there was virtually no difference except that the runtime was slower. For example, in Appendix G we plot the performance of NCE with n = 1, m = 1 and n = 100, m = 5 applied to the Eurlex dataset for different learning rates and there is very little difference between the two.
each dataset, we ran them on 10% of the training data with initial learning rates $\eta = 10^{0, \pm 1, \pm 2, \pm 3}$. The learning rate with the best performance after 50 epochs is then used when the algorithm is applied to the full dataset. The tuned learning rates are presented in Table 2. Note that vanilla SGD requires a very small learning rate, otherwise it suffered from overflow.
4.2 RESULTS
Comparison to state-of-the-art. Plots of the performance of the algorithms on each dataset are displayed in Figure 1 with the relative performance compared to Implicit SGD given in Table 3. The Implicit SGD method has the best performance on virtually all datasets. Not only does it converge faster in the first few epochs, it also converges to the optimal MLE (unlike the biased methods that prematurely plateau). On average after 50 epochs, Implicit SGD's log-loss is a factor of 4.29 lower than the previous state-of-the-art. The U-max algorithm also outperforms the previous state-of-the-art on most datasets. U-max performs better than Implicit SGD on AmazonCat, although in general Implicit SGD has superior performance. Vanilla SGD's performance is better than the previous state-of-the-art but worse than U-max and Implicit SGD. The difference in performance between vanilla SGD and U-max can largely be explained by vanilla SGD requiring a smaller learning rate to avoid computational overflow.
The sensitivity of each method to the initial learning rate can be seen in Appendix G, where the results of running each method on the Eurlex dataset with learning rates $\eta = 10^{0, \pm 1, \pm 2, \pm 3}$ are presented. The results are consistent with those in Figure 1, with Implicit SGD having the best performance for most learning rate settings. For learning rates $\eta = 10^{3}, 10^{4}$ the U-max log-loss is extremely large. This can be explained by Theorem 1, which does not guarantee convergence for U-max if the learning rate is too high.
Comparison of double-sum formulations. Figure 2 illustrates the performance on the Eurlex dataset of U-max using the proposed double-sum in (6) compared to U-max using the double-sum of Raman et al. (2016) in (8). The proposed double-sum clearly outperforms for all10 learning rates $\eta = 10^{0, \pm 1, \pm 2, -3, -4}$, with its 50th-epoch log-loss being 3.08 times lower on average. This supports the argument from Section 2.3 that SGD methods applied to the proposed double-sum have smaller magnitude gradients and converge faster.
5 CONCLUSION
In this paper we have presented the U-max and Implicit SGD algorithms for optimizing the softmax likelihood. These are the first algorithms that require only O(D) computation per iteration (without extra work at the end of each epoch) and that converge to the optimal softmax MLE. Implicit SGD can be efficiently implemented and clearly out-performs the previous state-of-the-art on seven real world datasets. The result is a new method that enables optimizing the softmax for an extremely large number of samples and classes.
So far Implicit SGD has only been applied to the simple softmax, but could also be applied to any neural network where the final layer is the softmax. Applying Implicit SGD to word2vec type models, which can be viewed as softmaxes where both x and w are parameters to be fit, might be particularly fruitful.
10The learning rates $\eta = 10^{3}, 10^{4}$ are not displayed in Figure 2 for visualization purposes. They had similar behavior to $\eta = 10^{2}$.
A PROOF OF VARIABLE BOUNDS AND STRONG CONVEXITY
We first establish that the optimal values of $u$ and $W$ are bounded. Next, we show that within these bounds the objective is strongly convex and its gradients are bounded.
Lemma 1 ((Raman et al., 2016)). The optimal value of $W$ is bounded as $\|W^*\|_2^2 \le B_W^2$ where $B_W^2 = \frac{2}{\mu}N\log(K)$.
Proof.
$-N\log(K) = L(0) \le L(W^*) \le -\frac{\mu}{2}\|W^*\|_2^2$
Rearranging gives the desired result.
Lemma 2. The optimal value of $u_i$ is bounded as $u_i^* \le B_u$ where $B_u = \log(1 + (K-1)e^{2B_xB_W})$ and $B_x = \max_i\{\|x_i\|_2\}$.
Proof.
$u_i^* = \log\Big(1 + \sum_{k\neq y_i} e^{x_i^\top(w_k - w_{y_i})}\Big) \le \log\Big(1 + \sum_{k\neq y_i} e^{\|x_i\|_2(\|w_k\|_2 + \|w_{y_i}\|_2)}\Big) \le \log\Big(1 + \sum_{k\neq y_i} e^{2B_xB_W}\Big) = \log\big(1 + (K-1)e^{2B_xB_W}\big)$
Lemma 3. If $\|W\|_2^2 \le B_W^2$ and $u_i \le B_u$ then $f(u,W)$ is strongly convex with convexity constant greater than or equal to $\min\{\exp(-B_u), \mu\}$.
Proof. Let us rewrite f as
$f(u,W) = \sum_{i=1}^N \Big[u_i + e^{-u_i} + \sum_{k\neq y_i} e^{x_i^\top(w_k - w_{y_i}) - u_i}\Big] + \frac{\mu}{2}\|W\|_2^2 = \sum_{i=1}^N \Big[a_i^\top\theta + e^{-u_i} + \sum_{k\neq y_i} e^{b_{ik}^\top\theta}\Big] + \frac{\mu}{2}\|W\|_2^2,$
where $\theta = (u^\top, w_1^\top, \ldots, w_K^\top)^\top \in \mathbb{R}^{N+KD}$ with $a_i$ and $b_{ik}$ being appropriately defined. The Hessian of $f$ is
$\nabla^2 f(\theta) = \sum_{i=1}^N \Big[e^{-u_i} e_i e_i^\top + \sum_{k\neq y_i} e^{b_{ik}^\top\theta} b_{ik} b_{ik}^\top\Big] + \mu\cdot\mathrm{diag}\{0_N, 1_{KD}\}$
where $e_i$ is the $i$th canonical basis vector, $0_N$ is an $N$-dimensional vector of zeros and $1_{KD}$ is a $KD$-dimensional vector of ones. It follows that
$\nabla^2 f(\theta) \succeq I\cdot\min\Big\{\min_{0\le u_i\le B_u}\{e^{-u_i}\}, \mu\Big\} = I\cdot\min\{\exp(-B_u), \mu\} \succ 0.$
Lemma 4. If $\|W\|_2^2 \le B_W^2$ and $u_i \le B_u$ then the 2-norm of both the gradient of $f$ and each stochastic gradient $f_{ik}$ is bounded by
$B_f = N\max\{1, e^{B_u} - 1\} + 2\big(Ne^{B_u}B_x + \mu\max_k\{\beta_k\}B_W\big).$
Proof. By Jensen's inequality,
$\max_{\|W\|_2^2\le B_W^2,\,0\le u\le B_u}\|\nabla f(u,W)\|_2 = \max_{\|W\|_2^2\le B_W^2,\,0\le u\le B_u}\|\nabla\mathbb{E}_{ik}f_{ik}(u,W)\|_2 \le \max_{\|W\|_2^2\le B_W^2,\,0\le u\le B_u}\mathbb{E}_{ik}\|\nabla f_{ik}(u,W)\|_2 \le \max_{\|W\|_2^2\le B_W^2,\,0\le u\le B_u}\max_{ik}\|\nabla f_{ik}(u,W)\|_2.$
Using the results from Lemmas 1 and 2 and the definition of $f_{ik}$ from (6),
$\|\nabla_{u_i}f_{ik}(u,W)\|_2 = \|N\big(1 - e^{-u_i} - (K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i}\big)\|_2 = N\big|1 - e^{-u_i}\big(1 + (K-1)e^{x_i^\top(w_k - w_{y_i})}\big)\big| \le N\max\{1, (1 + (K-1)e^{\|x_i\|_2(\|w_k\|_2 + \|w_{y_i}\|_2)}) - 1\} \le N\max\{1, e^{B_u} - 1\}$
and for $j$ indexing either the sampled class $k \neq y_i$ or the true label $y_i$,
$\|\nabla_{w_j}f_{ik}(u,W)\|_2 = \|\pm N(K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i}x_i + \mu\beta_j w_j\|_2 \le N(K-1)e^{\|x_i\|_2(\|w_k\|_2 + \|w_{y_i}\|_2)}\|x_i\|_2 + \mu\beta_j\|w_j\|_2 \le Ne^{B_u}B_x + \mu\max_k\{\beta_k\}B_W.$
Letting
$B_f = N\max\{1, e^{B_u} - 1\} + 2\big(Ne^{B_u}B_x + \mu\max_k\{\beta_k\}B_W\big)$
we have $\|\nabla f_{ik}(u,W)\|_2 \le \|\nabla_{u_i}f_{ik}(u,W)\|_2 + \|\nabla_{w_k}f_{ik}(u,W)\|_2 + \|\nabla_{w_{y_i}}f_{ik}(u,W)\|_2 \le B_f$. In conclusion:
$\max_{\|W\|_2^2\le B_W^2,\,0\le u\le B_u}\|\nabla f(u,W)\|_2 \le \max_{\|W\|_2^2\le B_W^2,\,u_i\le B_u}\max_{ik}\|\nabla f_{ik}(u,W)\|_2 \le B_f.$
B STOCHASTIC COMPOSITION OPTIMIZATION
We can write the equation for L(W ) from (3) as (where we have set µ = 0 for notational simplicity),
$L(W) = -\sum_{i=1}^N \log\Big(1 + \sum_{k\neq y_i} e^{x_i^\top(w_k - w_{y_i})}\Big) = \mathbb{E}_i[h_i(\mathbb{E}_k[g_k(W)])]$
where $i \sim \mathrm{unif}(\{1,\ldots,N\})$, $k \sim \mathrm{unif}(\{1,\ldots,K\})$, $h_i(v) \in \mathbb{R}$, $g_k(W) \in \mathbb{R}^N$ and
$h_i(v) = -N\log(1 + e_i^\top v), \qquad [g_k(W)]_i = \begin{cases} K e^{x_i^\top(w_k - w_{y_i})} & \text{if } k \neq y_i \\ 0 & \text{otherwise}. \end{cases}$
Here $e_i^\top v = v_i \in \mathbb{R}$ is a variable that is explicitly kept track of, with $v_i \approx \mathbb{E}_k[g_k(W)]_i = \sum_{k\neq y_i} e^{x_i^\top(w_k - w_{y_i})}$ (with exact equality in the limit as $t \to \infty$). Clearly $v_i$ in stochastic composition optimization has a similar role as $u_i$ has in our formulation for $f$ in (5).
If i, k are sampled with k 6= yi in stochastic composition optimization then the updates are of the form (Wang et al., 2016)
$w_{y_i} = w_{y_i} + \eta_t N K\,\frac{e^{x_i^\top(z_k - z_{y_i})}}{1 + v_i}\,x_i, \qquad w_k = w_k - \eta_t N K\,\frac{e^{x_i^\top(z_k - z_{y_i})}}{1 + v_i}\,x_i,$
where $z_k$ is a smoothed value of $w_k$. These updates have the same numerical instability issues as vanilla SGD on $f$ in (5): it is possible that $\frac{e^{x_i^\top(z_k - z_{y_i})}}{1 + v_i} \gg 1$ where ideally we should have $0 \le \frac{e^{x_i^\top(z_k - z_{y_i})}}{1 + v_i} \le 1$.
C U-MAX PSEUDOCODE
Algorithm 1: U-max
Input: Data $D = \{(y_i, x_i) : y_i \in \{1,\ldots,K\}, x_i \in \mathbb{R}^d\}_{i=1}^N$, number of classes $K$, number of datapoints $N$, learning rate $\eta_t$, class sampling probability $\beta_k = N/(n_k + (N - n_k)(K-1)^{-1})$, threshold parameter $\delta > 0$, bound $B_W$ on $W$ such that $\|W\|_2 \le B_W$ and bound $B_u$ on $u$ such that $u_i \le B_u$ for $i = 1,\ldots,N$
Output: $W$
Initialize:
    for $k = 1$ to $K$: $w_k \leftarrow 0$
    for $i = 1$ to $N$: $u_i \leftarrow \log(K)$
Run SGD: for $t = 1$ to $T$:
    Sample indices: $i \sim \mathrm{unif}(\{1,\ldots,N\})$, $k \sim \mathrm{unif}(\{1,\ldots,K\} - \{y_i\})$
    Increase $u_i$: if $u_i < \log(1 + e^{x_i^\top(w_k - w_{y_i})}) - \delta$ then $u_i \leftarrow \log(1 + e^{x_i^\top(w_k - w_{y_i})})$
    SGD step:
        $w_k \leftarrow w_k - \eta_t[N(K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i}x_i + \mu\beta_k w_k]$
        $w_{y_i} \leftarrow w_{y_i} - \eta_t[-N(K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i}x_i + \mu\beta_{y_i} w_{y_i}]$
        $u_i \leftarrow u_i - \eta_t[N(1 - e^{-u_i} - (K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i})]$
    Projection:
        $w_k \leftarrow w_k \cdot \min\{1, B_W/\|w_k\|_2\}$
        $w_{y_i} \leftarrow w_{y_i} \cdot \min\{1, B_W/\|w_{y_i}\|_2\}$
        $u_i \leftarrow \max\{0, \min\{B_u, u_i\}\}$
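A minimal NumPy sketch of one pass over the data with the listing above may help make it concrete. It assumes dense arrays, $n = m = 1$ sampling and illustrative variable names; it is not the authors' reference implementation.

import numpy as np

def u_max_epoch(X, y, W, u, eta, mu, beta, delta, B_W, B_u, rng):
    # One pass of U-max updates (Algorithm 1) with a single datapoint and a
    # single sampled class per iteration.  X: (N, D), y: (N,), W: (K, D),
    # u: (N,), beta: (K,) class sampling weights.  Names are illustrative.
    N, K = X.shape[0], W.shape[0]
    for _ in range(N):
        i = rng.integers(N)
        k = rng.integers(K - 1)
        if k >= y[i]:
            k += 1                                  # k drawn uniformly from {1..K} minus y_i
        margin = X[i] @ (W[k] - W[y[i]])
        # Increase u_i if it has fallen more than delta below log(1 + e^margin)
        if u[i] < np.logaddexp(0.0, margin) - delta:
            u[i] = np.logaddexp(0.0, margin)
        # SGD step; after the increase, margin - u_i <= delta, so the exponent
        # below cannot overflow (this is the point of U-max)
        c = N * (K - 1) * np.exp(margin - u[i])
        W[k]    -= eta * ( c * X[i] + mu * beta[k] * W[k])
        W[y[i]] -= eta * (-c * X[i] + mu * beta[y[i]] * W[y[i]])
        u[i]    -= eta * N * (1.0 - np.exp(-u[i]) - (K - 1) * np.exp(margin - u[i]))
        # Projection back onto the feasible set
        for j in (k, y[i]):
            W[j] *= min(1.0, B_W / (np.linalg.norm(W[j]) + 1e-12))
        u[i] = float(np.clip(u[i], 0.0, B_u))
    return W, u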
D PROOF OF CONVERGENCE OF U-MAX METHOD
In this section we will prove the claim made in Theorem 1, that U-max converges to the softmax optimum. Before proving the theorem, we will need a lemma.
Lemma 5. For any $\delta > 0$, if $u_i \le \log(1 + e^{x_i^\top(w_k - w_{y_i})}) - \delta$ then setting $u_i = \log(1 + e^{x_i^\top(w_k - w_{y_i})})$ decreases $f(u,W)$ by at least $\delta^2/2$.
Proof. As in Lemma 3, let $\theta = (u^\top, w_1^\top, \ldots, w_K^\top)^\top \in \mathbb{R}^{N+KD}$. Then setting $u_i = \log(1 + e^{x_i^\top(w_k - w_{y_i})})$ is equivalent to setting $\theta = \theta + \Delta e_i$ where $e_i$ is the $i$th canonical basis vector and $\Delta = \log(1 + e^{x_i^\top(w_k - w_{y_i})}) - u_i \ge \delta$. By a second order Taylor series expansion,
$f(\theta) - f(\theta + \Delta e_i) \ge -\nabla f(\theta + \Delta e_i)^\top e_i\Delta + \frac{\Delta^2}{2}e_i^\top\nabla^2 f(\theta + \lambda\Delta e_i)e_i \qquad (10)$
for some $\lambda \in [0, 1]$. Since the optimal value of $u_i$ for a given value of $W$ is $u_i^*(W) = \log(1 + \sum_{k\neq y_i} e^{x_i^\top(w_k - w_{y_i})}) \ge \log(1 + e^{x_i^\top(w_k - w_{y_i})})$, we must have $\nabla f(\theta + \Delta e_i)^\top e_i \le 0$. From Lemma 3 we also know that
$e_i^\top\nabla^2 f(\theta + \lambda\Delta e_i)e_i = \exp(-(u_i + \lambda\Delta)) + \sum_{k\neq y_i} e^{x_i^\top(w_k - w_{y_i}) - (u_i + \lambda\Delta)} = \exp(-\lambda\Delta)e^{-u_i}\Big(1 + \sum_{k\neq y_i} e^{x_i^\top(w_k - w_{y_i})}\Big) = \exp(-\lambda\Delta)\exp\big(-(\log(1 + e^{x_i^\top(w_k - w_{y_i})}) - \Delta)\big)\Big(1 + \sum_{k\neq y_i} e^{x_i^\top(w_k - w_{y_i})}\Big) \ge \exp(\Delta - \lambda\Delta) \ge \exp(\Delta - \Delta) = 1.$
Putting in bounds for the gradient and Hessian terms in (10),
$f(\theta) - f(\theta + \Delta e_i) \ge \frac{\Delta^2}{2} \ge \frac{\delta^2}{2}.$
Now we are in a position to prove Theorem 1.
Proof of Theorem 1. Let $\theta^{(t)} = (u^{(t)}, W^{(t)}) \in \Theta$ denote the value of the $t$th iterate. Here $\Theta = \{\theta : \|W\|_2^2 \le B_W^2, u_i \le B_u\}$ is a convex set containing the optimal value of $f(\theta)$.
Let $\pi_i^{(\delta)}(\theta)$ denote the operation of setting $u_i = \log(1 + e^{x_i^\top(w_k - w_{y_i})})$ if $u_i \le \log(1 + e^{x_i^\top(w_k - w_{y_i})}) - \delta$. If indices $i, k$ are sampled for the stochastic gradient and $u_i \le \log(1 + e^{x_i^\top(w_k - w_{y_i})}) - \delta$, then the value of $f$ at the $(t+1)$st iterate is bounded as
$f(\theta^{(t+1)}) = f\big(\pi_i(\theta^{(t)}) - \eta_t\nabla f_{ik}(\pi_i(\theta^{(t)}))\big) \le f(\pi_i(\theta^{(t)})) + \max_{\theta\in\Theta}\|\eta_t\nabla f_{ik}(\pi_i(\theta))\|_2\max_{\theta\in\Theta}\|\nabla f(\theta)\|_2 \le f(\pi_i(\theta^{(t)})) + \eta_tB_f^2 \le f(\theta^{(t)}) - \delta^2/2 + \eta_tB_f^2 \le f(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)})) - \delta^2/2 + 2\eta_tB_f^2 \le f(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)})),$
since $\eta_t \le \delta^2/(4B_f^2)$ by assumption. Alternatively if $u_i \ge \log(1 + e^{x_i^\top(w_k - w_{y_i})}) - \delta$ then
$f(\theta^{(t+1)}) = f\big(\pi_i(\theta^{(t)}) - \eta_t\nabla f_{ik}(\pi_i(\theta^{(t)}))\big) = f(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)})).$
Either way $f(\theta^{(t+1)}) \le f(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)}))$. Taking expectations with respect to $i, k$,
$\mathbb{E}_{ik}[f(\theta^{(t+1)})] \le \mathbb{E}_{ik}[f(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)}))].$
Finally let $P$ denote the projection of $\theta$ onto $\Theta$. Since $\Theta$ is a convex set containing the optimum we have $f(P(\theta)) \le f(\theta)$ for any $\theta$, and so
$\mathbb{E}_{ik}[f(P(\theta^{(t+1)}))] \le \mathbb{E}_{ik}[f(\theta^{(t)} - \eta_t\nabla f_{ik}(\theta^{(t)}))],$
which shows that the rate of convergence in expectation of U-max is at least as fast as that of standard SGD.
E PROOF OF GENERAL IMPLICIT SGD GRADIENT BOUND
Proof of Proposition 1. Let $f(\theta, \xi)$ be $m$-strongly convex for all $\xi$. The vanilla SGD step size is $\eta_t\|\nabla f(\theta^{(t)}, \xi_t)\|_2$ where $\eta_t$ is the learning rate for the $t$th iteration. The Implicit SGD step size is $\eta_t\|\nabla f(\theta^{(t+1)}, \xi_t)\|_2$ where $\theta^{(t+1)}$ satisfies $\theta^{(t+1)} = \theta^{(t)} - \eta_t\nabla f(\theta^{(t+1)}, \xi_t)$. Rearranging, $\nabla f(\theta^{(t+1)}, \xi_t) = (\theta^{(t)} - \theta^{(t+1)})/\eta_t$ and so it must be the case that $\nabla f(\theta^{(t+1)}, \xi_t)^\top(\theta^{(t)} - \theta^{(t+1)}) = \|\nabla f(\theta^{(t+1)}, \xi_t)\|_2\|\theta^{(t)} - \theta^{(t+1)}\|_2$. Our desired result follows:
$\|\nabla f(\theta^{(t)}, \xi_t)\|_2 \ge \frac{\nabla f(\theta^{(t)}, \xi_t)^\top(\theta^{(t)} - \theta^{(t+1)})}{\|\theta^{(t)} - \theta^{(t+1)}\|_2} \ge \frac{\nabla f(\theta^{(t+1)}, \xi_t)^\top(\theta^{(t)} - \theta^{(t+1)}) + m\|\theta^{(t)} - \theta^{(t+1)}\|_2^2}{\|\theta^{(t)} - \theta^{(t+1)}\|_2} = \frac{\|\nabla f(\theta^{(t+1)}, \xi_t)\|_2\|\theta^{(t)} - \theta^{(t+1)}\|_2 + m\|\theta^{(t)} - \theta^{(t+1)}\|_2^2}{\|\theta^{(t)} - \theta^{(t+1)}\|_2} = \|\nabla f(\theta^{(t+1)}, \xi_t)\|_2 + m\|\theta^{(t)} - \theta^{(t+1)}\|_2$
where the first inequality is by Cauchy-Schwarz and the second inequality by strong convexity.
F UPDATE EQUATIONS FOR IMPLICIT SGD
In this section we will derive the updates for Implicit SGD. We will first consider the simplest case where only one datapoint (xi, yi) and a single class is sampled in each iteration with no regularizer. Then we will derive the more complicated update for when there are multiple datapoints and sampled classes with a regularizer.
F.1 SINGLE DATAPOINT, SINGLE CLASS, NO REGULARIZER
Equation (6) for the stochastic gradient for a single datapoint and single class with µ = 0 is
$f_{ik}(u,W) = N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i}\big).$
The Implicit SGD update corresponds to finding the variables optimizing
$\min_{u,W}\big\{2\eta f_{ik}(u,W) + \|u - \tilde{u}\|_2^2 + \|W - \tilde{W}\|_2^2\big\},$
where $\eta$ is the learning rate and the tilde refers to the value of the old iterate (Toulis et al., 2016, Eq. 6). Since $f_{ik}$ is only a function of $u_i, w_k, w_{y_i}$ the optimization reduces to
$\min_{u_i,w_k,w_{y_i}}\big\{2\eta f_{ik}(u_i, w_k, w_{y_i}) + (u_i - \tilde{u}_i)^2 + \|w_{y_i} - \tilde{w}_{y_i}\|_2^2 + \|w_k - \tilde{w}_k\|_2^2\big\} = \min_{u_i,w_k,w_{y_i}}\big\{2\eta N(u_i + e^{-u_i} + (K-1)e^{x_i^\top(w_k - w_{y_i}) - u_i}) + (u_i - \tilde{u}_i)^2 + \|w_{y_i} - \tilde{w}_{y_i}\|_2^2 + \|w_k - \tilde{w}_k\|_2^2\big\}.$
The optimal value of $w_k, w_{y_i}$ must deviate from the old value $\tilde{w}_k, \tilde{w}_{y_i}$ in the direction of $x_i$. Furthermore we can observe that the deviation of $w_k$ must be exactly opposite that of $w_{y_i}$, that is:
$w_{y_i} = \tilde{w}_{y_i} + a\frac{x_i}{2\|x_i\|_2^2}, \qquad w_k = \tilde{w}_k - a\frac{x_i}{2\|x_i\|_2^2} \qquad (11)$
for some $a \ge 0$. The optimization problem reduces to
$\min_{u_i,\,a\ge 0}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})}e^{-a - u_i}\big) + (u_i - \tilde{u}_i)^2 + a^2\frac{1}{2\|x_i\|_2^2}\Big\}. \qquad (12)$
We'll approach this optimization problem by first solving for $a$ as a function of $u_i$ and then optimizing over $u_i$. Once the optimal value of $u_i$ has been found, we can calculate the corresponding optimal value of $a$. Finally, substituting $a$ into (11) will give us our updated value of $W$.
Solving for $a$
We solve for $a$ by setting its derivative equal to zero in (12):
$0 = \partial_a\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})}e^{-a - u_i}\big) + (u_i - \tilde{u}_i)^2 + a^2\frac{1}{2\|x_i\|_2^2}\Big\} = -2\eta N(K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - u_i}e^{-a} + \frac{a}{\|x_i\|_2^2} \iff ae^a = 2\eta N(K-1)\|x_i\|_2^2 e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - u_i}. \qquad (13)$
The solution for $a$ can be written in terms of the principal branch of the Lambert W function $P$,
$a(u_i) = P\big(2\eta N(K-1)\|x_i\|_2^2 e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - u_i}\big) = P\big(e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - u_i + \log(2\eta N(K-1)\|x_i\|_2^2)}\big). \qquad (14)$
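Since SciPy exposes the principal branch of the Lambert W function, $a(u_i)$ in (14) can be evaluated directly. The sketch below is only illustrative; the names are assumptions and no guard against overflow of the exponential is included.

import numpy as np
from scipy.special import lambertw

def a_of_u(u_i, margin, x_sq_norm, eta, N, K):
    # Eq. (14): a(u_i) = P(exp(margin - u_i + log(2*eta*N*(K-1)*||x_i||^2))),
    # with margin = x_i^T (w_k_tilde - w_{y_i}_tilde).
    log_z = margin - u_i + np.log(2.0 * eta * N * (K - 1) * x_sq_norm)
    return float(np.real(lambertw(np.exp(log_z))))

For very large arguments one would switch to the asymptotic expansion $P(e^s) \approx s - \log s$, which is also why the step size is only linear in the exponent (Proposition 2).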
Substituting the solution $a(u_i)$ into (12), we now only need to minimize over $u_i$:
$\min_{u_i}\Big\{2\eta N u_i + 2\eta N e^{-u_i} + 2\eta N(K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})}e^{-a(u_i) - u_i} + (u_i - \tilde{u}_i)^2 + a(u_i)^2\frac{1}{2\|x_i\|_2^2}\Big\} = \min_{u_i}\Big\{2\eta N u_i + 2\eta N e^{-u_i} + a(u_i)\|x_i\|_2^{-2} + (u_i - \tilde{u}_i)^2 + a(u_i)^2\frac{1}{2\|x_i\|_2^2}\Big\} \qquad (15)$
where we used the fact that $e^{-P(z)} = P(z)/z$. The derivative with respect to $u_i$ in (15) is
$\partial_{u_i}\Big\{2\eta N u_i + 2\eta N e^{-u_i} + a(u_i)\|x_i\|_2^{-2} + (u_i - \tilde{u}_i)^2 + a(u_i)^2\frac{1}{2\|x_i\|_2^2}\Big\} = 2\eta N - 2\eta N e^{-u_i} + \partial_{u_i}a(u_i)\|x_i\|_2^{-2} + 2(u_i - \tilde{u}_i) + 2a(u_i)\partial_{u_i}a(u_i)\frac{1}{2\|x_i\|_2^2} = 2\eta N - 2\eta N e^{-u_i} - \frac{a(u_i)}{1 + a(u_i)}\|x_i\|_2^{-2} + 2(u_i - \tilde{u}_i) - \frac{a(u_i)^2}{(1 + a(u_i))\|x_i\|_2^2} \qquad (16)$
where to calculate $\partial_{u_i}a(u_i)$ we used the fact that $\partial_z P(z) = \frac{P(z)}{z(1 + P(z))}$ and so
$\partial_{u_i}a(u_i) = -\frac{a(u_i)}{e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - u_i + \log(2\eta N(K-1)\|x_i\|_2^2)}(1 + a(u_i))}\,e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - u_i + \log(2\eta N(K-1)\|x_i\|_2^2)} = -\frac{a(u_i)}{1 + a(u_i)}.$
Bisection method for ui
We can solve for ui using the bisection method. Below we show how to calculate the initial lower and upper bounds of the bisection interval and prove that the size of the interval is bounded (which ensures fast convergence).
Start by calculating the derivative in (16) at $u_i = \tilde{u}_i$. If the derivative is negative then the optimal $u_i$ is lower bounded by $\tilde{u}_i$. An upper bound is provided by
$u_i = \arg\min_{u_i}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})}e^{-a(u_i) - u_i}\big) + (u_i - \tilde{u}_i)^2 + \frac{a(u_i)^2}{2\|x_i\|_2^2}\Big\} \le \arg\min_{u_i}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})}e^{-u_i}\big) + (u_i - \tilde{u}_i)^2\Big\} \le \arg\min_{u_i}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})}e^{-u_i}\big)\Big\} = \log\big(1 + (K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})}\big).$
In the first inequality we set $a(u_i) = 0$, since by the envelope theorem the gradient of $u_i$ is monotonically increasing in $a$. In the second inequality we used the assumption that $u_i$ is lower bounded by $\tilde{u}_i$. Thus if the derivative in (16) is negative at $u_i = \tilde{u}_i$ then $\tilde{u}_i \le u_i \le \log(1 + (K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})})$. If $(K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})} \le 1$ then the size of the interval must be less than $\log(2)$, since $\tilde{u}_i \ge 0$. Otherwise the gap must be at most $\log(2(K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})}) - \tilde{u}_i = \log(2(K-1)) + x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i$. Either way, the gap is upper bounded by $\log(2(K-1)) + |x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i|$.
Now let us consider if the derivative in (16) is positive at $u_i = \tilde{u}_i$. Then $u_i$ is upper bounded by $\tilde{u}_i$. Denoting $a'$ as the optimal value of $a$, we can lower bound $u_i$ using (12):
$u_i = \arg\min_{u_i}\Big\{2\eta N\big(u_i + e^{-u_i} + (K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})}e^{-a' - u_i}\big) + (u_i - \tilde{u}_i)^2\Big\} \ge \arg\min_{u_i}\Big\{u_i + e^{-u_i} + (K-1)e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})}e^{-a' - u_i}\Big\} = \log\big(1 + (K-1)\exp(x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - a')\big) \ge \log(K-1) + x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - a' \qquad (17)$
where the first inequality comes from dropping the $(u_i - \tilde{u}_i)^2$ term due to the assumption that $u_i < \tilde{u}_i$. Recall (13),
$a'e^{a'} = 2\eta N(K-1)\|x_i\|_2^2 e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - u_i}.$
The solution for a′ is strictly monotonically increasing as a function of the right side of the equation. Thus replacing the right side with an upper bound on its value results in an upper bound on a′. Substituting the bound for ui,
$a' \le \min\{a : ae^a = 2\eta N(K-1)\|x_i\|_2^2 e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - (\log(K-1) + x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - a)}\} = \min\{a : a = 2\eta N\|x_i\|_2^2\} = 2\eta N\|x_i\|_2^2. \qquad (18)$
Substituting this bound for $a'$ into (17) yields
$u_i \ge \log(K-1) + x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - 2\eta N\|x_i\|_2^2.$
Thus if the derivative in (16) is positive at $u_i = \tilde{u}_i$ then $\log(K-1) + x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - 2\eta N\|x_i\|_2^2 \le u_i \le \tilde{u}_i$. The gap between the upper and lower bound is $\tilde{u}_i - x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) + 2\eta N\|x_i\|_2^2 - \log(K-1)$.
In summary, for both cases of the sign of the derivative in (16) at $u_i = \tilde{u}_i$ we are able to calculate a lower and upper bound on the optimal value of $u_i$ such that the gap between the bounds is at most $|\tilde{u}_i - x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})| + 2\eta N\|x_i\|_2^2 + \log(K-1)$. This allows us to perform the bisection method where for $\epsilon > 0$ level accuracy we require only $\log_2(\epsilon^{-1}) + \log_2(|\tilde{u}_i - x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})| + 2\eta N\|x_i\|_2^2 + \log(K-1))$ function evaluations.
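A compact Python sketch of the resulting procedure, with `margin` standing for $x_i^\top(\tilde{w}_k - \tilde{w}_{y_i})$; the bracketing, the tolerance handling and the absence of overflow guards are simplifying assumptions rather than the paper's exact routine.

import numpy as np
from scipy.special import lambertw

def bisect_u(u_tilde, margin, x_sq_norm, eta, N, K, tol=1e-8):
    # Bisection for the minimizer of (15), using the derivative (16).
    def a(u):                                       # Eq. (14)
        return float(np.real(lambertw(np.exp(margin - u + np.log(2 * eta * N * (K - 1) * x_sq_norm)))))
    def deriv(u):                                   # Eq. (16)
        au = a(u)
        return (2 * eta * N * (1 - np.exp(-u)) - au / ((1 + au) * x_sq_norm)
                + 2 * (u - u_tilde) - au ** 2 / ((1 + au) * x_sq_norm))
    if deriv(u_tilde) < 0:                          # bounds from the two cases above
        lo, hi = u_tilde, np.logaddexp(0.0, np.log(K - 1) + margin)
    else:
        lo, hi = np.log(K - 1) + margin - 2 * eta * N * x_sq_norm, u_tilde
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if deriv(mid) < 0 else (lo, mid)
    u_new = 0.5 * (lo + hi)
    return u_new, a(u_new)                          # a(u_new) yields the w update via (11)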
F.2 BOUND ON STEP SIZE
Here we will prove that the step size magnitude of Implicit SGD with a single datapoint and sampled class with respect to $w$ is bounded as $O(x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i)$. We will do so by considering the two cases $u_i' \ge \tilde{u}_i$ and $u_i' < \tilde{u}_i$ separately, where $u_i'$ denotes the optimal value of $u_i$ in the Implicit SGD update and $\tilde{u}_i$ is its value at the previous iterate.
Case: $u_i' \ge \tilde{u}_i$
Let $a'$ denote the optimal value of $a$ in the Implicit SGD update. From (14),
$a' = a(u_i') = P\big(e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - u_i' + \log(2\eta N(K-1)\|x_i\|_2^2)}\big) \le P\big(e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i + \log(2\eta N(K-1)\|x_i\|_2^2)}\big).$
Now using the fact that $P(z) = O(\log(z))$,
$a' = O\big(x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i + \log(2\eta N(K-1)\|x_i\|_2^2)\big) = O\big(x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i\big).$
Case: $u_i' < \tilde{u}_i$
If $u_i' < \tilde{u}_i$ then we can bound $a'$ using (18) as
$a' \le 2\eta N\|x_i\|_2^2.$
Combining cases
Putting together the two cases,
$a' = O\big(\max\{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i,\ 2\eta N\|x_i\|_2^2\}\big) = O\big(x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i\big).$
The actual step size in $w$ is $\pm a\frac{x_i}{2\|x_i\|_2^2}$. Since $a$ is $O(x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i)$, the step size magnitude is also $O(x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i)$.
F.3 MULTIPLE DATAPOINTS, MULTIPLE CLASSES
The Implicit SGD update when there are multiple datapoints, multiple classes, and a regularizer is similar to the single datapoint, single class, no regularizer case described above. However, there are a few significant differences. Firstly, we will require some pre-computation to find a low-dimensional representation of the $x$ values in each mini-batch. Secondly, we will integrate out $u_i$ for each datapoint (not $w_k$). And thirdly, since the dimensionality of the simplified optimization problem is large, we'll require first order or quasi-Newton methods to find the optimal solution.
F.3.1 DEFINING THE MINI-BATCH
The first step is to define our mini-batches of size $n$. We will do this by partitioning the datapoint indices into sets $S_1, \ldots, S_J$ with $S_j = \{j_\ell : \ell = 1, \ldots, n\}$ for $j = 1, \ldots, \lfloor N/n\rfloor$, $S_J = \{J_\ell : \ell = 1, \ldots, N \bmod n\}$, $S_i \cap S_j = \emptyset$ and $\cup_{j=1}^J S_j = \{1, \ldots, N\}$.
Next we define the set of classes $C_j$ which can be sampled for the $j$th mini-batch. The set $C_j$ is defined to be all sets of $m$ distinct classes that are not equal to any of the labels $y$ for points in the mini-batch, that is, $C_j = \{(k_1, \ldots, k_m) : k_i \in \{1, \ldots, K\},\ k_i \neq k_\ell\ \forall \ell \in \{1, \ldots, m\} - \{i\},\ k_i \neq y_\ell\ \forall \ell \in S_j\}$. Now we can write down our objective from (5) in terms of an expectation of functions corresponding to our mini-batches:
$f(u,W) = \mathbb{E}[f_{j,C}(u,W)]$
where $j$ is sampled with probability $p_j = |S_j|/N$ and $C$ is sampled uniformly from $C_j$, and
$f_{j,C}(u,W) = p_j^{-1}\sum_{i\in S_j}\Big[u_i + e^{-u_i} + \sum_{k\in S_j - \{i\}} e^{x_i^\top(w_k - w_{y_i}) - u_i} + \frac{K-n}{m}\sum_{k\in C} e^{x_i^\top(w_k - w_{y_i}) - u_i}\Big] + \frac{\mu}{2}\sum_{k\in C\cup S_j}\beta_k\|w_k\|_2^2.$
The value of the regularizing constant βk is such that E[I[k ∈ C ∪ Sj ]βk] = 1, which requires that
$\beta_k^{-1} = 1 - \frac{1}{J}\sum_{j=1}^J \mathbb{I}[k \notin S_j]\Big(1 - \frac{m}{K - |S_j|}\Big).$
F.3.2 SIMPLIFYING THE IMPLICIT SGD UPDATE EQUATION
The Implicit SGD update corresponds to solving
$\min_{u,W}\big\{2\eta f_{j,C}(u,W) + \|u - \tilde{u}\|_2^2 + \|W - \tilde{W}\|_2^2\big\},$
where $\eta$ is the learning rate and the tilde refers to the value of the old iterate (Toulis et al., 2016, Eq. 6). Since $f_{j,C}$ is only a function of $u_{S_j} = \{u_i : i\in S_j\}$ and $W_{j,C} = \{w_k : k\in S_j\cup C\}$ the optimization reduces to
$\min_{u_{S_j},\,W_{j,C}}\big\{2\eta f_{j,C}(u_{S_j}, W_{j,C}) + \|u_{S_j} - \tilde{u}_{S_j}\|_2^2 + \|W_{j,C} - \tilde{W}_{j,C}\|_2^2\big\}.$
The next step is to analytically minimize the $u_{S_j}$ terms. The optimization problem in (21) decomposes into a sum of separate optimization problems in $u_i$ for $i\in S_j$,
$\min_{u_i}\big\{2\eta p_j^{-1}(u_i + e^{-u_i}d_i) + (u_i - \tilde{u}_i)^2\big\}$
where
$d_i(W_{j,C}) = 1 + \sum_{k\in S_j - \{i\}} e^{x_i^\top(w_k - w_{y_i})} + \frac{K-n}{m}\sum_{k\in C} e^{x_i^\top(w_k - w_{y_i})}.$
Setting the derivative with respect to $u_i$ equal to zero yields the solution
$u_i(W_{j,C}) = \tilde{u}_i - \eta p_j^{-1} + P\big(\eta p_j^{-1}d_i(W_{j,C})\exp(\eta p_j^{-1} - \tilde{u}_i)\big)$
where $P$ is the principal branch of the Lambert W function. Substituting this solution into our optimization problem and simplifying yields
$\min_{W_{j,C}}\Big\{\sum_{i\in S_j}\big(1 + P(\eta p_j^{-1}d_i(W_{j,C})\exp(\eta p_j^{-1} - \tilde{u}_i))\big)^2 + \|W_{j,C} - \tilde{W}_{j,C}\|_2^2 + \frac{\mu}{2}\sum_{k\in C\cup S_j}\beta_k\|w_k\|_2^2\Big\}, \qquad (19)$
where we have used the identity $e^{-P(z)} = P(z)/z$. We can decompose (19) into two parts by splitting $W_{j,C} = W_{j,C}^{\parallel} + W_{j,C}^{\perp}$, its components parallel and perpendicular to the span of $\{x_i : i\in S_j\}$ respectively. Since the leading term in (19) only depends on $W_{j,C}^{\parallel}$, the two resulting sub-problems are
$\min_{W_{j,C}^{\parallel}}\Big\{\sum_{i\in S_j}\big(1 + P(\eta p_j^{-1}d_i(W_{j,C}^{\parallel})\exp(\eta p_j^{-1} - \tilde{u}_i))\big)^2 + \|W_{j,C}^{\parallel} - \tilde{W}_{j,C}^{\parallel}\|_2^2 + \frac{\mu}{2}\sum_{k\in C\cup S_j}\beta_k\|w_k^{\parallel}\|_2^2\Big\},$
$\min_{W_{j,C}^{\perp}}\Big\{\|W_{j,C}^{\perp} - \tilde{W}_{j,C}^{\perp}\|_2^2 + \frac{\mu}{2}\sum_{k\in C\cup S_j}\beta_k\|w_k^{\perp}\|_2^2\Big\} \qquad (20)$
Let us focus on the perpendicular component first. Simple calculus yields the optimal value $w_k^{\perp} = \frac{1}{1 + \mu\beta_k/2}\tilde{w}_k^{\perp}$ for $k\in S_j\cup C$.
Moving on to the parallel component, let the span of $\{x_i : i\in S_j\}$ have an orthonormal basis11 $V_j = (v_{j1}, \ldots, v_{jn})\in\mathbb{R}^{D\times n}$ with $x_i = V_j b_i$ for some $b_i\in\mathbb{R}^n$. With this basis we can write $w_k^{\parallel} = \tilde{w}_k^{\parallel} + V_j a_k$ for $a_k\in\mathbb{R}^n$, which reduces the parallel component optimization problem to12
$\min_{A_{j,C}}\Big\{\sum_{i\in S_j}\big(1 + P(z_{ijC}(A_{j,C}))\big)^2 + \sum_{k\in S_j\cup C}\Big(1 + \frac{\mu\beta_k}{2}\Big)\|a_k\|_2^2 + \mu\beta_k\tilde{w}_k^\top V_j a_k\Big\}, \qquad (21)$
where $A_{j,C} = \{a_k : k\in S_j\cup C\}\in\mathbb{R}^{(n+m)\times n}$ and
$z_{ijC}(A_{j,C}) = \eta p_j^{-1}\exp(\eta p_j^{-1})\Big(\exp(-\tilde{u}_i) + \sum_{k\in S_j - \{i\}} e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i}e^{b_i^\top(a_k - a_{y_i})} + \frac{K-n}{m}\sum_{k\in C} e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i}e^{b_i^\top(a_k - a_{y_i})}\Big).$
11We have assumed here that $\dim(\mathrm{span}(\{x_i : i\in S_j\})) = n$, which will most often be the case. If the dimension of the span is lower than $n$ then let $V_j$ be of dimension $D \times \dim(\mathrm{span}(\{x_i : i\in S_j\}))$.
12Note that we have used $\tilde{w}_k$ instead of $\tilde{w}_k^{\parallel}$ in writing the parallel component optimization problem. This does not make a difference as $\tilde{w}_k$ always appears as an inner product with a vector in the span of $\{x_i : i\in S_j\}$.
The $e^{b_i^\top(a_k - a_{y_i})}$ factors come from
$x_i^\top w_k = x_i^\top(\tilde{w}_k^{\parallel} + V_j a_k) = x_i^\top\tilde{w}_k + (V_j b_i)^\top V_j a_k = x_i^\top\tilde{w}_k + b_i^\top V_j^\top V_j a_k = x_i^\top\tilde{w}_k + b_i^\top a_k,$
since $V_j$ is an orthonormal basis.
F.3.3 OPTIMIZING THE IMPLICIT SGD UPDATE EQUATION
To optimize (21) we need to be able to take the derivative:
$\nabla_{a_\ell}\Big[\sum_{i\in S_j}\big(1 + P(z_{ijC}(A_{j,C}))\big)^2 + \sum_{k\in S_j\cup C}\Big(1 + \frac{\mu\beta_k}{2}\Big)\|a_k\|_2^2 + \mu\beta_k\tilde{w}_k^\top V_j a_k\Big]$
$= \sum_{i\in S_j} 2\big(1 + P(z_{ijC}(A_{j,C}))\big)\,\partial_{z_{ijC}(A_{j,C})}P(z_{ijC}(A_{j,C}))\,\nabla_{a_\ell}z_{ijC}(A_{j,C}) + (2 + \mu\beta_\ell)a_\ell + \mu\beta_\ell\tilde{w}_\ell^\top V_j$
$= \sum_{i\in S_j} 2\big(1 + P(z_{ijC}(A_{j,C}))\big)\frac{P(z_{ijC}(A_{j,C}))}{z_{ijC}(A_{j,C})(1 + P(z_{ijC}(A_{j,C})))}\nabla_{a_\ell}z_{ijC}(A_{j,C}) + (2 + \mu\beta_\ell)a_\ell + \mu\beta_\ell\tilde{w}_\ell^\top V_j$
$= \sum_{i\in S_j} 2\frac{P(z_{ijC}(A_{j,C}))}{z_{ijC}(A_{j,C})}\nabla_{a_\ell}z_{ijC}(A_{j,C}) + (2 + \mu\beta_\ell)a_\ell + \mu\beta_\ell\tilde{w}_\ell^\top V_j$
$= \sum_{i\in S_j} 2e^{-P(z_{ijC}(A_{j,C}))}\nabla_{a_\ell}z_{ijC}(A_{j,C}) + (2 + \mu\beta_\ell)a_\ell + \mu\beta_\ell\tilde{w}_\ell^\top V_j$
where we used that $\partial_z P(z) = \frac{P(z)}{z(1 + P(z))}$ and $e^{-P(z)} = P(z)/z$. To complete the calculation of the derivative we need
$\nabla_{a_\ell}z_{ijC}(A_{j,C}) = \nabla_{a_\ell}\,\eta p_j^{-1}\exp(\eta p_j^{-1})\Big(\exp(-\tilde{u}_i) + \sum_{k\in S_j - \{i\}} e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i}e^{b_i^\top(a_k - a_{y_i})} + \frac{K-n}{m}\sum_{k\in C} e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i}e^{b_i^\top(a_k - a_{y_i})}\Big)$
$= \eta p_j^{-1}\exp(\eta p_j^{-1})\,b_i\cdot\Big(\mathbb{I}[\ell\in S_j - \{i\}]\,e^{x_i^\top(\tilde{w}_\ell - \tilde{w}_{y_i}) - \tilde{u}_i}e^{b_i^\top(a_\ell - a_{y_i})} + \mathbb{I}[\ell\in C]\,\frac{K-n}{m}e^{x_i^\top(\tilde{w}_\ell - \tilde{w}_{y_i}) - \tilde{u}_i}e^{b_i^\top(a_\ell - a_{y_i})} - \mathbb{I}[\ell = y_i]\Big(\sum_{k\in S_j - \{i\}} e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i}e^{b_i^\top(a_k - a_{y_i})} + \frac{K-n}{m}\sum_{k\in C} e^{x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i}e^{b_i^\top(a_k - a_{y_i})}\Big)\Big).$
In order to calculate the full derivative with respect to $A_{j,C}$ we need to calculate $b_i^\top a_k$ for all $i\in S_j$ and $k\in S_j\cup C$. This is a total of $n(n+m)$ inner products of $n$-dimensional vectors, costing $O(n^2(n+m))$. To find the optimum of (21) we can use any optimization procedure that only uses gradients. Since (21) is strongly convex, standard first order methods can solve it to accuracy $\epsilon$ in $O(\log(\epsilon^{-1}))$ iterations (Boyd & Vandenberghe, 2004, Sec. 9.3). Thus once we can calculate all of the terms in (21), we can solve it to accuracy $\epsilon$ in runtime $O(n^2(n+m)\log(\epsilon^{-1}))$.
Once we have solved for $A_{j,C}$, we can reconstruct the optimal solution for the parallel component of $w_k$ as $w_k^{\parallel} = \tilde{w}_k^{\parallel} + V_j a_k$. Recall that the solution to the perpendicular component is $w_k^{\perp} = \frac{1}{1 + \mu\beta_k/2}\tilde{w}_k^{\perp}$. Thus our optimal solution is $w_k = \tilde{w}_k^{\parallel} + V_j a_k + \frac{1}{1 + \mu\beta_k/2}\tilde{w}_k^{\perp}$.
If the features xi are sparse, then we’d prefer to do a sparse update to w, saving computation time. We can achieve this by letting
$w_k = \gamma_k\cdot r_k$
where $\gamma_k$ is a scalar and $r_k$ a vector. Updating $w_k = \tilde{w}_k^{\parallel} + V_j a_k + \frac{1}{1 + \mu\beta_k/2}\tilde{w}_k^{\perp}$ is equivalent to
$\gamma_k = \tilde{\gamma}_k\cdot\frac{1}{1 + \mu\beta_k/2}, \qquad r_k = \tilde{r}_k + \frac{\mu\beta_k}{2}\cdot\tilde{r}_k^{\parallel} + \tilde{\gamma}_k^{-1}(1 + \mu\beta_k/2)\cdot V_j a_k.$
Since we only update $r_k$ along the span of $\{x_i : i\in S_j\}$, its update is sparse.
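A small NumPy sketch of this reparameterized update; forming $\tilde{r}_k^{\parallel}$ with a dense projection is an assumption made for clarity, whereas a practical implementation would restrict all work to the nonzero coordinates of the mini-batch features.

import numpy as np

def scaled_update(gamma_k, r_k, Vj, a_k, mu, beta_k):
    # Apply w_k = w_k_par + Vj a_k + w_k_perp / (1 + mu*beta_k/2) under the
    # reparameterization w_k = gamma_k * r_k.
    r_par = Vj @ (Vj.T @ r_k)                       # component of r_k in span(Vj)
    gamma_new = gamma_k / (1.0 + mu * beta_k / 2.0)
    r_new = r_k + (mu * beta_k / 2.0) * r_par + (1.0 + mu * beta_k / 2.0) / gamma_k * (Vj @ a_k)
    return gamma_new, r_new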
F.3.4 RUNTIME
There are two major tasks in calculating the terms in (21). The first is to calculate $x_i^\top\tilde{w}_k$ for $i\in S_j$ and $k\in S_j\cup C$. There are a total of $n(n+m)$ inner products of $D$-dimensional vectors, costing $O(n(n+m)D)$. The other task is to find the orthonormal basis $V_j$ of $\{x_i : i\in S_j\}$, which can be achieved using the Gram-Schmidt process in $O(n^2D)$. We'll assume that $\{V_j : j = 1, \ldots, J\}$ is computed only once as a pre-processing step when defining the mini-batches. It is exactly because calculating $\{V_j : j = 1, \ldots, J\}$ is expensive that we have fixed mini-batches that do not change during the optimization routine.
Adding the cost of calculating the $x_i^\top\tilde{w}_k$ inner products to the cost of optimizing (21) leads to the claim that we can solve the Implicit SGD update formula to accuracy $\epsilon$ in runtime $O(n(n+m)D + n^2(n+m)\log(\epsilon^{-1})) = O(n(n+m)(D + n\log(\epsilon^{-1})))$.
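As a sketch, this per-mini-batch pre-processing can be done with a reduced QR factorization in place of explicit Gram-Schmidt at the same $O(n^2D)$ cost; variable names below are illustrative.

import numpy as np

def preprocess_minibatch(X_batch):
    # Orthonormal basis V_j of span({x_i : i in S_j}) and coordinates b_i with
    # x_i = V_j b_i.  X_batch is the (n, D) matrix of mini-batch features.
    Vj, _ = np.linalg.qr(X_batch.T)                 # Vj: (D, n) with orthonormal columns
    B = X_batch @ Vj                                # row i of B equals b_i
    return Vj, B

This assumes the batch features are linearly independent (footnote 11); otherwise the trailing columns of V_j are not meaningful and the basis should be truncated to the actual rank.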
F.3.5 INITIALIZING THE IMPLICIT SGD OPTIMIZER
As was the case in Section F.1, it is important to initialize the optimization procedure at a point where the gradient is relatively small and can be computed without numerical issues. These numerical issues arise when an exponent $x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i + b_i^\top(a_k - a_{y_i}) \gg 0$. To ensure that this does not occur for our initial point, we can solve the following linear problem,13
$R = \min_{A_{j,C}}\sum_{k\in C\cup S_j}\|a_k\|_1$
$\text{s.t. } x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i + b_i^\top(a_k - a_{y_i}) \le 0 \quad \forall i\in S_j,\ k\in C\cup S_j \qquad (22)$
Note that if $k = y_i$ then the constraint $0 \ge x_i^\top(\tilde{w}_k - \tilde{w}_{y_i}) - \tilde{u}_i + b_i^\top(a_k - a_{y_i}) = -\tilde{u}_i$ is automatically fulfilled since $\tilde{u}_i \ge 0$. Also observe that setting $a_k = -V_j^\top\tilde{w}_k$ satisfies all of the constraints, and so
$R \le \sum_{k\in C\cup S_j}\|V_j^\top\tilde{w}_k\|_1 \le (n+m)\max_{k\in C\cup S_j}\|V_j^\top\tilde{w}_k\|_1.$
We can use the solution to (22) to give us an upper bound on (21). Consider the optimal value $A_{j,C}^{(R)}$ of the linear program in (22) with the value of the minimum being $R$. Since $A_{j,C}^{(R)}$ satisfies the constraint in (22) we have $z_{ijC}(A_{j,C}^{(R)}) \le K\eta p_j^{-1}\exp(\eta p_j^{-1})$. Since $P(z)$ is a monotonically increasing function that is non-negative for $z \ge 0$, we also have $(1 + P(z_{ijC}(A_{j,C}^{(R)})))^2 \le (1 + P(K\eta p_j^{-1}\exp(\eta p_j^{-1})))^2$. Turning to the norms, we can use the fact that $\|a\|_2 \le \|a\|_1$ for any
13Instead of bounding the constraints on the right with 0, we could also have used any small positive number, like 5.
vector $a$ to bound
$\sum_{k\in S_j\cup C}\Big(1 + \frac{\mu\beta_k}{2}\Big)\|a_k\|_2^2 + \mu\beta_k\tilde{w}_k^\top V_j a_k$
$\le \sum_{k\in S_j\cup C}\Big(1 + \frac{\mu\beta_k}{2}\Big)\|a_k\|_1^2 + \mu\beta_k\|\tilde{w}_k^\top V_j\|_1\|a_k\|_1$
$\le \Big(1 + \mu\cdot\max_{k\in S_j\cup C}\{\beta_k\}/2\Big)\sum_{k\in S_j\cup C}\|a_k\|_1^2 + \mu\max_{k\in S_j\cup C}\{\beta_k\|\tilde{w}_k^\top V_j\|_1\}\sum_{k\in S_j\cup C}\|a_k\|_1$
$\le \Big(1 + \mu\cdot\max_{k\in S_j\cup C}\{\beta_k\}/2\Big)R^2 + \mu\max_{k\in S_j\cup C}\{\beta_k\}\max_{k\in S_j\cup C}\{\|\tilde{w}_k^\top V_j\|_1\}R$
$\le \Big(1 + \mu\cdot\max_{k\in S_j\cup C}\{\beta_k\}/2\Big)\Big((n+m)\max_{k\in C\cup S_j}\|V_j^\top\tilde{w}_k\|_1\Big)^2 + \mu\max_{k\in S_j\cup C}\{\beta_k\}\max_{k\in S_j\cup C}\{\|\tilde{w}_k^\top V_j\|_1\}\Big((n+m)\max_{k\in C\cup S_j}\|V_j^\top\tilde{w}_k\|_1\Big)$
$\le \Big(1 + \mu\cdot\max_{k\in S_j\cup C}\{\beta_k\}\Big)(n+m)^2\max_{k\in C\cup S_j}\|V_j^\top\tilde{w}_k\|_1^2$
$\le \Big(1 + \mu\cdot\max_{k\in S_j\cup C}\{\beta_k\}\Big)(n+m)^2\max_{k\in C\cup S_j}\|\tilde{w}_k\|_1^2.$
Putting the bounds together we have that the optimal value of (21) is upper bounded by its value at the solution to (22), which in turn is upper bounded by
$n\big(1 + P(K\eta p_j^{-1}\exp(\eta p_j^{-1}))\big)^2 + \Big(1 + \mu\cdot\max_{k\in S_j\cup C}\{\beta_k\}\Big)(n+m)^2\max_{k\in C\cup S_j}\|\tilde{w}_k\|_1^2.$
This bound guarantees that our initial iterate will be numerically stable.
G LEARNING RATE PREDICTION AND LOSS
Here we present the results of using different learning rates for each algorithm applied to the Eurlex dataset. In addition to the Implicit SGD, NCE, IS, OVE and U-max algorithms, we also provide results for NCE with n = 1,m = 1, denoted as NCE (1,1) . NCE and NCE (1,1) have near identical performance. | 1. What is the focus of the paper regarding multi-class classification?
2. What is the key idea proposed in the paper for solving the problem?
3. What are the main contributions of the paper, particularly in terms of numerical stability and implicit SGD?
4. Do you have any concerns or criticisms regarding the paper's approach or claims?
5. How do you assess the significance and novelty of the paper's content? | Review | Review
The paper develops an interesting approach for solving multi-class classification with softmax loss.
The key idea is to reformulate the problem as a convex minimization of a "double-sum" structure via a simple conjugation trick. SGD is applied to the reformulation: in each step one samples a subset of the training samples and labels, which both appear in the double sum. The main contributions of this paper are the "U-max" idea (for numerical stability reasons) and the proposal of an "implicit SGD" idea.
Unlike the first review, I see what the term "exact" in the title is supposed to mean. I believe this was explained in the paper. I agree with the second reviewer that the approach is interesting. However, I also agree with the criticism (double-sum formulations exist in the literature; comments about experiments) and will not repeat it here. I will stress, though, that the statement about Newton in the paper is not justified. Newton's method does not converge globally at a linear rate. Cubic regularisation is needed for global convergence. The local rate is quadratic.
I believe the paper could warrant acceptance if all criticism raised by reviewer 2 is addressed.
I apologise for the short and late review: I got access to the paper only after the original review deadline.
ICLR | Title
Training Group Orthogonal Neural Networks with Privileged Information
Abstract
Learning rich and diverse feature representations is always desirable for deep convolutional neural networks (CNNs). Moreover, when auxiliary annotations are available for specific data, simply ignoring them would be a great waste. In this paper, we incorporate these auxiliary annotations as privileged information and propose a novel CNN model that maximizes its inherent diversity, so that the model learns better feature representations with stronger generalization ability. More specifically, we propose a group orthogonal convolutional neural network (GoCNN) that learns features from foreground and background in an orthogonal way by exploiting privileged information for optimization, which automatically emphasizes feature diversity within a single model. Experiments on two benchmark datasets, ImageNet and PASCAL VOC, demonstrate the effectiveness and high generalization ability of the proposed GoCNN models.
1 INTRODUCTION
Deep convolutional neural networks (CNNs) have brought a series of breakthroughs in image classification tasks (He et al., 2015; Girshick, 2015; Zheng et al., 2015). Many recent works (Simonyan & Zisserman, 2014; He et al., 2015; Krizhevsky et al., 2012) have observed that CNNs with different architectures or even different weight initializations may learn slightly different feature representations. Combining these heterogeneous models can provide richer and more diverse feature representations, which can further boost the final performance. Such observations motivate us to directly pursue feature diversity within a single model in this work.
Besides, many existing datasets (Everingham et al., 2010; Deng et al., 2009; Xiao et al., 2010) provide more than one type of annotation. For example, PASCAL VOC (Everingham et al., 2010) provides image-level tags, object bounding boxes, and image segmentation masks; the ImageNet dataset (Deng et al., 2009) provides image-level tags and bounding boxes for a small portion of images. Using only the image-level tags to train an image classification model would waste the other annotation resources. Therefore, in this work, we investigate whether these auxiliary annotations can also help a CNN model learn richer and more diverse feature representations.
In particular, we take advantage of this extra annotated information when training a CNN model in order to obtain a single CNN model with sufficient inherent diversity, with the expectation that the model learns more diverse feature representations and offers stronger generalization ability for image classification than vanilla CNNs. We therefore propose a group orthogonal convolutional neural network (GoCNN) model that exploits this extra annotated information as privileged information. The idea is to learn different groups of convolutional functions which are "orthogonal" to the ones in other groups. Here by "orthogonal", we mean there is no significant correlation among the produced features. By "privileged information", we mean this auxiliary information is used only during the training phase. Optimizing orthogonality among convolutional functions reduces redundancy and increases diversity within the architecture.
Properly defining the groups of convolutional functions in the GoCNN is not an easy task. In this work, we propose to exploit available privileged information for identifying the proper groups. Specifically, in the context of image classification, object segmentation annotations which are (partially) available in several public datasets give richer information.
In addition, the background contents are usually independent of foreground objects within an image. Thus, splitting convolutional functions into different groups and enforcing them to learn features from the foreground and background separately can help construct orthogonal groups with small correlations. Motivated by this, we introduce the GoCNN architecture, which learns discriminative features from foreground and background separately, where the foreground-background segregation is given by the privileged segmentation annotation used for training GoCNN. In this way, the inherent diversity of GoCNN can be explicitly enhanced. Moreover, benefiting from pursuing group orthogonality, the learned convolutional functions within GoCNN are demonstrated to be foreground and background diagnostic even when extracting features from new images in the testing phase.
To the best of our knowledge, this work is the first to explore a principled way to train a deep neural network with desired inherent diversity and the first to investigate how to use the segmentation privileged information to assist image classification within a deep learning architecture. Experiments on ImageNet and PASCAL VOC clearly demonstrate GoCNN improves upon vanilla CNN models significantly, in terms of classification accuracy.
As a by-product of implementing GoCNN, we also provide positive answers to the following two prominent questions about image classification: (1) Does background information indeed help object recognition in deep learning? (2) Can a more precise annotation with richer information, e.g., segmentation annotation, assist the image classification training process non-trivially?
2 RELATED WORK
Learning rich and diverse feature representations is always desired while training CNNs for gaining stronger generalization ability. However, most existing works mainly focus on introducing handcrafted cost functions to implicitly pursue diversity (Tang, 2013), or modifying activation functions to increase model non-linearity (Jin et al., 2015) or constructing a more complex CNN architecture (Simonyan & Zisserman, 2014; He et al., 2015; Krizhevsky et al., 2012). Methods that explicitly encourage inherent diversity of CNN models are still rare so far.
Knowledge distillation (Hinton et al., 2015) can be seen as an effective way to learn more discriminative and diverse feature representations. The distillation process compresses knowledge and thus encourages a weak model to learn more diverse and discriminative features. However, knowledge distillation works in two stages which are isolated from each other and has to rely on pre-training a complicated teacher network model. This may introduce undesired computation overhead. In contrast, our proposed approach can learn a diverse network in a single stage without requiring an extra network model. Similar works, e.g. the Diversity Networks (Sra & Hosseini), also squeeze the knowledge by preserving the most diverse features to avoid harming the performance.
More recently, Cogswell et al. (2016) proposed the DeCov approach to reduce over-fitting risk of a deep neural network model by reducing feature covariance. DeCov also agrees with increasing generalization ability of a model by pursuing feature diversity. This is consistent with our motivation. However, DeCov penalizes the covariance in an unsupervised fashion and cannot utilize extra available annotations, leading to insignificant performance improvement over vanilla models (Cogswell et al., 2016).
Using privileged information to learn better features during the training process is similar in spirit to our method. Both our proposed method and Lapin et al. (2014) introduce privileged information to assist the training process. However, almost all existing works (Lapin et al., 2014; Lopez-Paz et al., 2016; Sharmanska et al., 2014) are based on SVM+, which only focuses on training a better classifier and cannot perform end-to-end training for better features.
Several works (Andrew et al., 2013; Srivastava & Salakhutdinov, 2012) about canonical correlation analysis (CCA) for CNNs provide a way to constrain feature diversity. However, the goal of CCA
is to find linear projections for two random vectors that are maximally correlated, which is different from ours.
It is also worth noticing that simply adding a segmentation loss to an image classification neural network is not equivalent to a GoCNN model. This is because image segmentation requires each pixel within the target area to be activated and the others to stay silent for dense prediction, while GoCNN does not require each pixel within the target area to be activated. GoCNN is specifically designed for classification tasks, not for segmentation ones. Moreover, our proposed GoCNN supports learning from partial privileged information while the CNN above needs a fully annotated training set.
3 MODEL DIVERSITY OF CONVOLUTIONAL NEURAL NETWORKS
Throughout the paper, we use f(·) to denote a convolutional function (or filter) and k to index the layers in a multi-layer network. We use c(k) to denote the total number of convolutional functions at the k-th layer and use i and j to index different functions, i.e., f (k)i (·) denotes the i-th convolutional function at the k-th layer of the network. The function f maps an input feature map to another new feature map. The height and the width of a feature map output at the layer k are denoted as h(k) and w(k) respectively. We consider a network model consisting of N layers in total.
Under a standard CNN architecture, the elements within the same feature map are produced by the same convolutional function f (k)i and thus they represent the same type of features across different locations. Therefore, encouraging the feature variance or diversity within a single feature map does not make sense. In this work, our target is to enhance the diversity among different convolutional functions. Here we first give a formal description of model diversity for an N -layer CNN.
Definition 1 (Model Diversity). Let $f_i^{(k)}$ denote the $i$-th convolutional function at the $k$-th layer of a neural network model; the model diversity of the $k$-th layer is then defined as
$\zeta^{(k)} \triangleq 1 - \frac{1}{c^{(k)2}}\sum_{i,j=1}^{c^{(k)}}\mathrm{cor}\big(f_i^{(k)}, f_j^{(k)}\big). \qquad (1)$
Here the operator cor(·, ·) denotes the statistical correlation.
In other words, the inherent diversity of a network model that we are going to maximize is evaluated across all the convolutional functions within the same layer.
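As a rough illustration, $\zeta^{(k)}$ can be estimated by sampling responses of the layer's convolutional functions over a set of inputs and computing their correlation matrix; the spatial pooling choice and the names below are assumptions made for the sketch.

import numpy as np

def estimate_layer_diversity(responses):
    # responses: (num_samples, c_k) array whose column i holds sampled (e.g.
    # spatially averaged) outputs of convolutional function f_i^(k).
    c_k = responses.shape[1]
    corr = np.corrcoef(responses, rowvar=False)     # pairwise cor(f_i^(k), f_j^(k))
    return 1.0 - corr.sum() / c_k ** 2              # Definition 1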
The most straightforward way to maximize the above diversity for each layer is to directly maximize the quantity $\zeta^{(k)}$ while training the network. However, it is quite involved to optimize the hard diversity in (1) due to the large number of combinations of different convolutional functions. Thus, we propose to solve this problem by learning the convolutional functions in different groups separately. Different functions from different groups are uncorrelated with each other, so we do not need to consider their correlation. Suppose the convolutional functions at each layer are partitioned into $m$ different groups, denoted as $G = \{G_1, \ldots, G_m\}$. Then, we instead maximize the following Group-wise Model Diversity.
Definition 2 (Group-wise Model Diversity). Given a pre-defined group partition set G = {G1, . . . , Gm} of convolutional functions at a specific layer, the group-wise model diversity of this layer is defined as
$\zeta_g^{(k)} \triangleq 1 - \frac{1}{c^{(k)2}}\sum_{s,t=1}^{|G|}\sum_{i\in G_s,\,j\in G_t}\mathrm{cor}\big(f_i^{(k)}, f_j^{(k)}\big).$
Instead of directly optimizing the model diversity, we consider optimizing the group-wise model diversity by finding a set of orthogonal groups {G∗1, . . . , G∗m}, where convolutional functions within each group are uncorrelated with others within different groups. In the scenario of image representation learning, one typical example of such orthogonal groups is the foreground group and background group pair — partitioning the functions into two groups and letting them learn features from foreground and background contents respectively.
In this work, we use segmentation annotation as privileged information for finding orthogonal groups of convolutional functions G∗1, . . . , G ∗ m. In particular, we derive the foreground and background segregation from the privileged information for an image. Then we partition convolutional functions at a specific layer of a CNN model into foreground and background groups respectively, and train a GoCNN model to learn the foreground and background features separately. Details about the architecture of the GoCNN and the training procedure of GoCNN are given in the following section.
4 GROUP ORTHOGONAL CONVOLUTIONAL NEURAL NETWORKS
We introduce the group orthogonal constraint to maximize group-wise diversity among different groups of convolutional functions explicitly by constructing a group orthogonal convolutional neural network (GoCNN). Details on the architecture of GoCNN are shown in Figure 1. GoCNN is built upon a standard CNN architecture. The convolutional functions at the final convolution layer are explicitly divided into two groups: the foreground group which concentrates on learning the foreground feature and the background group which learns the background feature. The output features of these two groups are then aggregated by a fully connected layer.
In the following subsections, we give more details of the foreground and background groups construction. After that, we will describe how to combine these two components and build them into a unified network architecture — the GoGNN.
4.1 FOREGROUND AND BACKGROUND GROUPS
To learn convolutional functions that are specific for foreground content of an image, we propose the following two constraints for the foreground group of functions. The first constraint forces the functions to be learned from the foreground only and free of any contamination from the background, and the second constraint encourages the learned functions to be discriminative for image classification.
We learn features that only lie in the foreground by suppressing any contamination from the background. As aforementioned, here we use the object segmentation annotations (denoted as Mask) as the privileged information in the training phase to help identify the background features where the foreground convolutional functions should not respond to. The background contamination is extracted by an extractor adopted on each feature map within the foreground group. In particular, we define an extractor ϕ(·, ·) as follows:
$\varphi(f_i^{(k)}(x), \mathrm{Mask}) \triangleq f_i^{(k)}(x) \odot \mathrm{Mask}, \qquad (2)$
where $x$ denotes the raw input and $\odot$ denotes element-wise multiplication. In the above operator, we use the background object mask $\mathrm{Mask}_b$ to extract background features. Each element in $\mathrm{Mask}_b$ equals one if the corresponding position lies on a background object and zero otherwise. Here, we assume the masks have already been re-sized by interpolation to the dimensionality of the output feature map $f_i^{(k)}(x)$, so that the element-wise multiplication is valid. The extracted background features are then suppressed by a regression loss defined as follows:
$\min_{\theta}\sum_i \|\varphi(f_i^{(k)}(x; \theta), \mathrm{Mask}_b)\|_F. \qquad (3)$
Here θ parameterizes the convolution function f (k)i . Since the target value for this regression is zero, we also call it a suppression term. It will only suppress the response output by f (k)i at the locations outside the foreground.
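A NumPy sketch of the extractor and suppression term; the array shapes and names are assumptions, and in practice the same operation would be expressed in the training framework so that gradients flow back to the foreground-group filters.

import numpy as np

def suppression_loss(feature_maps, background_mask):
    # feature_maps: (num_fg_channels, h, w) responses of the foreground group;
    # background_mask: (h, w) array already resized to the feature resolution,
    # 1 on background and 0 on foreground.
    contamination = feature_maps * background_mask[None, :, :]        # the extractor phi of Eq. (2)
    return sum(np.linalg.norm(c, ord='fro') for c in contamination)   # Eq. (3)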
For the second constraint, i.e., encouraging the functions to learn discriminative features, we simply use the standard softmax classification loss to supervise the learning phase.
The role of the background group is complementary to the foreground one. It aims to learn convolutional functions that are specific only to background contents. Thus, the functions within the background group have the same suppression term as in Eqn. (3), in which Mask_b is replaced with Mask_f to restrict the learned features to lie only in the background space. Mask_f is simply computed as Mask_f = 1 − Mask_b. Also, a softmax linear classifier is attached during training to guarantee that these learned background functions are useful for predicting image categories.
4.2 ARCHITECTURE AND IMPLEMENTATION DETAILS OF THE GOCNN
In GoCNN, the size ratio of foreground group and background group is fixed to be 3:1 during training, since intuitively the foreground contents are much more informative than the background contents in classifying images. A single fully connected layer (or multiple layers depending on the basic CNN architecture) is used to unify the functional learning within different groups and combine features learned from different groups. It aggregates the information from different feature spaces and produces the final image category prediction. More details are given in Figure 1.
Because we are dealing with the classification problem, a main classifier with a standard classification loss function is adopted at the top layer of GoCNN. In our experiments, the standard softmax loss is used for single-label image classification and the logistic regression loss is used for multiplelabel image classification, e.g., images from the Pascal VOC dataset (Everingham et al., 2010).
During the testing stage, parts unrelated to the final main output will be removed, as shown in Figure 1 (b). Therefore, in terms of testing, neither extra parameters nor extra computational cost is introduced. The GoCNN is exactly the same as the adopted CNN in the testing phase.
In summary, for an incoming training sample, it passes through all the layers to the final convolution layer. Then the irrelevant features for each group (foreground or background) will be filtered out by privileged segmentation masks. Those filtered features will then flow into a suppressor (see Eqn. (3)). For the output features from each group, it will flow up along two paths: one leads to the group-wise classifier, and the other one leads to the main classifier. The three gradients from the suppressors, the group-wise classifiers and the main classifier will be used for updating the network parameters.
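Putting the pieces together, the per-sample training objective is the main classification loss plus the two group-wise classification losses and the two suppression terms. The sketch below combines them for one image in plain NumPy with linear classifiers on globally average-pooled features; the loss weight `lam`, the concatenation-based main classifier and all names are assumptions for illustration, not values taken from the paper.

import numpy as np

def softmax_ce(logits, label):
    logits = logits - logits.max()
    return float(np.log(np.exp(logits).sum()) - logits[label])

def gocnn_loss(fg_maps, bg_maps, mask_f, W_fg, W_bg, W_main, label, lam=1.0):
    # fg_maps: (C_fg, h, w), bg_maps: (C_bg, h, w) final-layer feature maps;
    # mask_f: (h, w) foreground mask at feature resolution; W_*: linear weights.
    mask_b = 1.0 - mask_f
    fg_vec, bg_vec = fg_maps.mean(axis=(1, 2)), bg_maps.mean(axis=(1, 2))          # global average pooling
    loss  = softmax_ce(W_main @ np.concatenate([fg_vec, bg_vec]), label)           # main classifier
    loss += softmax_ce(W_fg @ fg_vec, label) + softmax_ce(W_bg @ bg_vec, label)    # group-wise classifiers
    loss += lam * np.linalg.norm(fg_maps * mask_b[None])   # suppress background response in fg group
    loss += lam * np.linalg.norm(bg_maps * mask_f[None])   # suppress foreground response in bg group
    return loss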
Applications with Incomplete Privileged Information Our proposed GoCNN can also be applied for semi-supervised learning. When only a small subset of images in a dataset have the privileged segmentation annotations, we simply set the segmentations of images without annotations to be Mask_f = Mask_b = 1, where 1 is the matrix with all of its elements equal to one. In other words, we disable both the suppression terms (ref. Eqn. (3)) on foreground and background parts as well as the extractors on the back-propagation path. By doing so, fully annotated training samples with privileged information supervise GoCNN to learn both discriminative and diverse features, while the samples with only image tags guide GoCNN to learn only category-discriminative features.
5 EXPERIMENTS
5.1 EXPERIMENT SETTINGS AND IMPLEMENTATION DETAILS
Datasets We evaluate the performance of GoCNN in image classification on two benchmark datasets, i.e., the ImageNet (Deng et al., 2009) dataset and the Pascal VOC 2012 dataset (Everingham et al., 2010).
• ImageNet ImageNet contains 1,000 fine-grained classes with about 1,300 images for each class and 1.2 million images in total, but without any image segmentation annotations. To collect privileged information, we randomly select 130 images from each class and manually annotate the object segmentation masks for them. Since our focus is on justifying the effectiveness of our proposed method, rather than pushing the state-of-the-art, we only collect privileged information for 10% data (overall 130k training images) to show performance improvement brought by our model. We call the new dataset consisting of these segmented images as ImageNet-0.1m. For evaluation, we use the original validation set of ImageNet which contains 50,000 images. Note that neither our baselines nor the proposed GoCNN needs segmentation information in testing phase.
• PASCAL VOC 2012 The PASCAL VOC 2012 dataset contains 11,530 images from 20 classes. For the classification task, there are 5,717 images for training and 5,823 images for validation. We use this dataset to further evaluate the generalization ability of different models including GoCNN trained on the ImageNet-0.1m: we pre-train the evaluated models on the ImageNet0.1m dataset and fine-tune them using the logistic regression loss on PASCAL VOC 2012 training set. We evaluate their performance on the validation set.
The Basic Architecture of GoCNN In our experiments, we use the ResNet (He et al., 2015) as the basic architecture to build GoCNN. Since the deepest ResNet contains 152 layers which will cost several weeks to train, we choose a light version of architecture (ResNet-18 (He et al., 2015)) that contains 18 layers as our basic model for most cases. We also use the ResNet-152 (He et al., 2015) for experiments on the full ImageNet dataset. The final convolution layer gives a 7× 7 output and is pooled into a 1× 1 feature map by average pooling. Then a fully connected layer is added to perform linear classification. The used loss function for the single class classification on ImageNet dataset is the standard softmax loss. When performing multi-label classification on PASCAL VOC, we use the logistic regression loss.
Training and Testing Strategy We use MXNet (Chen et al., 2015) to conduct model training and testing. The GoCNN weights are initialized as in He et al. (2015) and we train GoCNN from scratch. Images are resized with a shorter side randomly sampled within [256, 480] for scale augmentation and 224×224 crops are randomly sampled during training (He et al., 2015). We use SGD with base learning rate equal to 0.1 at the beginning and reduce the learning rate by a factor of 10 when the validation accuracy saturates. For the experiments on ResNet-18 we use single node with a minibatch size of 512. For the ResNet-152 we use 48 GPUs with mini-batch size of 32 for each GPU. Following He et al. (2015), we use a weight decay of 0.0001 and a momentum of 0.9 in the training.
We evaluate the performance of GoCNN on two different testing settings: the complete privileged information setting and the partial privileged information setting. We perform 10-crop testing (Krizhevsky et al., 2012) for the complete privileged information scenario, and do a single crop testing for the partial privileged information scenario for convenience.
Compared Baseline Models Our proposed GoCNN follows the Learning Using Privileged Information (LUPI) paradigm (Lapin et al., 2014), which exploits additional information to facilitate
learning but does not require extra information in testing. There are a few baseline models falling into the same paradigm that we can compare with. One is the SVM+ method (Pechyony & Vapnik, 2011) and the other one is the standard model (i.e., the ResNet-18). We simply refer to ResNet-18 by baseline if no confusion occurs. In the experiments, we implement the SVM+ using the code provided by Pechyony & Vapnik (2011) with default parameter settings and linear kernel. We follow the scheme as described in Lapin et al. (2014) to train the SVM+ model. More concretely, we train multiple one-versus-rest SVM+ models upon the deep features extracted from both the entire images and the foreground regions (used as the privileged information). We use the averaged pooling over 10 crops on the feature maps before the FC layer as the deep feature for training SVM+. It is worth noting that all of these models (including SVM+ and GoCNN) use a linear classifier and thus have the same number of parameters, or more concretely, GoCNN does not require more parameters than SVM+ and the vanilla ResNet.
5.2 TRAINING MODELS WITH COMPLETE PRIVILEGED INFORMATION
In this subsection, we consider the scenario where every training sample has complete privileged segmentation information. Firstly, we evaluate the performance of our proposed GoCNN on the ImageNet-0.1m dataset. Table 1 summarizes the accuracy of different models. As can be seen from the results, given the complete privileged information, our proposed GoCNN performs much better than the compared models. The group orthogonal constraints successfully regularize the learned features to lie within the foreground and background spaces, and the trained GoCNN thus shows stronger generalization ability. It is also interesting (although not surprising) to observe that, when foreground features are combined with background features, the performance of GoCNN is further improved from 49.60% to 50.39% in terms of top-1 accuracy. One can observe that background information indeed benefits object recognition to some extent. To further investigate the contribution of each component within GoCNN to the final performance, we conduct another experiment and show the results in Table 2. In these experiments, we deliberately block gradient propagation from all components except the one being investigated during training, and add another setting for the baseline method in which the background is removed and only the foreground object is retained in each training sample, denoted Baseline-obj. Comparing the results of Full GoCNN across different classifiers, we can see that learning background features actually improves the overall performance. And when we compare the Fg classifier across Baseline-obj, Only Fg and Full GoCNN, we can see the importance of background information in training more robust and richer foreground features.
Secondly, to verify the effectiveness of learning features in two different groups with our proposed method, we visualize the maximum activation value within each group of feature maps of several testing images. The feature maps are generated by the final convolution layer with 384 × 384 resolution input testing images. Then, the final convolution layer gives 12 × 12 output maps. We aggregate feature maps within the same group into one feature map by max operation. As can be seen from Figure 2, foreground and background features are well separated and the result looks just like the semantic segmentation mask. Compared with the baseline model, more neurons are activated in our proposed method in the two orthogonal spaces. This indicates that more diverse and discriminative features are learned in our framework compared with the baseline method. Finally, we further evaluate the generalization ability of our proposed method on the PASCAL VOC dataset. It is well known that an object shares many common properties with others even if they are not from the same category. A well-performing CNN model should be able to learn robust features rather than just fit the training images. In this experiment, we fine-tune different models on the PASCAL VOC images to test whether the learned features are able to generalize well to another dataset. Note that
we add another convolution layer with a 1 × 1 kernel size and 512 outputs as an adaptive layer on all models. It is not necessary to add such a layer in networks without a residual structure (He et al., 2015). As can be seen from Table 3, our proposed network shows better results and higher average precision across all categories, which means our proposed GoCNN learns more representative and richer features that are easier to transfer from one domain to another.
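Returning to the visualization procedure described above (max-aggregating the feature maps within each group, as used for Figure 2), a minimal sketch could look like the following; the split point between foreground and background channels is assumed to be known.

```python
import torch

def group_activation_maps(feature_maps, fg_channels):
    """Aggregate the feature maps of each group into one map via a channel-wise max.

    feature_maps: (B, C, H, W) output of the final convolution layer
                  (e.g. 12 x 12 maps for 384 x 384 inputs).
    fg_channels:  number of channels belonging to the foreground group;
                  the remaining channels form the background group.
    """
    fg_map = feature_maps[:, :fg_channels].max(dim=1).values   # (B, H, W)
    bg_map = feature_maps[:, fg_channels:].max(dim=1).values   # (B, H, W)
    return fg_map, bg_map
```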
5.3 TRAINING GOCNN WITH PARTIAL PRIVILEGED INFORMATION
In this subsection, we investigate the performance of different models when using only partial privileged information. The experiment is also conducted on the ImageNet-0.1m dataset. We evaluate the performance of our proposed GoCNN by varying the percentage of privileged information (i.e., the percentage of training images with segmentation annotations) from 20% to 100%.
The validation accuracies of GoCNN and the baseline model (i.e., the ResNet-18) are shown in Table 4. From the results, one can observe that the accuracy continuously increases with the percentage of privileged information until the percentage reaches 80%. The gain from increasing the percentage from 40% to 100% is only 0.71%, compared with 0.92% from increasing it from 20% to 40%. This is probably because the suppression losses are more effective than we expected; that is, with very little guidance from the suppression loss, the network is already able to separate foreground and background features and explore new features within each group.
To verify the effectiveness of GoCNN on a very large training dataset with a more complex CNN architecture, we conduct another experiment on the complete ImageNet-1k dataset with only 10% privileged information, using the 152-layer ResNet as our basic model. As can be seen from Table 5, our proposed GoCNN achieves a 21.8% top-1 error while the vanilla ResNet-152 has a 23.0% top-1 error. Such a performance boost is consistent with the results shown in Table 4, which again confirms the effectiveness of GoCNN.
6 DISCUSSIONS
Based on our experimental results, we can also provide answers to the following two important questions.
Does background information indeed help object recognition for deep learning methods? Based on our experiments, we give a positive answer. Intuitively, background information may provide some “hints” for object recognition. However, although several works (Song et al., 2011; Russakovsky et al., 2012) have proven the usefulness of background information when using handcrafted features, few works have studied the effectiveness of background information in deep learning methods for object recognition tasks. Based on the experimental results shown in Table 2, both the foreground classification accuracy and the overall classification accuracy can be further boosted with our proposed framework. This means that background deep features can also provide useful information for foreground object recognition.
Can a more precise annotation with richer information, e.g., segmentation annotation, assist the image classification training process? The answer is clearly yes. In fact, in recent years, several works have explored how object detection and segmentation can benefit each other (Dai et al., 2015; Hariharan et al., 2014). However, none of the existing works has studied how image segmentation information can help train a better deep classification network. In this work, by treating the segmentation annotations as privileged information, we are the first to demonstrate a possible way to utilize segmentation annotations to assist image classification training.
7 CONCLUSION
We proposed a group orthogonal neural network for image classification which encourages learning more diverse feature representations. Privileged information is utilized to train the proposed GoCNN model. To the best of our knowledge, we are the first to explore how to use image segmentation as privileged information to assist CNN training for image classification. | 1. What is the main contribution of the paper regarding feature learning in convnets?
2. What are the strengths and weaknesses of the proposed approach, particularly in its use of foreground segmentation masks and group-wise model diversity loss term?
3. How does the reviewer assess the effectiveness of the method, especially with regards to its ability to handle incomplete privileged information?
4. What are some of the concerns or confusions raised by the reviewer regarding the presentation and implementation details of the paper?
5. How do the results in Table 4 compare to those in Tables 1-2, and what might explain any differences? | Review | Review
This paper proposes to learn groups of orthogonal features in a convnet by penalizing correlation among features in each group. The technique is applied in the setting of image classification with “privileged information” in the form of foreground segmentation masks, where the model is trained to learn orthogonal groups of foreground and background features using the correlation penalty and an additional “background suppression” term.
Pros:
Proposes a “group-wise model diversity” loss term which is novel, to my knowledge.
The use of foreground segmentation masks to improve image classification is also novel.
The method is evaluated on two standard and relatively large-scale vision datasets: ImageNet and PASCAL VOC 2012.
Cons:
The evaluation is lacking. There should be a baseline that leaves out the background suppression term, so readers know how much that term is contributing to the performance vs. the group orthogonal term. The use of the background suppression term is also confusing to me -- it seems redundant, as the group orthogonality term should already serve to suppress the use of background features by the foreground feature extractor.
It would be nice to see the results with “Incomplete Privileged Information” on the full ImageNet dataset (rather than just 10% of it) with the privileged information included for the 10% of images where it’s available. This would verify that the method and use of segmentation masks remains useful even in the regime of more labeled classification data.
The presentation overall is a bit confusing and difficult to follow, for me. For example, Section 4.2 is titled “A Unified Architecture: GoCNN”, yet it is not an overview of the method as a whole, but a list of specific implementation details (even the very first sentence).
Minor: calling eq 3 a “regression loss” and writing “||0 - x||” rather than just “||x||” is not necessary and makes understanding more difficult -- I’ve never seen a norm regularization term written this way or described as a “regression to 0”.
Minor: in fig. 1 I think the FG and BG suppression labels are swapped: e.g., the “suppress foreground” mask has 1s in the FG and 0s in the BG (which would suppress the BG, not the FG).
An additional question: why are the results in Table 4 with 100% privileged information different from those in Table 1-2? Are these not the same setting?
The ideas presented in this paper are novel and show some promise, but are currently not sufficiently ablated for readers to understand what aspects of the method are important. Besides additional experiments, the paper could also use some reorganization and revision for clarity.
===============
Edit (1/29/17): after considering the latest revisions -- particularly the full ImageNet evaluation results reported in Table 5 demonstrating that the background segmentation 'privileged information' is beneficial even with the full labeled ImageNet dataset -- I've upgraded my rating from 4 to 6.
(I'll reiterate a very minor point about Figure 1 though: I still think the "0" and "1" labels in the top part of the figures should be swapped to match the other labels. e.g., the topmost path in figure 1a, with the text "suppress foreground", currently has 0 in the background and 1 in the foreground, when one would want the reverse of this to suppress the foreground.) |
ICLR | Title
Training Group Orthogonal Neural Networks with Privileged Information
Abstract
Learning rich and diverse feature representations is always desirable for deep convolutional neural networks (CNNs). Besides, when auxiliary annotations are available for specific data, simply ignoring them would be a great waste. In this paper, we incorporate these auxiliary annotations as privileged information and propose a novel CNN model that maximizes its inherent diversity, so that the model can learn better feature representations with stronger generalization ability. More specifically, we propose a group orthogonal convolutional neural network (GoCNN) that learns features from foreground and background in an orthogonal way by exploiting privileged information for optimization, which automatically emphasizes feature diversity within a single model. Experiments on two benchmark datasets, ImageNet and PASCAL VOC, clearly demonstrate the effectiveness and strong generalization ability of our proposed GoCNN models.
N/A
Learning rich and diverse feature representations is always desirable for deep convolutional neural networks (CNNs). Besides, when auxiliary annotations are available for specific data, simply ignoring them would be a great waste. In this paper, we incorporate these auxiliary annotations as privileged information and propose a novel CNN model that maximizes its inherent diversity, so that the model can learn better feature representations with stronger generalization ability. More specifically, we propose a group orthogonal convolutional neural network (GoCNN) that learns features from foreground and background in an orthogonal way by exploiting privileged information for optimization, which automatically emphasizes feature diversity within a single model. Experiments on two benchmark datasets, ImageNet and PASCAL VOC, clearly demonstrate the effectiveness and strong generalization ability of our proposed GoCNN models.
1 INTRODUCTION
Deep convolutional neural networks (CNNs) have brought a series of breakthroughs in image classification tasks (He et al., 2015; Girshick, 2015; Zheng et al., 2015). Many recent works (Simonyan & Zisserman, 2014; He et al., 2015; Krizhevsky et al., 2012) have observed that CNNs with different architectures or even different weight initializations may learn slightly different feature representations. Combining these heterogeneous models can provide richer and more diverse feature representations, which can further boost the final performance. This observation motivates us to directly pursue feature diversity within a single model in this work.
Besides, many existing datasets (Everingham et al., 2010; Deng et al., 2009; Xiao et al., 2010) provide more than one type of annotation. For example, PASCAL VOC (Everingham et al., 2010) provides image-level tags, object bounding boxes, and image segmentation masks; the ImageNet dataset (Deng et al., 2009) provides image-level tags and bounding boxes for a small portion of images. Using only the image-level tags to train an image classification model would be a great waste of the other annotation resources. Therefore, in this work, we investigate whether these auxiliary annotations can also help a CNN model learn richer and more diverse feature representations.
In particular, we take advantage of this extra annotated information when training a CNN model, aiming to obtain a single CNN model with sufficient inherent diversity, with the expectation that the model learns more diverse feature representations and offers stronger generalization ability for image classification than vanilla CNNs. We therefore propose a group orthogonal convolutional neural network (GoCNN) model that is able to exploit this extra annotated information as privileged information. The idea is to learn different groups of convolutional functions which are “orthogonal” to the ones in other groups. Here by “orthogonal”, we mean there is no significant correlation among the produced features. By “privileged information”, we mean auxiliary information that is used only during the training phase. Optimizing orthogonality among convolutional functions reduces redundancy and increases diversity within the architecture.
Properly defining the groups of convolutional functions in the GoCNN is not an easy task. In this work, we propose to exploit available privileged information for identifying the proper groups. Specifically, in the context of image classification, object segmentation annotations which are (partially) available in several public datasets give richer information.
In addition, the background contents are usually independent of the foreground objects within an image. Thus, splitting convolutional functions into different groups and enforcing them to learn features from the foreground and background separately can help construct orthogonal groups with small correlations. Motivated by this, we introduce the GoCNN architecture, which learns discriminative features from foreground and background separately, where the foreground-background segregation is provided by the privileged segmentation annotations used for training GoCNN. In this way, the inherent diversity of the GoCNN can be explicitly enhanced. Moreover, benefiting from pursuing group orthogonality, the learned convolutional functions within GoCNN are demonstrated to be foreground- and background-diagnostic even when extracting features from new images in the testing phase.
To the best of our knowledge, this work is the first to explore a principled way to train a deep neural network with desired inherent diversity and the first to investigate how to use the segmentation privileged information to assist image classification within a deep learning architecture. Experiments on ImageNet and PASCAL VOC clearly demonstrate GoCNN improves upon vanilla CNN models significantly, in terms of classification accuracy.
As a by-product of implementing GoCNN, we also provide positive answers to the following two prominent questions about image classification: (1) Does background information indeed help object recognition in deep learning? (2) Can a more precise annotation with richer information, e.g., segmentation annotation, assist the image classification training process non-trivially?
2 RELATED WORK
Learning rich and diverse feature representations is always desired while training CNNs for gaining stronger generalization ability. However, most existing works mainly focus on introducing handcrafted cost functions to implicitly pursue diversity (Tang, 2013), or modifying activation functions to increase model non-linearity (Jin et al., 2015) or constructing a more complex CNN architecture (Simonyan & Zisserman, 2014; He et al., 2015; Krizhevsky et al., 2012). Methods that explicitly encourage inherent diversity of CNN models are still rare so far.
Knowledge distillation (Hinton et al., 2015) can be seen as an effective way to learn more discriminative and diverse feature representations. The distillation process compresses knowledge and thus encourages a weak model to learn more diverse and discriminative features. However, knowledge distillation works in two stages which are isolated from each other and has to rely on pre-training a complicated teacher network model. This may introduce undesired computation overhead. In contrast, our proposed approach can learn a diverse network in a single stage without requiring an extra network model. Similar works, e.g. the Diversity Networks (Sra & Hosseini), also squeeze the knowledge by preserving the most diverse features to avoid harming the performance.
More recently, Cogswell et al. (2016) proposed the DeCov approach to reduce over-fitting risk of a deep neural network model by reducing feature covariance. DeCov also agrees with increasing generalization ability of a model by pursuing feature diversity. This is consistent with our motivation. However, DeCov penalizes the covariance in an unsupervised fashion and cannot utilize extra available annotations, leading to insignificant performance improvement over vanilla models (Cogswell et al., 2016).
Using privileged information to learn better features during the training process is similar in spirit with our method. Both our proposed method and Lapin et al. (2014) introduce privileged information to assist the training process. However, almost all existing works (Lapin et al., 2014; Lopez-Paz et al., 2016; Sharmanska et al., 2014) are based on SVM+ which only focuses on training a better classifier and is not able to do the end-to-end training for better features.
Several works (Andrew et al., 2013; Srivastava & Salakhutdinov, 2012) about canonical correlation analysis (CCA) for CNNs provide a way to constrain feature diversity. However, the goal of CCA
is to find linear projections for two random vectors that are maximally correlated, which is different from ours.
It is also worth noting that simply adding a segmentation loss to an image classification neural network is not equivalent to a GoCNN model. This is because image segmentation requires each pixel within the target area to be activated and the others to stay silent for dense prediction, while GoCNN does not require each pixel within the target area to be activated. GoCNN is specifically designed for classification tasks, not for segmentation ones. Moreover, our proposed GoCNN supports learning from partial privileged information, while the CNN above needs a fully annotated training set.
3 MODEL DIVERSITY OF CONVOLUTIONAL NEURAL NETWORKS
Throughout the paper, we use f(·) to denote a convolutional function (or filter) and k to index the layers in a multi-layer network. We use c(k) to denote the total number of convolutional functions at the k-th layer and use i and j to index different functions, i.e., f (k)i (·) denotes the i-th convolutional function at the k-th layer of the network. The function f maps an input feature map to another new feature map. The height and the width of a feature map output at the layer k are denoted as h(k) and w(k) respectively. We consider a network model consisting of N layers in total.
Under a standard CNN architecture, the elements within the same feature map are produced by the same convolutional function f (k)i and thus they represent the same type of features across different locations. Therefore, encouraging the feature variance or diversity within a single feature map does not make sense. In this work, our target is to enhance the diversity among different convolutional functions. Here we first give a formal description of model diversity for an N -layer CNN.
Definition 1 (Model Diversity). Let f (k)i denote the i-th convolutional function at the k-th layer of a neural network model, and then the model diversity of the k-th layer is defined as
$\zeta^{(k)} \triangleq 1 - \frac{1}{\big(c^{(k)}\big)^{2}} \sum_{i,j=1}^{c^{(k)}} \mathrm{cor}\big(f^{(k)}_i, f^{(k)}_j\big). \qquad (1)$
Here the operator cor(·, ·) denotes the statistical correlation.
In other words, the inherent diversity of a network model that we are going to maximize is evaluated across all the convolutional functions within the same layer.
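As a concrete (and purely illustrative) way to estimate this quantity, one can treat each convolutional function's responses over a batch as samples and average the pairwise Pearson correlations between channels; the snippet below follows Eqn. (1) as written, using signed correlations.

```python
import torch

def model_diversity(feature_maps, eps=1e-8):
    """Estimate the layer diversity of Definition 1 from a batch of activations.

    feature_maps: (B, C, H, W) responses of the c^(k) convolutional functions at one layer.
    Each channel is flattened over batch and spatial positions, and the mean pairwise
    correlation is subtracted from 1, as in Eqn. (1).
    """
    B, C, H, W = feature_maps.shape
    x = feature_maps.permute(1, 0, 2, 3).reshape(C, -1)   # (C, B*H*W)
    x = x - x.mean(dim=1, keepdim=True)
    x = x / (x.norm(dim=1, keepdim=True) + eps)
    corr = x @ x.t()                                      # (C, C) correlation matrix
    return 1.0 - corr.sum() / (C * C)
```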
The most straightforward way to maximize the above diversity for each layer is to directly maximize the quantity ζ(k) while training the network. However, directly optimizing the diversity in (1) is quite involved due to the large number of combinations of convolutional functions. Thus, we propose to solve this problem by learning the convolutional functions in separate groups. Functions from different groups are uncorrelated with each other, so we do not need to consider their correlation. Suppose the convolutional functions at each layer are partitioned into m different groups, denoted as G = {G1, . . . , Gm}. Then, we instead maximize the following Group-wise Model Diversity.
Definition 2 (Group-wise Model Diversity). Given a pre-defined group partition set G = {G1, . . . , Gm} of convolutional functions at a specific layer, the group-wise model diversity of this layer is defined as
$\zeta^{(k)}_{g} \triangleq 1 - \frac{1}{\big(c^{(k)}\big)^{2}} \sum_{s,t=1}^{|G|} \; \sum_{i \in G_s,\, j \in G_t} \mathrm{cor}\big(f^{(k)}_i, f^{(k)}_j\big).$
Instead of directly optimizing the model diversity, we consider optimizing the group-wise model diversity by finding a set of orthogonal groups {G∗1, . . . , G∗m}, where convolutional functions within each group are uncorrelated with others within different groups. In the scenario of image representation learning, one typical example of such orthogonal groups is the foreground group and background group pair — partitioning the functions into two groups and letting them learn features from foreground and background contents respectively.
In this work, we use segmentation annotation as privileged information for finding orthogonal groups of convolutional functions G∗1, . . . , G ∗ m. In particular, we derive the foreground and background segregation from the privileged information for an image. Then we partition convolutional functions at a specific layer of a CNN model into foreground and background groups respectively, and train a GoCNN model to learn the foreground and background features separately. Details about the architecture of the GoCNN and the training procedure of GoCNN are given in the following section.
4 GROUP ORTHOGONAL CONVOLUTIONAL NEURAL NETWORKS
We introduce the group orthogonal constraint to maximize group-wise diversity among different groups of convolutional functions explicitly by constructing a group orthogonal convolutional neural network (GoCNN). Details on the architecture of GoCNN are shown in Figure 1. GoCNN is built upon a standard CNN architecture. The convolutional functions at the final convolution layer are explicitly divided into two groups: the foreground group which concentrates on learning the foreground feature and the background group which learns the background feature. The output features of these two groups are then aggregated by a fully connected layer.
In the following subsections, we give more details of the foreground and background group construction. After that, we describe how to combine these two components and build them into a unified network architecture — the GoCNN.
4.1 FOREGROUND AND BACKGROUND GROUPS
To learn convolutional functions that are specific for foreground content of an image, we propose the following two constraints for the foreground group of functions. The first constraint forces the functions to be learned from the foreground only and free of any contamination from the background, and the second constraint encourages the learned functions to be discriminative for image classification.
We learn features that lie only in the foreground by suppressing any contamination from the background. As aforementioned, here we use the object segmentation annotations (denoted as Mask) as the privileged information in the training phase to help identify the background features to which the foreground convolutional functions should not respond. The background contamination is extracted by an extractor applied to each feature map within the foreground group. In particular, we define an extractor ϕ(·, ·) as follows:
$\phi\big(f^{(k)}_i(x), \mathrm{Mask}\big) \triangleq f^{(k)}_i(x) \odot \mathrm{Mask}, \qquad (2)$
where $x$ denotes the raw input and $\odot$ denotes the element-wise multiplication. In the above operator, we use the background object mask Maskb to extract background features. Each element in Maskb is equal to one if the corresponding position lies on a background object and zero otherwise. Here, we assume the masks are already re-sized by interpolation to have compatible dimensionality with the output feature map $f^{(k)}_i(x)$, so that the element-wise multiplication is valid. The extracted background features are then suppressed by a regression loss defined as follows:
$\min_{\theta} \; \sum_i \big\| \phi\big(f^{(k)}_i(x;\theta), \mathrm{Mask}_b\big) \big\|_F. \qquad (3)$
Here θ parameterizes the convolution function f (k)i . Since the target value for this regression is zero, we also call it a suppression term. It will only suppress the response output by f (k)i at the locations outside the foreground.
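A hedged sketch of the extractor in Eqn. (2) and the suppression term in Eqn. (3): the mask is resized to the feature-map resolution, the masked-out responses are extracted by element-wise multiplication, and their per-channel Frobenius norms are summed. Batch averaging and the small epsilon are implementation choices not specified in the paper.

```python
import torch
import torch.nn.functional as F

def suppression_loss(feature_maps, mask):
    """Suppression term of Eqn. (3).

    feature_maps: (B, C, H, W) outputs of one group of convolutional functions.
    mask:         (B, 1, h, w) binary mask marking the region this group must NOT
                  respond to (Mask_b for the foreground group, Mask_f for the background group).
    """
    mask = F.interpolate(mask.float(), size=feature_maps.shape[-2:], mode='nearest')
    contamination = feature_maps * mask                                          # extractor phi of Eqn. (2)
    # Sum of per-channel Frobenius norms, averaged over the batch.
    per_channel = contamination.pow(2).sum(dim=(2, 3)).clamp_min(1e-12).sqrt()   # (B, C)
    return per_channel.sum(dim=1).mean()
```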
For the second constraint, i.e., encouraging the functions to learn discriminative features, we simply use the standard softmax classification loss to supervise the learning phase.
The role of the background group is complementary to the foreground one. It aims to learn convolutional functions that are specific to background contents only. Thus, the functions within the background group have the same suppression term as in Eqn. (3), in which Maskb is replaced with Maskf so that the learned features lie only in the background space. Maskf is simply computed as Maskf = 1 − Maskb. Also, a softmax linear classifier is attached during training to guarantee that these learned background functions are useful for predicting image categories.
4.2 ARCHITECTURE AND IMPLEMENTATION DETAILS OF THE GOCNN
In GoCNN, the size ratio of foreground group and background group is fixed to be 3:1 during training, since intuitively the foreground contents are much more informative than the background contents in classifying images. A single fully connected layer (or multiple layers depending on the basic CNN architecture) is used to unify the functional learning within different groups and combine features learned from different groups. It aggregates the information from different feature spaces and produces the final image category prediction. More details are given in Figure 1.
Because we are dealing with the classification problem, a main classifier with a standard classification loss function is adopted at the top layer of GoCNN. In our experiments, the standard softmax loss is used for single-label image classification and the logistic regression loss is used for multi-label image classification, e.g., for images from the PASCAL VOC dataset (Everingham et al., 2010).
During the testing stage, parts unrelated to the final main output will be removed, as shown in Figure 1 (b). Therefore, in terms of testing, neither extra parameters nor extra computational cost is introduced. The GoCNN is exactly the same as the adopted CNN in the testing phase.
In summary, an incoming training sample passes through all the layers up to the final convolution layer. Then the irrelevant features of each group (foreground or background) are filtered out by the privileged segmentation masks, and the filtered features flow into a suppressor (see Eqn. (3)). The output features of each group flow along two paths: one leads to the group-wise classifier, and the other leads to the main classifier. The three gradients from the suppressors, the group-wise classifiers and the main classifier are used for updating the network parameters.
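Putting the pieces together, the training losses for one batch might be combined as in the sketch below, which reuses the `suppression_loss` sketch given after Eqn. (3). The 3:1 foreground/background channel split follows the paper; the equal loss weights and the layer sizes are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GoCNNHead(nn.Module):
    """Splits the final conv features 3:1 into foreground / background groups
    and combines the main, group-wise, and suppression losses."""
    def __init__(self, channels=512, num_classes=1000):
        super().__init__()
        self.fg_channels = channels * 3 // 4
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fg_cls = nn.Linear(self.fg_channels, num_classes)
        self.bg_cls = nn.Linear(channels - self.fg_channels, num_classes)
        self.main_cls = nn.Linear(channels, num_classes)

    def forward(self, feats, fg_mask, labels):
        # fg_mask: float mask with 1s on foreground pixels, 0s on background.
        fg, bg = feats[:, :self.fg_channels], feats[:, self.fg_channels:]
        loss = F.cross_entropy(self.main_cls(self.pool(feats).flatten(1)), labels)    # main classifier
        loss = loss + F.cross_entropy(self.fg_cls(self.pool(fg).flatten(1)), labels)  # group classifiers
        loss = loss + F.cross_entropy(self.bg_cls(self.pool(bg).flatten(1)), labels)
        # suppression_loss(...) is the sketch given after Eqn. (3) above.
        loss = loss + suppression_loss(fg, 1 - fg_mask)   # fg group must ignore the background
        loss = loss + suppression_loss(bg, fg_mask)       # bg group must ignore the foreground
        return loss
```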
Applications with Incomplete Privileged Information Our proposed GoCNN can also be applied for semi-supervised learning. When only a small subset of images have the privileged seg-
mentation annotations in a dataset, we simply set the segmentations of images without annotations to Maskf = Maskb = 1, where 1 is the matrix with all of its elements being 1. In other words, for those images we disable both suppression terms (ref. Eqn. (3)) on the foreground and background parts as well as the extractors on the back-propagation path. By doing so, fully annotated training samples with privileged information supervise GoCNN to learn both discriminative and diverse features, while samples with only image tags guide GoCNN to learn only category-discriminative features.
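One way to implement this partial-privileged-information behavior is to carry a per-sample flag and zero out the suppression contribution of unannotated samples, as in the rough sketch below (a simplified per-sample norm is used here, and the masks are assumed to already match the feature-map resolution).

```python
import torch

def partial_suppression_loss(feature_maps, suppress_mask, has_annotation):
    """Apply the suppression term only to samples that carry segmentation annotations.

    feature_maps:   (B, C, H, W) responses of one group.
    suppress_mask:  (B, 1, H, W) region the group must not respond to.
    has_annotation: (B,) boolean flag; unannotated samples contribute no suppression
                    gradient and are effectively trained with the classification losses only.
    """
    per_sample = (feature_maps * suppress_mask).pow(2).sum(dim=(1, 2, 3)).clamp_min(1e-12).sqrt()
    per_sample = per_sample * has_annotation.float()
    num_annotated = has_annotation.float().sum().clamp_min(1.0)
    return per_sample.sum() / num_annotated
```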
5 EXPERIMENTS
5.1 EXPERIMENT SETTINGS AND IMPLEMENTATION DETAILS
Datasets We evaluate the performance of GoCNN in image classification on two benchmark datasets, i.e., the ImageNet (Deng et al., 2009) dataset and the Pascal VOC 2012 dataset (Everingham et al., 2010).
• ImageNet ImageNet contains 1,000 fine-grained classes with about 1,300 images per class and 1.2 million images in total, but without any image segmentation annotations. To collect privileged information, we randomly select 130 images from each class and manually annotate the object segmentation masks for them. Since our focus is on justifying the effectiveness of our proposed method rather than pushing the state of the art, we only collect privileged information for 10% of the data (overall 130k training images) to show the performance improvement brought by our model. We call the new dataset consisting of these segmented images ImageNet-0.1m. For evaluation, we use the original validation set of ImageNet, which contains 50,000 images. Note that neither our baselines nor the proposed GoCNN needs segmentation information in the testing phase.
• PASCAL VOC 2012 The PASCAL VOC 2012 dataset contains 11,530 images from 20 classes. For the classification task, there are 5,717 images for training and 5,823 images for validation. We use this dataset to further evaluate the generalization ability of different models, including GoCNN trained on ImageNet-0.1m: we pre-train the evaluated models on the ImageNet-0.1m dataset and fine-tune them using the logistic regression loss on the PASCAL VOC 2012 training set. We evaluate their performance on the validation set.
The Basic Architecture of GoCNN In our experiments, we use the ResNet (He et al., 2015) as the basic architecture to build GoCNN. Since the deepest ResNet contains 152 layers and would take several weeks to train, we choose a lighter architecture (ResNet-18 (He et al., 2015)) with 18 layers as our basic model for most cases. We also use the ResNet-152 (He et al., 2015) for experiments on the full ImageNet dataset. The final convolution layer gives a 7 × 7 output, which is pooled into a 1 × 1 feature map by average pooling. A fully connected layer is then added to perform linear classification. For single-class classification on the ImageNet dataset we use the standard softmax loss; when performing multi-label classification on PASCAL VOC, we use the logistic regression loss.
Training and Testing Strategy We use MXNet (Chen et al., 2015) to conduct model training and testing. The GoCNN weights are initialized as in He et al. (2015) and we train GoCNN from scratch. Images are resized with a shorter side randomly sampled within [256, 480] for scale augmentation and 224×224 crops are randomly sampled during training (He et al., 2015). We use SGD with base learning rate equal to 0.1 at the beginning and reduce the learning rate by a factor of 10 when the validation accuracy saturates. For the experiments on ResNet-18 we use single node with a minibatch size of 512. For the ResNet-152 we use 48 GPUs with mini-batch size of 32 for each GPU. Following He et al. (2015), we use a weight decay of 0.0001 and a momentum of 0.9 in the training.
We evaluate the performance of GoCNN on two different testing settings: the complete privileged information setting and the partial privileged information setting. We perform 10-crop testing (Krizhevsky et al., 2012) for the complete privileged information scenario, and do a single crop testing for the partial privileged information scenario for convenience.
Compared Baseline Models Our proposed GoCNN follows the Learning Using Privileged Information (LUPI) paradigm (Lapin et al., 2014), which exploits additional information to facilitate
learning but does not require extra information in testing. There are a few baseline models falling into the same paradigm that we can compare with. One is the SVM+ method (Pechyony & Vapnik, 2011) and the other is the standard model (i.e., the ResNet-18). We simply refer to ResNet-18 as the baseline when no confusion arises. In the experiments, we implement the SVM+ using the code provided by Pechyony & Vapnik (2011) with default parameter settings and a linear kernel. We follow the scheme described in Lapin et al. (2014) to train the SVM+ model. More concretely, we train multiple one-versus-rest SVM+ models upon the deep features extracted from both the entire images and the foreground regions (used as the privileged information). We use the average pooling over 10 crops of the feature maps before the FC layer as the deep features for training SVM+. It is worth noting that all of these models (including SVM+ and GoCNN) use a linear classifier and thus have the same number of parameters; more concretely, GoCNN does not require more parameters than SVM+ or the vanilla ResNet.
5.2 TRAINING MODELS WITH COMPLETE PRIVILEGED INFORMATION
In this subsection, we consider the scenario where every training sample has complete privileged segmentation information. Firstly, we evaluate the performance of our proposed GoCNN on the ImageNet-0.1m dataset. Table 1 summarizes the accuracy of different models. As can be seen from the results, given the complete privileged information, our proposed GoCNN performs much better than the compared models. The group orthogonal constraints successfully regularize the learned features to lie within the foreground and background spaces, and the trained GoCNN thus shows stronger generalization ability. It is also interesting (although not surprising) to observe that, when foreground features are combined with background features, the performance of GoCNN is further improved from 49.60% to 50.39% in terms of top-1 accuracy. One can observe that background information indeed benefits object recognition to some extent. To further investigate the contribution of each component within GoCNN to the final performance, we conduct another experiment and show the results in Table 2. In these experiments, we deliberately block gradient propagation from all components except the one being investigated during training, and add another setting for the baseline method in which the background is removed and only the foreground object is retained in each training sample, denoted Baseline-obj. Comparing the results of Full GoCNN across different classifiers, we can see that learning background features actually improves the overall performance. And when we compare the Fg classifier across Baseline-obj, Only Fg and Full GoCNN, we can see the importance of background information in training more robust and richer foreground features.
Secondly, to verify the effectiveness of learning features in two different groups with our proposed method, we visualize the maximum activation value within each group of feature maps of several testing images. The feature maps are generated by the final convolution layer with 384 × 384 resolution input testing images. Then, the final convolution layer gives 12 × 12 output maps. We aggregate feature maps within the same group into one feature map by max operation. As can be seen from Figure 2, foreground and background features are well separated and the result looks just like the semantic segmentation mask. Compared with the baseline model, more neurons are activated in our proposed method in the two orthogonal spaces. This indicates that more diverse and discriminative features are learned in our framework compared with the baseline method. Finally, we further evaluate the generalization ability of our proposed method on the PASCAL VOC dataset. It is well known that an object shares many common properties with others even if they are not from the same category. A well-performing CNN model should be able to learn robust features rather than just fit the training images. In this experiment, we fine-tune different models on the PASCAL VOC images to test whether the learned features are able to generalize well to another dataset. Note that
we add another convolution layer with a 1 × 1 kernel size and 512 outputs as an adaptive layer on all models. It is not necessary to add such a layer in networks without a residual structure (He et al., 2015). As can be seen from Table 3, our proposed network shows better results and higher average precision across all categories, which means our proposed GoCNN learns more representative and richer features that are easier to transfer from one domain to another.
5.3 TRAINING GOCNN WITH PARTIAL PRIVILEGED INFORMATION
In this subsection, we investigate the performance of different models when using only partial privileged information. The experiment is also conducted on the ImageNet-0.1m dataset. We evaluate the performance of our proposed GoCNN by varying the percentage of privileged information (i.e., the percentage of training images with segmentation annotations) from 20% to 100%.
The validation accuracies of GoCNN and the baseline model (i.e., the ResNet-18) are shown in Table 4. From the results, one can observe that the accuracy continuously increases with the percentage of privileged information until the percentage reaches 80%. The gain from increasing the percentage from 40% to 100% is only 0.71%, compared with 0.92% from increasing it from 20% to 40%. This is probably because the suppression losses are more effective than we expected; that is, with very little guidance from the suppression loss, the network is already able to separate foreground and background features and explore new features within each group.
To verify the effectiveness of GoCNN on a very large training dataset with a more complex CNN architecture, we conduct another experiment on the complete ImageNet-1k dataset with only 10% privileged information, using the 152-layer ResNet as our basic model. As can be seen from Table 5, our proposed GoCNN achieves a 21.8% top-1 error while the vanilla ResNet-152 has a 23.0% top-1 error. Such a performance boost is consistent with the results shown in Table 4, which again confirms the effectiveness of GoCNN.
6 DISCUSSIONS
Based on our experimental results, we can also provide answers to the following two important questions.
Does background information indeed help object recognition for deep learning methods? Based on our experiments, we give a positive answer. Intuitively, background information may provide some “hints” for object recognition. However, although several works (Song et al., 2011; Russakovsky et al., 2012) have proven the usefulness of background information when using handcrafted features, few works have studied the effectiveness of background information in deep learning methods for object recognition tasks. Based on the experimental results shown in Table 2, both the foreground classification accuracy and the overall classification accuracy can be further boosted with our proposed framework. This means that background deep features can also provide useful information for foreground object recognition.
Can a more precise annotation with richer information, e.g., segmentation annotation, assist the image classification training process? The answer is clearly yes. In fact, in recent years, several works have explored how object detection and segmentation can benefit each other (Dai et al., 2015; Hariharan et al., 2014). However, none of the existing works has studied how image segmentation information can help train a better deep classification network. In this work, by treating the segmentation annotations as privileged information, we are the first to demonstrate a possible way to utilize segmentation annotations to assist image classification training.
7 CONCLUSION
We proposed a group orthogonal neural network for image classification which encourages learning more diverse feature representations. Privileged information is utilized to train the proposed GoCNN model. To the best of our knowledge, we are the first to explore how to use image segmentation as privileged information to assist CNN training for image classification. | 1. What is the main contribution of the paper, and how does it aim to improve ConvNet training?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ability to replace ensembles and reduce computational costs?
3. Do you have any concerns about the experimental design and methodology, especially regarding the introduction of background feature suppression?
4. How does the reviewer assess the clarity and focus of the paper's content, including the abstract and introduction?
5. Are there any minor issues or suggestions for improvement in the paper, such as errors in equations or figures? | Review | Review
This paper proposes a modification to ConvNet training so that the feature activations before the linear classifier are divided into groups such that all pairs of features across all pairs of groups are encouraged to have low statistical correlation. Instead of discovering the groups automatically, the work proposes to use supervision, which they call privileged information, to assign features to groups in a hand-coded fashion. The developed method is applied to image classification.
Pros:
- The paper is clear and easy to follow
- The experimental results seem to show some benefit from the proposed approach
Cons:
(1) The paper proposes one core idea (group orthogonality w/ privileged information), but then introduces background feature suppression without much motivation and without careful experimentation
(2) No comparison with an ensemble
(3) Full experiments on ImageNet under the "partial privileged information" setting would be more impactful
This paper is promising and I would be willing to accept an improved version. However, the current version lacks focus and clean experiments.
First, the abstract and intro focus on the need to replace ensembles with a single model that has diverse (ensemble like) features. The hope is that such a model will have the same boost in accuracy, while requiring fewer FLOPs and less memory. Based on this introduction, I expect the rest of the paper to focus on this point. But it does not; there are no experimental results on ensembles and no experimental evidence that the proposed approach in able to avoid the speed and memory cost of ensembles while also retaining the accuracy benefit.
Second, the technical contribution of the paper is presented as group orthogonality (GO). However, in Sec 4.1 the idea of background feature suppression is introduced. While some motivation for it is given, the motivation does not tie into GO. GO does not require bg suppression and the introduction of it seems ad hoc. Moreover, the experiments never decouple GO and bg suppression, so we are unable to understand how GO works on its own. This is a critical experimental flaw in my reading.
Minor suggestions / comments:
- The equation in definition 2 has an incorrect normalizing factor (1/c^(k)^2)
- Figure 1 seems to have incorrect mask placements. The top mask is one that will mask out the background and only allow the fg to pass |
ICLR | Title
Training Group Orthogonal Neural Networks with Privileged Information
Abstract
Learning rich and diverse feature representations is always desirable for deep convolutional neural networks (CNNs). Besides, when auxiliary annotations are available for specific data, simply ignoring them would be a great waste. In this paper, we incorporate these auxiliary annotations as privileged information and propose a novel CNN model that maximizes its inherent diversity, so that the model can learn better feature representations with stronger generalization ability. More specifically, we propose a group orthogonal convolutional neural network (GoCNN) that learns features from foreground and background in an orthogonal way by exploiting privileged information for optimization, which automatically emphasizes feature diversity within a single model. Experiments on two benchmark datasets, ImageNet and PASCAL VOC, clearly demonstrate the effectiveness and strong generalization ability of our proposed GoCNN models.
N/A
Learning rich and diverse feature representations is always desirable for deep convolutional neural networks (CNNs). Besides, when auxiliary annotations are available for specific data, simply ignoring them would be a great waste. In this paper, we incorporate these auxiliary annotations as privileged information and propose a novel CNN model that maximizes its inherent diversity, so that the model can learn better feature representations with stronger generalization ability. More specifically, we propose a group orthogonal convolutional neural network (GoCNN) that learns features from foreground and background in an orthogonal way by exploiting privileged information for optimization, which automatically emphasizes feature diversity within a single model. Experiments on two benchmark datasets, ImageNet and PASCAL VOC, clearly demonstrate the effectiveness and strong generalization ability of our proposed GoCNN models.
1 INTRODUCTION
Deep convolutional neural networks (CNNs) have brought a series of breakthroughs in image classification tasks (He et al., 2015; Girshick, 2015; Zheng et al., 2015). Many recent works (Simonyan & Zisserman, 2014; He et al., 2015; Krizhevsky et al., 2012) have observed that CNNs with different architectures or even different weight initializations may learn slightly different feature representations. Combining these heterogeneous models can provide richer and more diverse feature representations, which can further boost the final performance. This observation motivates us to directly pursue feature diversity within a single model in this work.
Besides, many existing datasets (Everingham et al., 2010; Deng et al., 2009; Xiao et al., 2010) provide more than one type of annotation. For example, PASCAL VOC (Everingham et al., 2010) provides image-level tags, object bounding boxes, and image segmentation masks; the ImageNet dataset (Deng et al., 2009) provides image-level tags and bounding boxes for a small portion of images. Using only the image-level tags to train an image classification model would be a great waste of the other annotation resources. Therefore, in this work, we investigate whether these auxiliary annotations can also help a CNN model learn richer and more diverse feature representations.
In particular, we take advantage of this extra annotated information when training a CNN model, aiming to obtain a single CNN model with sufficient inherent diversity, with the expectation that the model learns more diverse feature representations and offers stronger generalization ability for image classification than vanilla CNNs. We therefore propose a group orthogonal convolutional neural network (GoCNN) model that is able to exploit this extra annotated information as privileged information. The idea is to learn different groups of convolutional functions which are “orthogonal” to the ones in other groups. Here by “orthogonal”, we mean there is no significant correlation among the produced features. By “privileged information”, we mean auxiliary information that is used only during the training phase. Optimizing orthogonality among convolutional functions reduces redundancy and increases diversity within the architecture.
Properly defining the groups of convolutional functions in the GoCNN is not an easy task. In this work, we propose to exploit available privileged information for identifying the proper groups. Specifically, in the context of image classification, object segmentation annotations which are (partially) available in several public datasets give richer information.
In addition, the background contents are usually independent of the foreground objects within an image. Thus, splitting convolutional functions into different groups and enforcing them to learn features from the foreground and background separately can help construct orthogonal groups with small correlations. Motivated by this, we introduce the GoCNN architecture, which learns discriminative features from foreground and background separately, where the foreground-background segregation is provided by the privileged segmentation annotations used for training GoCNN. In this way, the inherent diversity of the GoCNN can be explicitly enhanced. Moreover, benefiting from pursuing group orthogonality, the learned convolutional functions within GoCNN are demonstrated to be foreground- and background-diagnostic even when extracting features from new images in the testing phase.
To the best of our knowledge, this work is the first to explore a principled way to train a deep neural network with desired inherent diversity and the first to investigate how to use the segmentation privileged information to assist image classification within a deep learning architecture. Experiments on ImageNet and PASCAL VOC clearly demonstrate GoCNN improves upon vanilla CNN models significantly, in terms of classification accuracy.
As a by-product of implementing GoCNN, we also provide positive answers to the following two prominent questions about image classification: (1) Does background information indeed help object recognition in deep learning? (2) Can a more precise annotation with richer information, e.g., segmentation annotation, assist the image classification training process non-trivially?
2 RELATED WORK
Learning rich and diverse feature representations is always desired while training CNNs for gaining stronger generalization ability. However, most existing works mainly focus on introducing handcrafted cost functions to implicitly pursue diversity (Tang, 2013), or modifying activation functions to increase model non-linearity (Jin et al., 2015) or constructing a more complex CNN architecture (Simonyan & Zisserman, 2014; He et al., 2015; Krizhevsky et al., 2012). Methods that explicitly encourage inherent diversity of CNN models are still rare so far.
Knowledge distillation (Hinton et al., 2015) can be seen as an effective way to learn more discriminative and diverse feature representations. The distillation process compresses knowledge and thus encourages a weak model to learn more diverse and discriminative features. However, knowledge distillation works in two stages which are isolated from each other and has to rely on pre-training a complicated teacher network model. This may introduce undesired computation overhead. In contrast, our proposed approach can learn a diverse network in a single stage without requiring an extra network model. Similar works, e.g. the Diversity Networks (Sra & Hosseini), also squeeze the knowledge by preserving the most diverse features to avoid harming the performance.
More recently, Cogswell et al. (2016) proposed the DeCov approach to reduce over-fitting risk of a deep neural network model by reducing feature covariance. DeCov also agrees with increasing generalization ability of a model by pursuing feature diversity. This is consistent with our motivation. However, DeCov penalizes the covariance in an unsupervised fashion and cannot utilize extra available annotations, leading to insignificant performance improvement over vanilla models (Cogswell et al., 2016).
Using privileged information to learn better features during the training process is similar in spirit with our method. Both our proposed method and Lapin et al. (2014) introduce privileged information to assist the training process. However, almost all existing works (Lapin et al., 2014; Lopez-Paz et al., 2016; Sharmanska et al., 2014) are based on SVM+ which only focuses on training a better classifier and is not able to do the end-to-end training for better features.
Several works (Andrew et al., 2013; Srivastava & Salakhutdinov, 2012) about canonical correlation analysis (CCA) for CNNs provide a way to constrain feature diversity. However, the goal of CCA
is to find linear projections for two random vectors that are maximally correlated, which is different from ours.
It is also worth noting that simply adding a segmentation loss to an image classification neural network is not equivalent to a GoCNN model. This is because image segmentation requires each pixel within the target area to be activated and the others to stay silent for dense prediction, while GoCNN does not require each pixel within the target area to be activated. GoCNN is specifically designed for classification tasks, not for segmentation ones. Moreover, our proposed GoCNN supports learning from partial privileged information, while the CNN above needs a fully annotated training set.
3 MODEL DIVERSITY OF CONVOLUTIONAL NEURAL NETWORKS
Throughout the paper, we use f(·) to denote a convolutional function (or filter) and k to index the layers in a multi-layer network. We use c(k) to denote the total number of convolutional functions at the k-th layer and use i and j to index different functions, i.e., f (k)i (·) denotes the i-th convolutional function at the k-th layer of the network. The function f maps an input feature map to another new feature map. The height and the width of a feature map output at the layer k are denoted as h(k) and w(k) respectively. We consider a network model consisting of N layers in total.
Under a standard CNN architecture, the elements within the same feature map are produced by the same convolutional function f (k)i and thus they represent the same type of features across different locations. Therefore, encouraging the feature variance or diversity within a single feature map does not make sense. In this work, our target is to enhance the diversity among different convolutional functions. Here we first give a formal description of model diversity for an N -layer CNN.
Definition 1 (Model Diversity). Let f (k)i denote the i-th convolutional function at the k-th layer of a neural network model, and then the model diversity of the k-th layer is defined as
$\zeta^{(k)} \triangleq 1 - \frac{1}{\big(c^{(k)}\big)^{2}} \sum_{i,j=1}^{c^{(k)}} \mathrm{cor}\big(f^{(k)}_i, f^{(k)}_j\big). \qquad (1)$
Here the operator cor(·, ·) denotes the statistical correlation.
In other words, the inherent diversity of a network model that we are going to maximize is evaluated across all the convolutional functions within the same layer.
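As an illustration, the quantity in Definition 1 can be estimated empirically from sampled filter responses. The following sketch (not the authors' code; function and variable names are ours) computes it with NumPy from the flattened response maps of the filters at one layer.

```python
import numpy as np

def model_diversity(responses):
    """responses: (c, d) array holding the flattened response maps of the c
    convolutional functions at one layer, e.g. collected over a batch."""
    c = responses.shape[0]
    # c x c matrix of pairwise Pearson correlations between filter responses.
    corr = np.corrcoef(responses)
    # zeta^(k) = 1 - (1 / c^2) * sum_{i,j} cor(f_i, f_j), as in Eq. (1).
    return 1.0 - corr.sum() / (c ** 2)

# Toy usage: 64 filters, responses flattened to 1024 values each.
rng = np.random.default_rng(0)
print(model_diversity(rng.standard_normal((64, 1024))))
```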
The most straightforward way to maximize the above diversity for each layer is to directly maximize the quantity ζ^(k) during network training. However, optimizing the hard diversity in (1) is quite involved due to the large number of pairwise combinations of convolutional functions. Thus, we propose to solve this problem by learning the convolutional functions in separate groups. Functions from different groups are uncorrelated with each other, so we do not need to consider their correlation. Suppose the convolutional functions at each layer are partitioned into m different groups, denoted as G = {G_1, . . . , G_m}. Then, we instead maximize the following group-wise model diversity.
Definition 2 (Group-wise Model Diversity). Given a pre-defined group partition set G = {G_1, . . . , G_m} of the convolutional functions at a specific layer, the group-wise model diversity of this layer is defined as

$$\zeta_g^{(k)} \triangleq 1 - \frac{1}{\big(c^{(k)}\big)^2} \sum_{s,t=1}^{|G|} \sum_{i \in G_s,\, j \in G_t} \mathrm{cor}\big(f_i^{(k)}, f_j^{(k)}\big).$$
Instead of directly optimizing the model diversity, we consider optimizing the group-wise model diversity by finding a set of orthogonal groups {G*_1, . . . , G*_m}, where the convolutional functions within each group are uncorrelated with those in other groups. In the scenario of image representation learning, one typical example of such orthogonal groups is the foreground group and background group pair: the functions are partitioned into two groups that learn features from foreground and background contents respectively.
In this work, we use segmentation annotation as privileged information for finding the orthogonal groups of convolutional functions G*_1, . . . , G*_m. In particular, we derive the foreground and background segregation from the privileged information for an image. We then partition the convolutional functions at a specific layer of a CNN model into foreground and background groups, and train a GoCNN model to learn the foreground and background features separately. Details about the architecture of GoCNN and its training procedure are given in the following section.
4 GROUP ORTHOGONAL CONVOLUTIONAL NEURAL NETWORKS
We introduce the group orthogonal constraint to maximize group-wise diversity among different groups of convolutional functions explicitly by constructing a group orthogonal convolutional neural network (GoCNN). Details on the architecture of GoCNN are shown in Figure 1. GoCNN is built upon a standard CNN architecture. The convolutional functions at the final convolution layer are explicitly divided into two groups: the foreground group which concentrates on learning the foreground feature and the background group which learns the background feature. The output features of these two groups are then aggregated by a fully connected layer.
In the following subsections, we give more details of the construction of the foreground and background groups. After that, we describe how to combine these two components into a unified network architecture, the GoCNN.
4.1 FOREGROUND AND BACKGROUND GROUPS
To learn convolutional functions that are specific to the foreground content of an image, we propose the following two constraints for the foreground group of functions. The first constraint forces the functions to be learned from the foreground only, free of any contamination from the background, and the second constraint encourages the learned functions to be discriminative for image classification.
We learn features that only lie in the foreground by suppressing any contamination from the background. As mentioned above, we use the object segmentation annotations (denoted as Mask) as privileged information in the training phase to help identify the background regions to which the foreground convolutional functions should not respond. The background contamination is extracted by an extractor applied to each feature map within the foreground group. In particular, we define an extractor ϕ(·, ·) as follows:
$$\varphi\big(f_i^{(k)}(x), \mathrm{Mask}\big) \triangleq f_i^{(k)}(x) \odot \mathrm{Mask}, \tag{2}$$
where x denotes the raw input and ⊙ denotes element-wise multiplication. In the above operator, we use the background object mask Mask_b to extract background features. Each element in Mask_b is equal to one if the corresponding position lies on a background object and zero otherwise. Here, we assume the masks are already re-sized by interpolation to have dimensionality compatible with the output feature map f_i^(k)(x), so that the element-wise multiplication is valid. The extracted background features are then suppressed by a regression loss defined as follows:
$$\min_{\theta} \sum_i \big\| \varphi\big(f_i^{(k)}(x; \theta), \mathrm{Mask}_b\big) \big\|_F. \tag{3}$$
Here θ parameterizes the convolution function f_i^(k). Since the target value for this regression is zero, we also call it a suppression term. It only suppresses the responses output by f_i^(k) at locations outside the foreground.
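A minimal sketch of the extractor in Eq. (2) and the suppression term in Eq. (3) is given below, written in PyTorch for illustration (the paper's experiments use MXNet); the bilinear mask resizing and all names are our assumptions.

```python
import torch
import torch.nn.functional as F

def suppression_loss(feature_maps, mask):
    """feature_maps: (B, C, H, W) responses of one group of convolutional functions.
    mask: (B, 1, h, w) binary mask of the region where the group must stay silent."""
    # Resize the mask to the spatial size of the feature maps, then apply the
    # extractor phi of Eq. (2): element-wise multiplication with the mask.
    mask = F.interpolate(mask.float(), size=feature_maps.shape[-2:],
                         mode='bilinear', align_corners=False)
    contamination = feature_maps * mask
    # Sum of Frobenius norms over the maps in the group; the target value is zero.
    return contamination.flatten(2).norm(dim=-1).sum()

# Toy usage: foreground-group maps suppressed on the background mask.
fmap = torch.randn(4, 96, 7, 7)
mask_b = (torch.rand(4, 1, 224, 224) > 0.5)
print(suppression_loss(fmap, mask_b))
```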
For the second constraint, i.e., encouraging the functions to learn discriminative features, we simply use the standard softmax classification loss to supervise the learning phase.
The role of the background group is complementary to the foreground one. It aims to learn convolutional functions that are specific to background contents only. Thus, the functions within the background group have the same suppression term as in Eqn. (3), in which Mask_b is replaced with Mask_f so that the learned features lie only in the background space. Mask_f is simply computed as Mask_f = 1 − Mask_b. A softmax linear classifier is also attached during training to guarantee that the learned background functions are useful for predicting image categories.
4.2 ARCHITECTURE AND IMPLEMENTATION DETAILS OF THE GOCNN
In GoCNN, the size ratio of the foreground group to the background group is fixed at 3:1 during training, since the foreground content is intuitively much more informative than the background content for classifying images. A single fully connected layer (or multiple layers, depending on the basic CNN architecture) is used to unify the function learning within different groups and to combine the features learned from different groups. It aggregates the information from different feature spaces and produces the final image category prediction. More details are given in Figure 1.
Because we are dealing with a classification problem, a main classifier with a standard classification loss function is adopted at the top layer of GoCNN. In our experiments, the standard softmax loss is used for single-label image classification and the logistic regression loss is used for multi-label image classification, e.g., on images from the Pascal VOC dataset (Everingham et al., 2010).
During the testing stage, parts unrelated to the final main output will be removed, as shown in Figure 1 (b). Therefore, in terms of testing, neither extra parameters nor extra computational cost is introduced. The GoCNN is exactly the same as the adopted CNN in the testing phase.
In summary, an incoming training sample passes through all the layers up to the final convolution layer. The irrelevant features for each group (foreground or background) are then filtered out by the privileged segmentation masks, and the filtered features flow into a suppressor (see Eqn. (3)). The output features from each group flow along two paths: one leads to the group-wise classifier, and the other leads to the main classifier. The three gradients from the suppressors, the group-wise classifiers and the main classifier are used for updating the network parameters.
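The flow described above could be wired up roughly as follows. This is an illustrative sketch of the training losses only (our reading of the text, reusing the suppression_loss sketch above); the 3:1 split ratio, the module names and the equal weighting of the three losses are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GoCNNHead(nn.Module):
    def __init__(self, channels=512, num_classes=1000, fg_ratio=0.75):
        super().__init__()
        self.fg_ch = int(channels * fg_ratio)      # 3:1 foreground/background split
        self.fg_cls = nn.Linear(self.fg_ch, num_classes)
        self.bg_cls = nn.Linear(channels - self.fg_ch, num_classes)
        self.main_cls = nn.Linear(channels, num_classes)

    def forward(self, feats, mask_b, mask_f, target):
        # Split the final convolution maps into the two orthogonal groups.
        fg, bg = feats[:, :self.fg_ch], feats[:, self.fg_ch:]
        # Suppression terms: fg maps must be silent on the background, bg on the foreground.
        loss_sup = suppression_loss(fg, mask_b) + suppression_loss(bg, mask_f)
        pooled_fg = F.adaptive_avg_pool2d(fg, 1).flatten(1)
        pooled_bg = F.adaptive_avg_pool2d(bg, 1).flatten(1)
        # Group-wise classifiers keep each group discriminative on its own.
        loss_grp = (F.cross_entropy(self.fg_cls(pooled_fg), target)
                    + F.cross_entropy(self.bg_cls(pooled_bg), target))
        # Main classifier aggregates the two feature spaces.
        pooled = torch.cat([pooled_fg, pooled_bg], dim=1)
        loss_main = F.cross_entropy(self.main_cls(pooled), target)
        return loss_main + loss_grp + loss_sup
```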
Applications with Incomplete Privileged Information Our proposed GoCNN can also be applied to semi-supervised learning. When only a small subset of images in a dataset has privileged segmentation annotations, we simply set the segmentations of images without annotations to Mask_f = Mask_b = 1, where 1 is the matrix with all of its elements being 1. In other words, we disable both suppression terms (ref. Eqn. (3)) on the foreground and background parts as well as the extractors on the back-propagation path. By doing so, fully annotated training samples with privileged information supervise GoCNN to learn both discriminative and diverse features, while samples with only image tags guide GoCNN to learn category-discriminative features.
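A possible way to implement this rule is to gate the suppression terms per sample, as in the sketch below; this flag-based gating is our interpretation of the text, not the paper's code.

```python
import torch

def semi_supervised_loss(loss_cls, loss_sup_per_sample, has_mask):
    """loss_cls: scalar classification loss (main + group-wise classifiers).
    loss_sup_per_sample: (B,) suppression terms of Eq. (3), one per sample.
    has_mask: (B,) float flags, 1 if the sample carries a segmentation mask, else 0."""
    # Samples without privileged information contribute no suppression gradient.
    return loss_cls + (has_mask * loss_sup_per_sample).sum()

# Toy usage on a batch of four samples, two of which are annotated.
print(semi_supervised_loss(torch.tensor(2.3),
                           torch.rand(4),
                           torch.tensor([1., 0., 1., 0.])))
```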
5 EXPERIMENTS
5.1 EXPERIMENT SETTINGS AND IMPLEMENTATION DETAILS
Datasets We evaluate the performance of GoCNN in image classification on two benchmark datasets, i.e., the ImageNet (Deng et al., 2009) dataset and the Pascal VOC 2012 dataset (Everingham et al., 2010).
• ImageNet ImageNet contains 1,000 fine-grained classes with about 1,300 images for each class and 1.2 million images in total, but without any image segmentation annotations. To collect privileged information, we randomly select 130 images from each class and manually annotate the object segmentation masks for them. Since our focus is on justifying the effectiveness of our proposed method rather than pushing the state-of-the-art, we only collect privileged information for 10% of the data (130k training images overall) to show the performance improvement brought by our model. We call the new dataset consisting of these segmented images ImageNet-0.1m. For evaluation, we use the original validation set of ImageNet, which contains 50,000 images. Note that neither our baselines nor the proposed GoCNN needs segmentation information in the testing phase.
• PASCAL VOC 2012 The PASCAL VOC 2012 dataset contains 11,530 images from 20 classes. For the classification task, there are 5,717 images for training and 5,823 images for validation. We use this dataset to further evaluate the generalization ability of different models, including GoCNN, trained on ImageNet-0.1m: we pre-train the evaluated models on the ImageNet-0.1m dataset and fine-tune them using the logistic regression loss on the PASCAL VOC 2012 training set. We evaluate their performance on the validation set.
The Basic Architecture of GoCNN In our experiments, we use ResNet (He et al., 2015) as the basic architecture to build GoCNN. Since the deepest ResNet contains 152 layers and would take several weeks to train, we choose a light version (ResNet-18 (He et al., 2015)) with 18 layers as our basic model for most cases. We also use ResNet-152 (He et al., 2015) for experiments on the full ImageNet dataset. The final convolution layer gives a 7 × 7 output which is pooled into a 1 × 1 feature map by average pooling. A fully connected layer is then added to perform linear classification. The loss function used for single-class classification on the ImageNet dataset is the standard softmax loss. When performing multi-label classification on PASCAL VOC, we use the logistic regression loss.
Training and Testing Strategy We use MXNet (Chen et al., 2015) to conduct model training and testing. The GoCNN weights are initialized as in He et al. (2015) and we train GoCNN from scratch. Images are resized with the shorter side randomly sampled within [256, 480] for scale augmentation, and 224 × 224 crops are randomly sampled during training (He et al., 2015). We use SGD with a base learning rate of 0.1 at the beginning and reduce the learning rate by a factor of 10 when the validation accuracy saturates. For the experiments on ResNet-18 we use a single node with a mini-batch size of 512. For ResNet-152 we use 48 GPUs with a mini-batch size of 32 per GPU. Following He et al. (2015), we use a weight decay of 0.0001 and a momentum of 0.9 in the training.
We evaluate the performance of GoCNN on two different testing settings: the complete privileged information setting and the partial privileged information setting. We perform 10-crop testing (Krizhevsky et al., 2012) for the complete privileged information scenario, and do a single crop testing for the partial privileged information scenario for convenience.
Compared Baseline Models Our proposed GoCNN follows the Learning Using Privileged Information (LUPI) paradigm (Lapin et al., 2014), which exploits additional information to facilitate
learning but does not require extra information in testing. There are a few baseline models falling into the same paradigm that we can compare with. One is the SVM+ method (Pechyony & Vapnik, 2011) and the other is the standard model (i.e., ResNet-18). We simply refer to ResNet-18 as the baseline when no confusion arises. In the experiments, we implement SVM+ using the code provided by Pechyony & Vapnik (2011) with default parameter settings and a linear kernel. We follow the scheme described in Lapin et al. (2014) to train the SVM+ model. More concretely, we train multiple one-versus-rest SVM+ models upon the deep features extracted from both the entire images and the foreground regions (used as the privileged information). We use average pooling over 10 crops on the feature maps before the FC layer as the deep feature for training SVM+. It is worth noting that all of these models (including SVM+ and GoCNN) use a linear classifier and thus have the same number of parameters; more concretely, GoCNN does not require more parameters than SVM+ or the vanilla ResNet.
5.2 TRAINING MODELS WITH COMPLETE PRIVILEGED INFORMATION
In this subsection, we consider the scenario where every training sample has complete privileged segmentation information. Firstly, we evaluate the performance of our proposed GoCNN on the ImageNet-0.1m dataset. Table 1 summarizes the accuracy of the different models. As can be seen from the results, given complete privileged information, our proposed GoCNN performs much better than the compared models. The group orthogonal constraints successfully regularize the learned features to lie within the foreground and background, and the trained GoCNN thus shows a stronger generalization ability. It is also interesting (although not surprising) to observe that, when foreground features are combined with background features, the performance of GoCNN is further improved from 49.60% to 50.39% in terms of top-1 accuracy. One can observe that background information indeed benefits object recognition to some extent. To further investigate the contribution of each component within GoCNN to the final performance, we conduct another experiment and show the results in Table 2. In this experiment, we purposely prevent gradient propagation from all components except the one being investigated during training, and add another setting for the baseline method in which the background is removed and only the foreground object is retained in each training sample, denoted Baseline-obj. Comparing the results of Full GoCNN across different classifiers, we can see that learning background features actually improves the overall performance. And when we compare the Fg classifier across Baseline-obj, Only Fg and Full GoCNN, we can see the importance of background information in training more robust and richer foreground features.
Secondly, to verify the effectiveness of learning features in two different groups with our proposed method, we visualize the maximum activation value within each group of feature maps for several test images. The feature maps are generated by the final convolution layer; with 384 × 384 input test images, the final convolution layer gives 12 × 12 output maps. We aggregate the feature maps within the same group into one feature map by a max operation. As can be seen from Figure 2, foreground and background features are well separated and the result resembles a semantic segmentation mask. Compared with the baseline model, more neurons are activated by our proposed method in the two orthogonal spaces. This indicates that more diverse and discriminative features are learned in our framework compared with the baseline method. Finally, we further evaluate the generalization ability of our proposed method on the PASCAL VOC dataset. It is well known that an object shares many common properties with others even if they are not from the same category. A well-performing CNN model should be able to learn robust features rather than just fit the training images. In this experiment, we fine-tune different models on the PASCAL VOC images to test whether the learned features generalize well to another dataset. Note that
we add another convolution layer with a 1 × 1 kernel size and 512 outputs as an adaptive layer on all models. It is not necessary to add such a layer in networks without a residual structure (He et al., 2015). As can be seen from Table 3, our proposed network shows better results and higher average precision across all categories, which means our proposed GoCNN learns more representative and richer features that are easier to transfer from one domain to another.
5.3 TRAINING GOCNN WITH PARTIAL PRIVILEGED INFORMATION
In this subsection, we investigate the performance of different models with only using partial privileged information. The experiment is also conducted on the ImageNet-0.1m dataset. We evaluate the performance of our proposed GoCNN by varying the percentage of privileged information (i.e., percentage of training images with segmentation annotations) from 20% to 100%.
The validation accuracies of GoCNN and the baseline model (i.e., ResNet-18) are shown in Table 4. From the results, one can observe that the accuracy continuously increases with the percentage of privileged information until it reaches 80%. The gain from increasing the percentage from 40% to 100% is only 0.71%, compared with 0.92% when increasing it from 20% to 40%. This is probably because the suppression losses are more effective than we expected; that is, with very little guidance from the suppression loss, the network is already able to separate foreground and background features and explore new features within each group.
To verify the effectiveness of GoCNN on a very large training dataset with a more complex CNN architecture, we conducted another experiment on the complete ImageNet-1k dataset with only 10% privileged information, using the 152-layer ResNet as our basic model. As can be seen from Table 5, our proposed GoCNN achieves a 21.8% top-1 error while the vanilla ResNet-152 has a 23.0% top-1 error. This performance boost is consistent with the results shown in Table 4, which again confirms the effectiveness of GoCNN.
6 DISCUSSIONS
Based on our experimental results, we can also provide answers to the following two important questions.
Does background information indeed help object recognition for deep learning methods? Based on our experiments, we give a positive answer. Intuitively, background information may provide some “hints” for object recognition. However, although several works (Song et al., 2011; Russakovsky et al., 2012) have proven the usefulness of background information for hand-crafted features, few works have studied the effectiveness of background information in deep learning methods for object recognition tasks. Based on the experimental results shown in Table 2, both the foreground classification accuracy and the overall classification accuracy can be further boosted with our proposed framework. This means that background deep features can also provide useful information for foreground object recognition.
Can a more precise annotation with richer information, e.g., segmentation annotation, assist the image classification training process? The answer is clearly yes. In fact, in recent years, several works have explored how object detection and segmentation can benefit each other (Dai et al., 2015; Hariharan et al., 2014). However, none of the existing works has studied how image segmentation information can help train a better deep neural network for classification. In this work, by treating segmentation annotations as privileged information, we demonstrate for the first time a possible way to utilize segmentation annotations to assist image classification training.
7 CONCLUSION
We proposed a group orthogonal neural network for image classification which encourages learning more diverse feature representations. Privileged information is utilized to train the proposed GoCNN model. To the best of our knowledge, we are the first to explore how to use image segmentation as privileged information to assist CNN training for image classification. | 1. What is the main contribution of the paper regarding decorrelated neurons?
2. How does the proposed approach differ from previous works, such as masking features during training and testing?
3. What are the strengths and weaknesses of the paper's demonstration of improvement in classification on a mid-scale classification example?
4. How does the reviewer assess the generalizability of the approach across different datasets and networks?
5. Are there any concerns or suggestions for additional experiments to support the paper's claims? | Review | Review
The starting point of this work is the understanding that by having decorrelated neurons (e.g. neurons that only fire on background, or only on foreground regions) one provides independent pieces of information to the subsequent decisions. As such one gives "complementary viewpoints" of the input to the subsequent layers, which can be thought of as performing ensembling/expert combination within the model, rather than using an ensemble of networks.
For this, the authors propose a sensible method to decorrelate the activations of intermediate neurons, with the aim of delivering complementary inputs to the final classification layers: they split intermediate neurons to a "foreground" and a "background" subset, and append side-losses that force them to be zero on background and foreground pixels respectively.
They demonstrate that this can improve classification on a mid-scale classification example (a fraction of imagenet, and a ResNet with 18, rather than 150 layers), when compared to a "vanilla" baseline that does not use these losses.
I enjoyed reading the paper because the idea is simple, smart, and seems to be effective.
But there are a few concerns:
-firstly, the way of doing this seems very particular to vision. In vision one knows that masking the features (during both training and testing) helps, e.g. https://arxiv.org/abs/1412.1283
To be fair, this is not truly the same thing as what the authors are doing, because in the reference above the masking is computed during both training and testing, while here it is used as a method of decorrelating neurons at training time.
But I understand that to the broader iclr community this may seem as "yet another vision-specific trick", while to the vision community one would ask why not just use the mask during both training and testing, since one can compute it in the first place.
More importantly, the evaluation is quite limited; the authors use only one network (18 rather than 150 layers) and only part of imagenet for testing. They do get a substantial boost, but it is not clear if this will transfer to more data/layers.
The authors could at least have also tried CIFAR-10/100. I would expect to see some more results during the rebuttal period. |
ICLR | Title
Inhibition-augmented ConvNets
Abstract
Convolutional Networks (ConvNets) suffer from insufficient robustness to common corruptions and perturbations of the input, unseen during training. We address this problem by including a form of response inhibition in the processing of the early layers of existing ConvNets and assess the resulting representation power and generalization on corrupted inputs. The considered inhibition mechanism consists of a non-linear computation that is inspired by the push-pull inhibition exhibited by some neurons in the visual system of the brain. In practice, each convolutional filter (push) in the early layers of conventional ConvNets is coupled with another filter (pull), which responds to the preferred pattern of the corresponding push filter but of opposite contrast. The rectified responses of the push and pull filter pairs are then combined by a linear function. This results in a representation that suppresses responses to noisy patterns (e.g. texture, Gaussian, shot, distortion, and others) and accentuates responses to preferred patterns. We deploy the layer into existing architectures, (Wide-)ResNet and DenseNet, and propose new residual and dense push-pull layers. We demonstrate that ConvNets that embed this inhibition into the initial layers are able to learn representations that are robust to several types of input corruptions. We validated the approach on the ImageNet and CIFAR data sets and their corrupted and perturbed versions, ImageNet-C/P and CIFAR-C/P. It turns out that the push-pull inhibition enhances the overall robustness and generalization of ConvNets to corrupted and perturbed input data. Besides the improvement in generalization, notable is the fact that ConvNets with push-pull inhibition have a sparser representation than conventional ones without inhibition. The code and trained models will be made available.
1 INTRODUCTION
After AlexNet was proposed, several other more sophisticated Convolutional Networks (ConvNets), e.g. VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), ResNet (He et al., 2015), DenseNet (Huang et al., 2017), consecutively achieved lower error rates on classification benchmarks such as CIFAR (Krizhevsky, 2009), SVHN (Netzer et al., 2011) and ImageNet (Krizhevsky et al., 2012). The witnessed reduction of the classification error, however, was not accompanied by a similar increase of generalization and robustness to common corruptions or perturbations in the test images. When test images contain artefacts that are not present in the training data (e.g. shot noise, JPEG compression, blur), the performance of SOTA networks, like ResNet or DenseNet, degrades relatively more than that of AlexNet (Hendrycks & Dietterich, 2019).
Robustness of DeepNets and ConvNets is gaining attention from researchers in machine learning and computer vision. One line of analysis concerns adversarial attacks, which consist of very small, visually imperceptible perturbations of the input signals that confuse the classification model (Akhtar & Mian, 2018). Several adversarial attacks were proposed based on knowledge of the classification models (Goodfellow et al., 2015) or on iterative methods (Kurakin et al., 2016; Madry et al., 2018). Class-specific and universal black-box attacks (Moosavi-Dezfooli et al., 2017) were also proposed. As new types of adversarial perturbations are designed, defense algorithms need to be developed (Lu et al., 2017; Metzen et al., 2017).
In contrast to adversarial perturbations, common corruptions and perturbations are modifications of the input signals due to typical artefacts generated by sensor noise, illumination changes, transformations determined by changes of camera perspective (e.g. rotations or elastic transformations), quality loss due to compression, among others. These can be considered as average cases of ad-
versarial attacks and are very common in computer vision tasks. In this study, we focus on the robustness of ConvNets to common corruptions and perturbations and demonstrate how the use of inhibition-augmented filters increases the robustness of existing ConvNet models. We use a layer that implements a local-response suppression mechanism (Strisciuglio et al., 2020), named push-pull, to replace some convolutional layers in existing ConvNets. The design of this layer was inspired by neurophysiological evidence of the push-pull inhibition exhibited by some neurons in the early part of the visual system of the brain (Hirsch et al., 1998). Lauritzen & Miller (2003) stated that ‘push-pull inhibition [...] acts to sharpen spatial frequency tuning, [...] and increase the stability of cortical activity’. These neurons have inhibitory interneurons with receptive fields of opposite polarity. The interneurons suppress the responses of the associated neurons to noisy local patterns; that is, they pull the push responses. From an engineering point of view, the push-pull inhibition can be considered as a band-pass filter. A computational model of the push-pull inhibition was introduced in image processing operators for contour and line detection (Azzopardi et al., 2014; Strisciuglio et al., 2019) that are robust to various types of noise. Early results on augmenting ConvNets with push-pull inhibition were reported in (Strisciuglio et al., 2020), where only the first convolutional layer of existing networks was replaced and preliminary experiments on the MNIST and CIFAR images were reported.
We deploy a push-pull layer in the state-of-the-art (wide-)ResNet and DenseNet architectures by replacing the convolutional layer at the entry of the networks and all convolutional layers inside the first residual or dense block. The number of layers and parameters of the modified networks remains the same as in their original counterparts. We train several models with and without the push-pull layers, using the images from the ImageNet and CIFAR training sets, and test them on the ImageNet-C/P and CIFAR-10-C/P benchmark test sets (Hendrycks & Dietterich, 2019), which contain images with several types of corruption and perturbation. Furthermore, we combine the push-pull layer with other techniques to improve the robustness performance of ConvNets, namely the data augmentation techniques AutoAugment (Cubuk et al., 2018) and cutout (Devries & Taylor, 2017), and low-pass anti-aliasing filters (Zhang, 2019), of which we provide more details in Section 2. We show how combined data- and architecture-related actions jointly have a positive effect in further improving the robustness and generalization performance of existing models.
Our contributions are twofold: a) new residual and dense layers in which push-pull filters replace the original convolutions, b) a thorough analysis of the impact of the push-pull inhibition on the robustness of existing models, and of its combination with other strategies for model robustness.
The rest of the paper is organized as follows. We discuss related works in Section 2 and provide details of the push-pull layer and modified residual and dense layers in Section 3. We report the experiments and results in Section 4 and Section 5. Finally, we draw conclusions in Section 6.
2 RELATED WORKS
One common approach to increase the generalization of existing models is data augmentation. It provides a mechanism to increase the robustness to certain transformations included in the training process. One type of data augmentation, namely added Gaussian noise, does not guarantee robustness to other types of noise, e.g. shot noise (Zheng et al., 2016). In Vasiljevic et al. (2016), for instance, it was demonstrated that using one blurring type for data augmentation during training does not result in a model that is robust to other types of blurring. Moreover, in Geirhos et al. (2018), it was observed that heavy data augmentation can cause underfitting. Devries & Taylor (2017) proposed a technique named cutout that masks out random square patches of the training images and increases network performance. Although cutout reduces the classification error on benchmark test sets, we noticed that it does not improve robustness to common corruptions and perturbations. Lopes et al. (2019) proposed PatchGaussian, which combines the regularization abilities of cutout with data augmentation using Gaussian noise. Randomly located patches are selected from the input images and corrupted with Gaussian noise. Autoaugment (Cubuk et al., 2018; Lim et al., 2019) is a very effective strategy to learn an optimal set of augmentation policies from training data. Together with increased classification rate on benchmark test sets, it was shown to improve the robustness of the concerned models to various corruptions and perturbations (Yin et al., 2019). Recently, AugMix was proposed. It uses diverse augmentations and a Jensen-Shannon divergence consistency loss function to train models that have better generalization and uncertainty estimates (Hendrycks et al., 2020).
Data augmentation is in some cases very powerful. However, these techniques work around architectural or structural weaknesses of ConvNets. For example, using high-frequency noise for data augmentation (e.g. Gaussian noise) makes the network focus on low-frequency features that are discriminative instead of becoming explicitly robust to high-frequency corruptions (Yin et al., 2019). In order to improve the intrinsic robustness of ConvNets, it would be preferable to deploy architectural elements that are able to better detect the features of interest even when the input data is corrupted. For instance, common downsampling methods, like max-pooling and strided convolutions, introduce aliasing as they ignore the Nyquist sampling theorem. Zhang (2019) used a low-pass filter before down-sampling and showed that it improves classification results on ImageNet and robustness to common corruptions of the input images. A layer of Gabor filters was recently shown to increase network robustness to adversarial attacks (Pérez et al., 2020).
3 PUSH-PULL INHIBITION IN CONVNETS
A ConvNet consists of L layers, each of which performs a non-linear transformation H(x) of the input x. In some architectures, H(x) is a composite function of convolutions, batch normalization (BN), rectified linear units (ReLU) and pooling. For example, a residual layer contains two convolutions, batch normalizations and ReLUs, and one identity connection. We use a composite layer, called push-pull, that consists of two convolutions with kernels of different size and two ReLU functions.
3.1 PUSH-PULL LAYER
The response of the push-pull layer (Figure 1) is defined as the linear combination of the rectified responses of two convolutions of the input signal, one with an excitatory (push) kernel k(·) and one with an inhibitory (pull) kernel −k̂(·) (Strisciuglio et al., 2020). The responses of the push and pull components are rectified by means of the ReLU function, which introduces non-linearity, and combined: the pull response is subtracted from (i.e. it suppresses) the push response. In the forward step of the learning algorithm, the weights of each pull kernel are computed by upsampling and negating the weights of the corresponding push kernel. Since the pull kernels are directly inferred from the push kernels, the number of trainable parameters does not change. In the backward step, however, the implementation of the push-pull layer ensures that the gradient is back-
propagated to both push and pull components. While only the weights of the push kernels are updated, the effect of the gradient back-propagation incorporates the pull responses. We compute the response R of a push-pull layer to an input x as:
$$R(x) = \Theta(x \ast k) - \alpha\,\Theta\big(x \ast (-\hat{k}_{h\uparrow})\big) \tag{1}$$
where Θ(·) is the ReLU function, k is the push kernel and k̂_{h↑} is its upsampled (by a factor h) and negated version, i.e. the pull kernel. We set h = 2, that is, the pull kernel has double the width and height of the push kernel. The parameter α is a weighting factor for the inhibition response, which we set to 1 (i.e. the same relevance as the push response). The decision to make the pull kernels larger than the push kernels is motivated by neurophysiological findings (Li et al., 2012).
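A minimal PyTorch sketch of a layer implementing Eq. (1) is given below. It is not the authors' released implementation; the bilinear upsampling of the push kernel and the output cropping are our assumptions about details the text leaves open.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PushPullConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, alpha=1.0, h=2):
        super().__init__()
        # Only the push kernel holds trainable parameters.
        self.push = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        self.alpha, self.h = alpha, h

    def forward(self, x):
        push = F.relu(self.push(x))
        # Pull kernel: upsampled (factor h) and negated copy of the push kernel,
        # derived on the fly so gradients reach the push weights through both paths.
        k_up = F.interpolate(self.push.weight, scale_factor=self.h,
                             mode='bilinear', align_corners=False)
        pull = F.relu(F.conv2d(x, -k_up, padding=k_up.shape[-1] // 2))
        # Crop to align the two response maps if the padding left extra rows/columns.
        pull = pull[..., :push.shape[-2], :push.shape[-1]]
        return push - self.alpha * pull

# Usage: drop-in replacement for the first 3x3 convolution of a small ResNet.
layer = PushPullConv2d(3, 64)
print(layer(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```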
The combined effect of the push and pull kernels results in an increased robustness to the detection of features of interest in corrupted input signals. Figure 2 illustrates an example of the response maps of a conventional convolution filter and those of a push-pull filter on images corrupted with noise. The response maps of the push-pull filter on corrupted images correlate better with that of the clean image in comparison to the corresponding response maps of the convolution filter. For this illustration we selected a convolutional and a push kernel that have the same shape and whose parameters are learned in the first layer of a ResNet model, without and with push-pull layers
respectively, on the CIFAR data set. This allows to better and more directly demonstrate the effect that the pull inhibitory component has on the quality of the computed feature maps. The spatial frequency selectivity of the push-pull filter is sharper than that of a conventional convolutional layer, as it focuses on the frequency of interest and suppresses those outside a certain learned sub-band. In this way, noise induced artefacts can be filtered out and clearer feature maps are computed for further processing in subsequent layers of the network.
3.2 PUSH-PULL IN RESIDUAL AND DENSE LAYERS
We extended the implementation of the residual and dense composite layers, originally proposed by He et al. (2015) and Huang et al. (2017), respectively. In both cases, we substituted the convolutional layers with push-pull layers, which resulted in the push-pull residual layer and the push-pull dense layer.
The push-pull inhibition is a phenomenon exhibited by certain neurons in the early visual system. Evidence of it has been observed in area V1 of the visual cortex (Lauritzen & Miller, 2003), which can be compared to the early layers of a ConvNet. Thus, motivated by these neuro-scientific findings, we deploy the modified push-pull residual and dense layers in the first block of the considered architectures. In principle, the push-pull layer may substitute a convolutional layer at any depth in a ConvNet. In practice, however, we found that deploying push-pull layers at deeper layers considerably changes the training dynamics of the model without improving results, and requires changing the nominal hyperparameters used to train the original network.
4 EXPERIMENTS
In the following, we refer as ResNet-L to a residual network with L layers (He et al., 2015) and as DenseNet-L-k to a densely connected network with L layers and growing factor k (Huang et al., 2017). The notation ‘-pp’ in the model name indicates the use of the push-pull layer that replaces the first convolutional layer. We use the suffix ‘-b1’ to denote that the concerned model deploys push-pull residual or dense layers in the first block of the network as well.
4.1 DATA SETS
We trained several models on the CIFAR and ImageNet data sets, and tested them on the original test sets and on those provided with the CIFAR-C/P and ImageNet-C/P benchmarks (Hendrycks & Dietterich, 2019). The CIFAR-C and ImageNet-C data sets are composed of 75 test sets of corrupted images, generated by applying 15 types of corruption at 5 levels of severity to the original images. The corruptions are arranged in four categories: noise (Gaussian, shot and impulse noise), blur (defocus, glass, motion and zoom blur), weather (snow, frost, fog, brightness) and digital distortions (contrast, elastic transformation, pixelate, jpeg compression). The perturbations in CIFAR-P and ImageNet-P have a temporal character. Starting from the corrupted images in CIFAR-C and ImageNet-C, sequences of 30 frames were generated by applying consecutive small perturbations. For each image in the original CIFAR and ImageNet validation sets, ten types of perturbed sequences
were created (Gaussian and shot noise, motion and zoom blur, snow, brightness, translate, rotate, tilt, scale). More details about the -C and -P data sets are reported by Hendrycks & Dietterich (2019).
4.2 EVALUATION
We applied the evaluation protocol proposed by Hendrycks & Dietterich (2019). Besides the classification error on CIFAR and ImageNet, we evaluate the corruption error CE, i.e. the classification error achieved on the CIFAR-C and ImageNet-C sets, and the flip probability FP, that is the probability that the outcome of the classification changes between consecutive frames, on the sequences in the CIFAR-P and ImageNet-P sets. We compute the mean and relative corruption errors mCE and rCE and the normalized flip probability, i.e. the flip rate FR. For details about these metrics, we refer the reader to Appendix B and to the paper of Hendrycks & Dietterich (2019).
4.3 EXPERIMENTS
CIFAR. We trained several ResNet and DenseNet models and their versions with the push-pull layer, using the images from the CIFAR training set to which we applied light data augmentation, consisting only of center-cropping and random horizontal flipping (Lee et al., 2015). For both (wide-)ResNet- and DenseNet-based models we optimized the parameters with stochastic gradient descent (SGD), Nesterov momentum of 0.9 and weight decay of 10^-4. We trained ResNet architectures for 200 epochs, with a batch size of 128 and an initial learning rate of 0.1, which we decreased by a factor of 10 at epochs 80, 120 and 160. We trained DenseNet models for 350 epochs with a batch size of 64 and an initial learning rate of 0.1, which we decreased by a factor of 10 at epochs 150, 225 and 300. These hyperparameters are the same as those used to train the original ResNet and DenseNet models. The use of the push-pull layer in the first block of existing architectures does not require changing the hyperparameters used to train the original networks.
Furthermore, we experimented with enhancing the considered architectures with the anti-aliased sampling layers proposed by Zhang (2019) for improved shift-invariance. We compared the performance of the push-pull layer and the anti-aliased sampling layer, and show that their combination contributes to a further improvement of the robustness of ConvNets. We experimented with anti-aliasing filters of size 2×2, 3×3 and 5×5. We also trained WideResNet models (Zagoruyko & Komodakis, 2016) with and without the push-pull layer, using different data augmentations, namely cutout (Devries & Taylor, 2017), AutoAugment (Cubuk et al., 2018), and cutout plus AutoAugment. We show that the push-pull layer, although it already provides the network with intrinsic robustness to corruptions, can be combined with data augmentation to achieve further robustness.
ImageNet. We considered ResNet-18 and ResNet-50 models in which we deployed the push-pull layer as a replacement of the first convolutional layer and of the convolutions inside the first residual block. In the case of ResNet-50, we replaced only the middle convolution of the bottleneck layers. We trained the networks using SGD optimization with Nesterov momentum of 0.9 and weight decay of 5 · 10^-4 for 90 epochs, with an initial learning rate of 0.1 that we decreased by a factor of 10 after 30, 60 and 85 epochs. We used a batch size of 256. For comparison purposes, we trained ResNet-18 and ResNet-50 models augmented with the anti-aliased sampling layers proposed by Zhang (2019) using the same hyperparameters and learning rate schedule that we employed for training the push-pull networks. For ResNet-50 with anti-aliasing, due to GPU memory limitations, we reduced the batch size to 128.
5 RESULTS AND DISCUSSION
5.1 CIFAR
Effect of the push-pull inhibition layer. In Table 1, we report the results of ResNet and DenseNet networks on the CIFAR and CIFAR-C/P test sets. The lower the numbers the better the performance. We tested architectures with different number of layers (and growing factors, for DenseNet). For each architecture, we trained the original model with convolutional layers and two models with the push-pull layer, one (-pp) with the push-pull layer in the first layer only and the other (-b1) with the push-pull layer replacing all convolutions in the first block of the network as well. The
measurements mCE, rCE and mFR (see Appendix B) are normalized with respect to the results of the baseline AlexNet.
The push-pull layer contributes to a substantial improvement of the robustness of the networks to corruptions and perturbations of the input images. Its effect, especially when it is deployed in the first block of existing ConvNets, is noticeable, with a reduction of the mean corruption error mCE by more than 10% in some cases. The improvement in robustness trades off against a generally small reduction of the classification accuracy on the original (clean) test data. The presence of the push-pull layer has an important effect in reducing the relative corruption error rCE, i.e. the average gap between the classification error on clean data (on which the model is trained) and corrupted data. It indicates that models with push-pull inhibition are better at generalizing to data with heavy corruptions. In some cases, the relative error is decreased by more than 30%. The flip rate (mFR), computed on the CIFAR-P sequences, also improved for all models with push-pull layers.
Networks with the push-pull layers generally show an improvement of robustness to corruptions with high-frequency components. For instance, ResNet20-pp and ResNet20-b1 reduced the CE by 10% to 25% on noise (Gaussian, shot, impulse), up to 18% on glass blur, and between 15% and 20% on pixelate and jpeg compression corruptions with respect to that of the original ResNet20. Marginal improvements or lower robustness are obtained on low-frequency corruptions, such as defocus, motion and zoom blur (+2/6% of CE), snow, frost and fog (between −4% and +4% of CE). For detailed results per corruption and perturbation type, we refer the reader to Appendix C.
Inhibition and anti-aliasing layers. In order to further improve the robustness of existing ConvNets, we show that the push-pull inhibition can be used in combination with other techniques. We couple it with the anti-aliased sampling layer proposed by Zhang (2019), which was shown to improve robustness to some image corruptions in addition to reinforcing the shift-invariance property of ConvNets. It consists of a low-pass filter inserted before the pooling or strided convolution operations. We used low-pass filters of different sizes, namely 2×2 (Rect-2), 3×3 (Tri-3) and 5×5 (Bin-5). Figure 3 illustrates the results achieved by (Wide)ResNet models extended with the push-pull inhibition and anti-aliased sampling layers. In many cases the combined effects of the push-pull and anti-aliasing layers lowered either the corruption error CE or the flip probability FP. The Rect-2 filter, in particular, is effective on all models (with and without the push-pull layer), unlike the Tri-3 and Bin-5 filters, which have larger sizes and in some cases worsen the robustness of the networks. This is due to the relatively small sizes of the images in the
CIFAR data set with respect to the low-pass filters, which overly smooth the intermediate response maps in the ConvNets. Models with anti-aliasing filters achieved less reduction of corruption error with respect to inhibition-augmented ConvNets: up to 10% on noise and around 2% on motion blur, but perform better (> 6%) than networks with inhibition on the translate perturbation.
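For reference, an anti-aliased downsampling step in the spirit of Zhang (2019) can be sketched as below (a hedged illustration using the 3×3 Tri-3 kernel, not the original code): blur each channel with a fixed low-pass kernel, then subsample.

```python
import torch
import torch.nn.functional as F

def blur_pool(x, stride=2):
    # "Tri-3" low-pass kernel: outer product of [1, 2, 1] / 4, which sums to 1.
    k1 = torch.tensor([1., 2., 1.]) / 4.0
    k = torch.outer(k1, k1)
    c = x.shape[1]
    weight = k.repeat(c, 1, 1, 1)                # one depthwise filter per channel
    return F.conv2d(x, weight, stride=stride, padding=1, groups=c)

print(blur_pool(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 16, 16])
```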
Inhibition and data augmentation. We show that the embedding of the push-pull layer within existing architectures can be combined with data augmentation techniques to further improve the model robustness to different corruptions. In Table 2, we report the robustness results (CE and FR) of WideResNet models (with and without push-pull inhibition) trained with data augmentation techniques. We performed experiments with AutoAugment, cutout and a combination of them. The cutout augmentation does not increase the robustness of the models to input corruptions, while AutoAugment contributes to a noticeable reduction of the corruption error. This is in line with the findings by Yin et al. (2019). In all cases, inhibition layers contribute to further increase the robustness of networks trained with data augmentation. The best CE and FR values are obtained by models that deploy the push-pull layer in the first layer and in the first residual block, and are trained using AutoAugment plus cutout augmentations.
5.2 IMAGENET
In Table 3, we show the results achieved by ResNet models on the ImageNet validation set and on ImageNet-C/P benchmarks. The mCE and FR values are normalized with respect to the results of the baseline models ResNet-18 and ResNet-50, respectively.
On one hand, we observed a general decrease of performance of fine-tuned ResNet-18 models with push-pull and anti-aliasing layers on the ImageNet-C data set. On the other hand, the push-pull layer improves the robustness of ResNet-50 on the corruptions in ImageNet-C and on the perturbations in ImageNet-P. Results of the ResNet models extended with anti-aliasing layers decrease on the ImageNet-C data set with respect to those of the original ResNet model, while showing improved robustness to the perturbation of the ImageNet-P data set. Also for the ImageNet-C/P benchmarks, we noticed that the improvements of robustness achieved by the models with push-pull inhibition are directed towards high-frequency corruptions. Nonetheless, the complementary robustness shown by ResNet-50 architectures with push-pull inhibition and anti-aliasing layers suggest that they can be jointly deployed in existing models, as we also observed in the CIFAR experiments.
5.3 DISCUSSION
The push-pull inhibition component that we implemented in ConvNets allows for a sharper detection of features of interest when the input signal is corrupted. Neuroscience studies found that, while neurons that exhibit push-pull inhibition achieve activations to preferred stimuli similar to those of neurons without such inhibition, the suppression of their activations in noisy areas results in a more pronounced distinction between patterns of interest and others, and in a sparser representation (Li et al., 2012). These may be considered two desirable properties in machine learning models. In Figure 4, we show the histograms of the weights learned in convolutional and push-pull layers of several models. The push-pull layer acts as a regularizer and allows generally sparser representations to be learned. We conjecture that one effect of the push-pull layer is the learning of smoother discriminant mapping functions due to sparser representations. This contributes to the increased generalization shown on corrupted data (Srivastava et al., 2014). While the resulting models may fit slightly less well than models without push-pull inhibition on training data, they generalize better to unknown corruptions and perturbations.
The push-pull layer contributes to the improvement of the robustness of a particular architecture to input corruptions, with a negligible reduction of classification accuracy on the original data. When classification systems are deployed in real-world tasks, where the acquisition of data is subject to corruption due to sensor uncertainty/deterioration or changing environments, it is desirable to provide them with mechanisms that can effectively deal with such degradation. From a computational cost point of view, during the training stage the pull kernels are derived at each iteration by upsampling and negating the corresponding push kernels, which slows down the training process somewhat. On an Nvidia V100 GPU, one training iteration with a batch of 256 ImageNet images takes 0.78, 0.85 and 0.95 seconds for ResNet50, ResNet50-pp and ResNet50-b1, respectively. In the testing stage, the pull kernels are loaded only once when the model is initiated. Extra computations are due only to the processing of the pull component. When the push-pull layer is used only in the first layer of the network or in the first block, the difference in computation is negligible. For inference, the processing takes 0.66, 0.67 and 0.63 seconds, respectively. The lower test time of ResNet50-b1 is attributable to the sparser representation learned in the push-pull layer.
6 CONCLUSIONS
We demonstrated that the inclusion of a response inhibition mechanism, inspired by the push-pull neurons of the visual system of the brain, into existing ConvNets improves their robustness to corruptions and perturbations of the input that are not present in the training data. The improvement is mainly attributable to the filters in the push-pull layer that are less sensitive to noise, and have a regularization effect that makes the network learn a sparser representation. We carried out experiments on the CIFAR-C/P and ImageNet-C/P data sets and the results that we achieved (more than 10% of reduction of the error on corrupted and perturbed data, and 1% of reduction of the corruption error for fine-tuned networks) demonstrate the effectiveness of the push-pull inhibition layer.
A PUSH-PULL RESIDUAL AND DENSE LAYERS
In Figure 5, we show the structure of the composite residual and dense layers augmented with the push-pull inhibition. The architecture of the layers is the same of the original residual and dense layers, with the difference that all convolutions are replaced by push-pull filters.
B PERFORMANCE METRICS
We denote by E^M the classification error of a model M on the original clean (unperturbed) test set. Let us denote by E^M_{c,s} the corruption error achieved by the model M on images with corruption c ∈ C and severity level s, and by E^M_c the average error achieved across all severity levels of a given corruption c.
We compute the mean corruption error mCE as the average of the normalized errors achieved on all corruptions and severity levels. We use the error E^{baseline}_{c,s} achieved by a baseline network on the set of data with corruption c and severity s as a normalization factor for the corruption error:

$$\mathrm{mCE}^M = \frac{1}{|C|} \sum_{c=1}^{|C|} \frac{\sum_{s=1}^{5} E^M_{c,s}}{\sum_{s=1}^{5} E^{baseline}_{c,s}} \tag{2}$$
As proposed by Hendrycks & Dietterich (2019), we also evaluate the degradation of the classifier performance on corrupted data with respect to the results achieved on clean data and compute the relative mean corruption error rCE. It measures the relative gap between the error on clean and corrupted data and it is computed as:
$$\mathrm{rCE}^M = \frac{1}{|C|} \sum_{c=1}^{|C|} \frac{\sum_{s=1}^{5} \big( E^M_{c,s} - E^M \big)}{\sum_{s=1}^{5} \big( E^{baseline}_{c,s} - E^{baseline} \big)} \tag{3}$$

As a measure of performance on the CIFAR-P and ImageNet-P test sets, we compute the flip probability FP. It measures the probability that the outcome of the classification changes between consecutive frames of a sequence S due to a perturbation p. Formally, it is defined as:

$$\mathrm{FP}^M_p = \mathbb{P}_{x \sim S}\big(f(x_j) \neq f(x_{j-1})\big) \tag{4}$$

where x_j is the j-th frame of the perturbation sequence. Similarly to the mCE, the flip rate FR^M_p = FP^M_p / FP^{baseline}_p is the normalized version of the FP.
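For illustration, the metrics above could be computed as in the following sketch (our own helper functions, assuming the per-corruption, per-severity errors and the per-frame predictions have already been collected).

```python
import numpy as np

def mean_corruption_error(err_model, err_baseline):
    """err_*: {corruption: [error at severity 1..5]} -> mCE of Eq. (2)."""
    ratios = [np.sum(err_model[c]) / np.sum(err_baseline[c]) for c in err_model]
    return float(np.mean(ratios))

def relative_corruption_error(err_model, clean_model, err_baseline, clean_baseline):
    """rCE of Eq. (3): corrupted-minus-clean error gap, normalized by the baseline."""
    ratios = [np.sum(np.array(err_model[c]) - clean_model)
              / np.sum(np.array(err_baseline[c]) - clean_baseline)
              for c in err_model]
    return float(np.mean(ratios))

def flip_probability(predictions):
    """predictions: (num_sequences, num_frames) predicted labels -> FP of Eq. (4)."""
    flips = predictions[:, 1:] != predictions[:, :-1]
    return float(flips.mean())
```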
C DETAILED RESULTS
C.1 RESULTS PER CORRUPTION TYPE
In Table 4, we show the detailed results achieved on the corruption sets of the CIFAR-C data set by networks with and without push-pull inhibition. We also report the results achieved by networks that deploy the anti-aliased sampling filters of Zhang (2019). The push-pull filters are robust to high-frequency corruptions and can be effectively combined with anti-aliasing filters.
In Table 5, we report detailed results per corruption type achieved by WideResNet models for which the convolutions in the first layer and in the first residual block were replaced by push-pull filters, and that are trained using different types of data augmentation, namely the cutout (Devries & Taylor,
2017) and AutoAugment (Cubuk et al., 2018) techniques. The use of the push-pull filters can be combined with data augmentation to further improve the robustness of existing models.
In Table 6, we report the detailed classification error per corruption type achieved on ImageNet-C by models that deploy the push-pull layers, in comparison with those achieved by models that use the anti-aliased sampling module proposed by Zhang (2019). The push-pull layers provide robustness to high-frequency corruptions, which is more evident in the case of the ResNet-50 models.
C.2 RESULTS PER PERTURBATION TYPE
We report in Table 7 the detailed performance results, in terms of flip probability, per perturbation type achieved on the CIFAR-P data set by ResNet models of different depth that deploy the push-pull layers, the anti-aliasing layers, or a combination of them. The flip probability measures the likelihood that a model changes its predictions on consecutive frames of a video to which an incremental corruption is applied. The push-pull inhibition layers contribute to a substantial improvement of stability against high-frequency components, and benefit from the effect of the anti-aliasing filters on the translate perturbation.
In Table 8 we report detailed flip probability results per perturbation type achieved by WideResNet models that deploy push-pull filters in the first layer and in the first residual block to replace the original convolutions, and that are trained using different types of data augmentation, namely the cutout (Devries & Taylor, 2017) and AutoAugment (Cubuk et al., 2018) techniques. The push-pull inhibition and the different data augmentation techniques provide complementary contributions to the improvement of the model robustness and stability to perturbations. This demonstrates that the push-pull layer can be deployed in existing architectures and can be combined with other approaches to further improve the classification performance.
In Table 9 we report the detailed flip probability results achieved by ResNet models augmented with push-pull inhibition filters in comparison to those achieved by models that deploy anti-aliasing layers. The push-pull filters, especially in the case of ResNet-50, contribute to the improvement of the stability of the network prediction on consecutively perturbed frames and are particularly effective on high-frequency corruptions. | 1. What is the main contribution of the paper regarding adversarial robustness?
2. What are the strengths and weaknesses of the proposed CNN architecture?
3. How does the reviewer assess the improvement of the current work compared to prior art?
4. What is the relationship between the proposed architecture and Strisciuglio et al. 2020?
5. What are the reviewer's concerns regarding the comparison with state-of-the-art works?
6. Are there any recent baselines that could be used for comparison?
7. Could the authors provide more explanation on the logic behind selecting a larger push kernel?
8. Could the authors clarify which dataset was used in figure-3 caption? | Review | Review
This paper proposes a CNN architecture for addressing the problem of adversarial robustness. This architecture consists of complementary (named push-pull) filters and is claimed to suppress the responses to noise patterns and consequently make the network more robust against common natural image perturbations. Experiments on CIFAR10 and IMAGENET datasets are presented.
My main concern is regarding the comparison with the state-of-the-art and the added value in comparison to Strisciuglio et al. 2020. While the proposed architecture seems to be effective at improving the robustness against natural image perturbations, it’s not clear where these improvements stand with respect to prior art. While many of the prior works were concerned with image augmentations, there are a number of works that focus on architectural proposals like [1] and [2]. It seems necessary to see those comparisons to better assess the value of the current work. Some recent baselines could be [3] and [4], which have shown the additive effect of many architectural and image augmentation methods.
Regarding the comparison to Strisciuglio et al. 2020, it seems to me that this paper has already studied the effect of the proposed architecture on the robust accuracy of the network on MNIST and CIFAR10. So the authors need to clarify in more detail what the contribution of the current paper is in comparison to this paper.
Other comments:
please clarify in the figure-3 caption which dataset was used
please explain the logic behind selecting a larger push kernels and its relation to the neurophysiological observations from Li et al. 2012
[1] Richard Zhang. Making convolutional networks shift-invariant again. In ICML, 2019.
[2] Xiang Li, Wenhai Wang, Xiaolin Hu, and Jian Yang. Selective kernel networks. 2019.
[3] Hendrycks, Dan, et al. "The many faces of robustness: A critical analysis of out-of-distribution generalization." arXiv preprint arXiv:2006.16241 (2020).
[4] Lee, Jungkyu, Taeryun Won, and Kiho Hong. "Compounding the performance improvements of assembled techniques in a convolutional neural network." arXiv preprint arXiv:2001.06268(2020). |